Seems like I really like this C99 idiom. No reason not to generalize it
to snprintf(). Introduce mp_tprintf(), which basically applies this
idiom to snprintf(). This macro looks like it returns a string that was
allocated
with alloca() on the caller site, except it's portable C99/C11. (And
unlike alloca(), the result is valid only within block scope.)
Use it in 2 places in the vo_opengl code. But it has the potential to
make a whole bunch of weird-looking code look slightly nicer.
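
For reference, a minimal sketch of the idiom (close to, but not
necessarily identical to, the actual implementation):

    #include <stdarg.h>
    #include <stdio.h>

    // The C99 compound literal (char[max]){0} provides block-scoped
    // storage, so the macro can "return" a string without heap
    // allocation or alloca().
    #define mp_tprintf(max, fmt, ...) \
        mp_tprintf_buf((char[max]){0}, (max), (fmt), __VA_ARGS__)

    static char *mp_tprintf_buf(char *buf, size_t size, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(buf, size, fmt, ap);
        va_end(ap);
        return buf;
    }

    // Usage: char *s = mp_tprintf(80, "%dx%d", w, h);
    // s stays valid until the end of the enclosing block.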
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This is reminiscent of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
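
Not mpv's actual code, but in libavcodec terms the basic concept looks
roughly like this (the gpu_buf type and helpers are hypothetical
placeholders for the VO-side allocator):

    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>

    struct gpu_buf { uint8_t *ptr; };           // hypothetical VO-side type
    struct gpu_buf *gpu_buf_alloc(int size);    // hypothetical
    void gpu_buf_free(struct gpu_buf *gb);      // hypothetical

    static void free_gpu_buf(void *opaque, uint8_t *data)
    {
        gpu_buf_free(opaque);
    }

    static int dr_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        int size = av_image_get_buffer_size(frame->format, frame->width,
                                            frame->height, 64);
        if (size < 0)
            return size;
        struct gpu_buf *gb = gpu_buf_alloc(size);
        if (!gb) // fall back to ordinary allocation
            return avcodec_default_get_buffer2(avctx, frame, flags);
        frame->buf[0] = av_buffer_create(gb->ptr, size, free_gpu_buf, gb, 0);
        if (!frame->buf[0])
            return AVERROR(ENOMEM);
        int ret = av_image_fill_arrays(frame->data, frame->linesize, gb->ptr,
                                       frame->format, frame->width,
                                       frame->height, 64);
        return ret < 0 ? ret : 0;
    }

    // Installed via avctx->get_buffer2 = dr_get_buffer2; the decoder
    // then writes its output directly into the GPU-visible buffer.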
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
In addition to using the new VAO mechanism introduced in the previous
commit, this tries to keep the OSD code self-contained. This doesn't
work all too well (because of the pass and CMS stuff), but it's still
better than before.
This removes VAO handling from video.c. Instead the shader cache will
create the VAO as needed. The consequence is that this creates a VAO
per shader, which might be a bit wasteful, but doesn't matter anyway.
Reduce this to 1 draw call per OSD pass. This removes the need for some
annoying special handling regarding 3D video support (we supported
duplicating the OSD/subtitles for side-by-side 3D output etc.).
Remove the unneeded texture sampler uniform thing.
These are apparently expensive on some drivers which are not smart
enough to turn x/42 into x * (1.0/42). So, do it for them.
My great test framework says it's okay.
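
The rewrite, for illustration:

    // A division per shader invocation vs. a constant reciprocal the
    // compiler can fold into a single multiply:
    float slow = x / 42.0;
    float fast = x * (1.0 / 42.0);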
Performance seems pretty much unchanged but I no longer get nasty spikes
on NUMA systems, probably because glBufferSubData runs in the driver or
something.
As a simplification of the code, we also just size the PBO to always
have the full size, even for cropped textures. This seems slower but not
by relevant amounts, and only affects e.g. --vf=crop. It also slightly
increases VRAM usage for textures with big strides.
This new code path is especially nice because it no longer depends on
GL_ARB_map_buffer_range, and no longer uses any functions that can
possibly fail, thus simplifying control flow and seemingly deprecating
the manpage's claim about possible image corruption.
In theory we could also reduce NUM_PBO_BUFFERS since it doesn't seem
like we're streaming uploads anyway, but leave it in there just in
case some drivers disagree...
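
A minimal sketch of the resulting upload path (GL loader header, PBO
and texture setup omitted; the RGBA format is an assumption):

    static void pbo_upload(GLuint pbo, GLuint tex, const void *pixels,
                           size_t size, int w, int h)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        // copy into the PBO; no mapping, no failure path
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, size, pixels);
        glBindTexture(GL_TEXTURE_2D, tex);
        // with a PBO bound, the last argument is an offset into the
        // buffer, not a client memory pointer
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA,
                        GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }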
STREAM is better than DYNAMIC because we're only using it once per
frame. As for COPY vs DRAW, that was pretty much incorrect to begin with
- but surprisingly, COPY is actually faster (sometimes significantly so,
e.g. on my NUMA system).
After testing, the best I can gather is that it has to do with the fact
that COPY requires fewer redundant memcpy()s, and also reduces RAM
bandwidth roughly 3x (in theory).
Anyway, that bit shouldn't introduce any regressions, it's just a
documentation update. Maybe I'll change my mind about the comment again
in the future, it's really hard to tell. Vulkan, please save us!
Instead of allocating three PBOs and cycling through them, we allocate
one PBO that's three times as large, and cycle through the subregion
offsets.
This results in arguably simpler code and faster initialization
performance. Especially for 4K textures, initializing PBOs can take
quite some time (e.g. 180ms -> 110ms). For 1080p, it's more like 66ms ->
52ms for me.
The alignment to 4096 is completely unnecessary by spec, but we do it
anyway just for peace of mind.
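
Roughly (names and constants are illustrative):

    #define NUM_PBO_BUFFERS 3

    // one slice per in-flight upload, padded to 4096
    static size_t pbo_slice(size_t frame_size)
    {
        return (frame_size + 4095) & ~(size_t)4095;
    }

    static void pbo_init(GLuint pbo, size_t frame_size)
    {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        // one allocation, three times the (aligned) frame size
        glBufferData(GL_PIXEL_UNPACK_BUFFER,
                     NUM_PBO_BUFFERS * pbo_slice(frame_size),
                     NULL, GL_STREAM_COPY);
    }

    // per frame: cycle through the subregion offsets
    static size_t pbo_next_offset(int *index, size_t frame_size)
    {
        *index = (*index + 1) % NUM_PBO_BUFFERS;
        return *index * pbo_slice(frame_size);
    }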
Calling this from gl_video_resize is unnecessary, because the hooks only
(possibly) change when the actual vo_opengl options change. This used to
be required back when mpv still had prescaling built in, but since that
was all moved to user shaders and the code removed, this is a left-over
artifact.
The renderer code doesn't list a fixed set of supported formats, but
supports anything that is described by AVPixFmtDescriptor and follows a
number of constraints.
Plane order is not included in those constraints. This means the planes
could be in random order, rather than what the vo_opengl renderer
happens to assume. For example, it assumes that the 4th plane is alpha,
even though alpha could be on any plane. Likewise, it assumes that plane
0 is always luma, and planes 2/3 chroma. (In earlier iterations of
format support, this was guaranteed by MP_IMGFLAG_YUV_P, but this is not
used anymore.)
Explicitly derive the plane semantics (enum plane_type) from the
component descriptors and the colorspace in use. The behavior should
mostly not
change, but it's less likely to break when FFmpeg adds new pixel
formats.
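
A rough sketch of the classification (the enum and helper are
illustrative, not mpv's actual definitions):

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    enum plane_type { PLANE_NONE, PLANE_RGB, PLANE_LUMA, PLANE_CHROMA,
                      PLANE_ALPHA };

    static enum plane_type classify_plane(const AVPixFmtDescriptor *d,
                                          int plane, bool is_yuv)
    {
        for (int c = 0; c < d->nb_components; c++) {
            if (d->comp[c].plane != plane)
                continue;
            // alpha is the last component iff the format has one
            if ((d->flags & AV_PIX_FMT_FLAG_ALPHA) &&
                c == d->nb_components - 1)
                return PLANE_ALPHA;
            if (!is_yuv)
                return PLANE_RGB;
            return c == 0 ? PLANE_LUMA : PLANE_CHROMA;
        }
        return PLANE_NONE;
    }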
With some newer ANGLE builds, mapping can fail with "Failed to create
EGL surface" during playback. The reason is unknown, and it might just
be an ANGLE bug. Probe whether it works at init time to avoid the
problem.
In commit 6eb0bbe this was changed from xs[n] to use gl_format.chroma_w
indiscriminately, which broke chroma rendering when zooming/cropping.
The solution is to only use chroma_w for chroma planes.
Fixes #4592.
On optional hook points, we store to a temp FBO and then read from it
again to complete any operations that may still be left (e.g.
sigmoidization after MAIN/LINEAR).
In theory this mechanism should be reworked to avoid the temporary FBO
until the next time we actually need one - and also skip redundant
passes if the next thing we need *is* an FBO - but both of those are
tricky. Anyway, in the meantime, at least we can label the
(semi-)redundant passes that get generated when using user shaders.
This just indicates a fixed linear coefficient to multiply into the
signal, similar to the old option --target-brightness (but the inverse
thereof). Good for testing purposes, which is why I added it. (This
also corresponds somewhat to what zimg does.)
It's now possible to request non-dumb mode as a user, even when not
using any non-dumb features. This change is mostly intended for testing,
so I can easily switch between dumb and non-dumb mode on default
settings. The default behavior is unaffected.
This backend is selected if vaapi is available, but vaapi-over-EGL is
not. This causes various issues around the forced RGB conversion, which
is done with fixed, usually incorrect parameters.
It seems the existing auto probing check is too weak, and doesn't really
prevent it from getting loaded. Fix this by adding a flag to never
load it during auto probing.
I'm still not deleting it, because it's useful for testing on nvidia
machines.
See #4555.
The current algorithm blew up when the color was negative, such as the
case when downscaling with dscale=mitchell or other algorithms that
introduce negative ringing. The simplest solution is to just slightly
change the calculation to force both parameters to be in-range.
This is exposed so that bjin/mpv-prescalers can use textureGatherOffset
for performance.
Since there are now quite a lot of parameters where it isn't quite clear
why they're all defined, add a paragraph to the man page that explains
them a bit.
This helps prevent unnaturally, weirdly colorized blown-out highlights
for direct images of the sunlit sky and other way-too-bright HDR
content. I was debating whether to set the default at 1.0 or 2.0, but
went with the more conservative option that preserves more detail/color.
This logic doesn't really make sense. copy_img_tex already binds the
texture, so why would we bind it a second time? Furthermore, nothing
actually uses this return value. Must have been some left-over artifact
of a previous iteration of this function. Anyway, it's harmless, just
nonsensical. So remove it.
This is more efficient on my machine (nvidia), but only when applied to
groups of exactly 4 texels. So we switch to the more efficient
textureGather for groups of 4. Some notes:
- textureGatherOffset seems to be faster than textureGather by a
non-negligible amount, but for some reason, textureOffset is still
slower than a straight-up texture
- textureGather* requires GLSL 400; and at least on nvidia, this
requires actually allocating a GL 4.0 context.
- the code in opengl/common.c that clamped the GLSL version to 330 is
deprecated, because the old user shader style has been removed
completely in the meantime
- To combat the growing complexity of the polar sampling code, we drop
the antiringing functionality from EWA shaders completely, since it
never really worked well for EWA to begin with. (Horrific artifacting)
- change asserts to silent exits
- check all pointers before use
- move the p->pass initialization code to the right place
This should hopefully cut down on the amount of crashing by making the
code fundamentally more robust, while also fixing a concrete issue where
opengl-cb failed to initialize p->pass.
This allows filter functions to be prematurely cut off once their
contributions start becoming insignificant. This effectively prevents
wasted GPU time sampling from parts of the function that are essentially
reduced to zero by the window function, providing anywhere from a 10% to
20% speedup. (5700μs -> 4700μs for me)
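
Schematically (not the actual implementation), the effective radius can
be found by scanning the windowed kernel from the outside in:

    #include <math.h>

    // shrink the sampling radius to the innermost point beyond which
    // the kernel's weight stays below `cutoff`
    static double cutoff_radius(double (*kernel)(double), double radius,
                                double cutoff)
    {
        double r = radius;
        for (double x = radius; x > 0; x -= radius / 256.0) {
            if (fabs(kernel(x)) > cutoff)
                break; // contributions become significant here
            r = x;
        }
        return r;
    }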
2f41c4e8 exposed some other edge cases as well. Globally resetting the
pass info was not the right way to go about it, because we don't know in
advance what the frame type is going to be - at least not with the
current code structure. (In principle, we could separately indicate the
frame type and the pass type and then only reset it on the first
actual pass_describe call, but that's annoying as well)
Also fixes a latent issue where p->pass was never initialized, which
broke the MP_DBG debugging code in some cases.
Since all existing code does gl_video_upload immediately followed by
pass_render_frame, we can just move the upload into pass_render_frame
itself, which arguably makes more sense anyway.
This replaces `vo-performance` by `vo-passes`, bringing with it a number
of changes and improvements:
1. mpv users can now introspect the vo_opengl passes, which is something
that has been requested multiple times.
2. performance data is now measured per-pass, which helps both
development and debugging.
3. since adding more passes is cheap, we can now report information for
more passes (e.g. the blit pass, and the osd pass). Note: we also
switch to nanosecond scale, to be able to measure these passes
better.
4. `--user-shaders` authors can now describe their own passes, helping
users both identify which user shaders are active at any given time
as well as helping shader authors identify performance issues.
5. the timing data per pass is now exported as a full list of samples,
so projects like Argon-/mpv-stats can immediately read out all of the
samples and render a graph without having to manually poll this
option constantly.
Due to gl_timer's design being complicated (directly reading performance
data would block, so we delay the actual read-back until the next _start
command), it's vital not to conflate different passes that might be
doing different things from one frame to another. To accomplish this,
the actual timers are stored as part of the gl_shader_cache's sc_entry,
which makes them unique for that exact shader.
Starting and stopping the time measurement is easy to unify with the
gl_sc architecture, because the existing API already relies on a
"generate, render, reset" flow, so we can just put timer_start and
timer_stop in sc_generate and sc_reset, respectively.
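
Schematically, the pairing looks like this (the struct and helpers are
illustrative; the GL calls are real, assuming a GL 3.3+ loader):

    struct gl_timer {
        GLuint query;     // from glGenQueries()
        bool pending;     // a result is waiting to be read back
        GLuint64 last_ns; // last measured duration
    };

    static void timer_start(struct gl_timer *t)
    {
        // read back the *previous* result; by the next frame it is
        // available, so this does not block
        if (t->pending) {
            glGetQueryObjectui64v(t->query, GL_QUERY_RESULT, &t->last_ns);
            t->pending = false;
        }
        glBeginQuery(GL_TIME_ELAPSED, t->query);
    }

    static void timer_stop(struct gl_timer *t)
    {
        glEndQuery(GL_TIME_ELAPSED);
        t->pending = true;
    }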
The ugliest thing about this code is that due to the need to keep pass
information relatively stable in between frames, we need to distinguish
between "new" and "redrawn" frames, which bloats the code somewhat and
also feels hacky and vo_opengl-specific. (But then again, this entire
thing is vo_opengl-specific)
For some braindead reason, Microsoft decided to prevent you from
dynamically loading system libraries. This makes portability harder.
And we're talking about portability between Microsoft OSes!
This partially reverts the change from a longer time ago to always build
DXVA2 and D3D11VA together.
To make it simpler, we change the following:
- building with ANGLE headers is now required to build D3D hwaccels
- if DXVA2 is enabled, D3D11VA is still forcibly built
- the CLI vo_opengl ANGLE backend is now under --egl-angle-win32
This is done to reduce the dependency mess slightly.
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
Instead of setting up a weird swizzle (which is linked to how the
internal renderer code works, rather than the generic format code), add
per-component mapping to gl_imgfmt_desc.
The renderer still computes the weird swizzle, but at least it's
confined to itself. Also, it appears the hwdec backends don't need this
anymore.
It's really nice that the messy init_format() goes away too.
The changes to path list options are basically about getting rid of the
need to pass multiple paths to a single option. Instead, you can use the
option multiple times. The old behavior is still available via the -set
suffix on the option.
Change some options to path lists. For example --script is now append by
default, and if you use --script-set, you need to use ":"/";" as
separator instead of ",".
--sub-paths/--audio-file-paths is a deprecated alias now, and will break
if the user tries to pass multiple paths to it. I'm assuming that if
these are used, most users will pass only 1 path anyway.
--opengl-shaders has more compatibility handling, since it's probably
rather common that users pass multiple options to it.
Also document all that in the manpage.
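
For example (hypothetical file and script names):

    mpv --script=a.lua --script=b.lua file.mkv   # appends: both load
    mpv --script-set=a.lua:b.lua file.mkv        # replaces, ":"-separated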
I'll probably regret this later, as it somewhat increases the complexity
of the option parser, rather than reducing it.
Add something that allows us to extract the component order from various
RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.
It introduces a new struct mp_regular_imgfmt, that hopefully will
eventually replace struct mp_imgfmt_desc. The latter is still needed by
a lot of code though, especially generic code. Also vo_opengl still uses
the old one, so this commit is sort of incomplete.
Due to its genericness, it's also possible that this commit introduces
rendering bugs, or accepts formats it shouldn't accept.
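
The rough shape of the new struct (an illustrative approximation, not
the exact definition):

    struct mp_regular_imgfmt_plane {
        uint8_t num_components;
        // 1=red/luma, 2=green/Cb, 3=blue/Cr, 4=alpha, 0=padding
        uint8_t components[4];
    };

    struct mp_regular_imgfmt {
        uint8_t component_size; // in bytes
        int8_t component_pad;   // padding bits (LSB if >0, MSB if <0)
        uint8_t num_planes;
        struct mp_regular_imgfmt_plane planes[4];
        uint8_t chroma_w, chroma_h; // chroma subsampling factors
    };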
Commit 3fb6380 was supposed to increase MAX_TEXTURE_HOOKS but instead
increased SHADER_MAX_HOOKS, since I forgot that they were separate (for
whatever reason).
To prevent this mistake from happening again, and to unify the location
in which user_shader-specific #defines are placed, get rid of the two
constants in opengl/video.c and move/reuse them from user_shaders.h
instead.
Also bump up MAX_SAVED_TEXTURES (now SHADER_MAX_SAVED) slightly as a
precaution against adding more passes to vo_opengl. I think we're
already flirting with the limit.
This is even better at preventing discoloration than tone mapping on the
XYZ image. Partly inspired by the HLG OOTF. Also simplifies the way we
tone map, and moves this logic to the pass_tone_map function where it
belongs.
This also fixes what could arguably be considered a bug in the HLG
implementation when using HLG for non-BT.2020 colorspaces, which is not
permitted by spec but thinkable in theory. Although in this case, I
guess it will be arbitrary whether people use the BT.2020-normalized
luma coefficients or change it to fit the colorspace, so I guess either
way could be considered "right", depending on what people end up doing.
Either way, in lieu of standard practice, we do what makes the most
sense (to me), and hopefully others will follow.
The downside is that we upload an extra vec3 uniform even if we don't
use it, but eliminating that would be ugly.
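
The gist of the approach, as a hedged sketch (the coefficients and the
curve are parameters here, not mpv's exact code):

    // tone map the luma once and scale all channels by the same
    // factor; scaling R, G and B uniformly preserves the hue
    static void tone_map_pixel(float rgb[3], const float coeff[3],
                               float (*curve)(float))
    {
        float luma = coeff[0] * rgb[0] + coeff[1] * rgb[1] +
                     coeff[2] * rgb[2];
        if (luma <= 0)
            return;
        float ratio = curve(luma) / luma;
        for (int i = 0; i < 3; i++)
            rgb[i] *= ratio;
    }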