The new uniforms introduced by 362015c have exceeded the uniform limit
when using high-radius tscale. In addition, the SC limit of 32 entries
might be pushing it with user shaders.
Just make these values a bit bigger to delay the onset of this same failure
mode. Maybe in the future it should be reworked to grow dynamically?
Either way, we *can* always predict a static upper bound on the number
of uniforms and shader cache entries, it's just that we forgot to do so.
Fixes #3151
This makes it so that users with actual HDR displays can just set their
config to target-trc=st2084 and get native HDR output. This will look a
bit silly for SDR content (everything will be really bright), but for
lack of a better tone mapping situation (including reverse tone mapping)
this is the easiest thing to do for now.
Ideally the brightness metadata should be part of the colorspace struct
or something (with mpv always adapting where necessary), but it depends
on the TRC and not the primaries so it's a bit more complicated than
that.
Since dumb mode is affected by tone mapping (which I'll call a feature,
not a bug), we need to copy over the configuration - in particular, the
defaults. (To prevent a render failure)
Since HDR content is now auto-detected as such, we should probably do
something smarter in the "no configuration" case, such as outputting
gamma 2.2 instead.
This decision will affect the majority of users of stock configurations
who just play back appropriately tagged HDR files, so having a good
default behavior is important. "Output the HDR content as-is" is
definitely not likely to give the user a good result.
This now lets us auto-detect appropriately tagged HDR content using
FFmpeg's new TRC entries (when available).
Hidden behind an #if because Libav stable doesn't have it yet.
Make it dynamic and never remove entries from it.
For now, this is better than possibly creating dangling pointers all
over the place in the gl_user_shader struct.
Untested.
This is now a configurable option, with tunable parameters.
I got inspiration for these algorithms from Wikipedia. "simple" seems to
work pretty well, but not well enough to make it a reasonable default.
Some other notable candidates:
- Local functions (e.g. based on local contrast or gradient)
- Clamp with soft knee (linear up to a point)
- Mapping in CIE L*Ch. Map L smoothly, clamp C and h.
- Color appearance models
These will have to be implemented some other time.
Note that the parameter "peak_src" to pass_tone_map should, in
principle, be auto-detected from the SEI information of the source file
where available. This will also have to be implemented in a later
commit.
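As a rough illustration only (not mpv's actual shader code, and with hypothetical names), the "simple" Reinhard-style curve and a clamp with soft knee could look like this in C, with sig in linear light where 1.0 is reference white and peak_src is the source peak:

    #include <math.h>

    // "simple" (Reinhard-style): monotonic, maps peak_src exactly to 1.0.
    static float tone_map_simple(float sig, float peak_src)
    {
        return sig * (1.0f + sig / (peak_src * peak_src)) / (1.0f + sig);
    }

    // Clamp with soft knee: linear up to the knee, then roll off so that
    // peak_src still ends up at 1.0.
    static float tone_map_soft_knee(float sig, float peak_src, float knee)
    {
        if (sig <= knee)
            return sig;
        float t = (sig - knee) / (peak_src - knee);
        return knee + (1.0f - knee) * tanhf(t) / tanhf(1.0f);
    }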
Due to the way color management in mpv worked historically, the subtitle
blending function was written to preserve the linearity of the input.
(In the past, the 3DLUT function required linear inputs)
Since the 3DLUT was refactored to accept the video color directly, the
re-linearization after blending is now virtually always redundant.
(Notably, it's also redundant when CMS is turned off, so this way of
writing the code stopped making sense a long time ago. It is a remnant
from before the pass_colormanage function was as flexible as it is now)
Currently, this relies on the user manually entering their display
brightness (since we have no way to detect this at runtime or from ICC
metadata). The default value of 250 was picked by looking at ~10 reviews
on tftcentral.co.uk and realizing they all come with around 250 cd/m^2
out of the box. (In addition, ITU-R Rec. BT.2022 supports this)
Since there is no metadata in FFmpeg to indicate usage of this TRC, the
only way to actually play HDR content currently is to set
``--vf=format=gamma=st2084``. (It could be guessed based on SEI, but
this is not implemented yet)
Incidentally, since SEI is ignored, it's currently assumed that all
content is scaled to 10,000 cd/m^2 (and hard-clipped where out of
range). I don't see this assumption changing much, though.
As an unfortunate consequence of the fact that we don't know the display
brightness, mixed with the fact that LittleCMS' parametric tone curves
are not flexible enough to support PQ, we have to build the 3DLUT
against gamma 2.2 if it's used. This might be a good thing, though,
considering the PQ source space is probably not fantastic for
interpolation either way.
Partially addresses #2572.
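For reference, the ST 2084 (PQ) EOTF itself is defined in absolute terms, which is why the 10,000 cd/m^2 assumption above is natural. A minimal C sketch using the standard constants (not mpv code):

    #include <math.h>

    // Maps a nonlinear PQ signal in [0,1] to absolute luminance in cd/m^2.
    static double pq_eotf(double e)
    {
        const double m1 = 2610.0 / 16384, m2 = 2523.0 / 4096 * 128;
        const double c1 = 3424.0 / 4096, c2 = 2413.0 / 4096 * 32;
        const double c3 = 2392.0 / 4096 * 32;
        double p = pow(e, 1.0 / m2);
        return pow(fmax(p - c1, 0) / (c2 - c3 * p), 1.0 / m1) * 10000.0;
    }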
This is much more readable than hard-coding magic IDs all over the file,
and removes the need for all the explanatory comments that were a direct
result of this.
This macro takes care of rotation, swizzling, integer conversion and
normalization automatically. I found the performance impact to be
nonexistent for superxbr and debanding, although rotation *did* have an
impact due to the extra matrix multiplication. (So it gets skipped where
possible)
All of the internal hooks have been rewritten to use this new mechanism,
and the prescaler hooks have finally been separated from each other.
This also means the prescale FBO kludge is no longer required.
This fixes image corruption for image formats like 0bgr, and also fixes
prescaling under rotation. (As well as other user hooks that have
orientation-dependent access)
The "raw" attributes (tex, tex_pos, pixel_size) are still un-rotated, in
case something needs them, but ideally the hooks should be rewritten to
use the new API as much as possible. The hooked texture has been renamed
from just NAME to NAME_raw to make script authors notice the change (and
also deemphasize direct texture access).
This is also a step towards getting rid of the use_integer pass.
This replaces the previous TRANSFORM with WIDTH, HEIGHT and OFFSET, where
WIDTH and HEIGHT are RPN expressions. This allows for more fine-grained
control over the output size, and also makes sure that overwriting
existing textures works more cleanly.
(Also add some more useful bstr functions)
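To give an idea of what evaluating such a size expression involves, here is a hedged C sketch of an RPN evaluator for something like "HOOKED.w 2 *"; the token names and limits are illustrative, not the actual parser:

    #include <stdlib.h>
    #include <string.h>

    static double eval_rpn(const char **tokens, int n, double hooked_w)
    {
        double stack[16];
        int sp = 0;
        for (int i = 0; i < n; i++) {
            const char *t = tokens[i];
            if (!strcmp(t, "HOOKED.w")) {
                stack[sp++] = hooked_w;        // push the hooked texture's width
            } else if (!strcmp(t, "*")) {
                double b = stack[--sp], a = stack[--sp];
                stack[sp++] = a * b;           // binary operator: pop two, push one
            } else {
                stack[sp++] = strtod(t, NULL); // numeric literal
            }
        }
        return sp ? stack[sp - 1] : 0;
    }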
This allows users to add their own near-arbitrary hooks to the vo_opengl
processing pipeline, greatly enhancing the flexibility of user shaders.
This enables, among other things, user shaders such as CrossBilateral,
SuperRes, LumaSharpen and many more.
To make parsing the user shaders easier, shaders are now loaded as
bstrs, and the hooks are set up during video reconfig instead of on
every single frame.
These are "sequence points" where the image could be rendered out to an
FBO, hooked, and re-loaded if any such hook exists. This is perfect for
things like the current user shaders system, as well as optional effects
like unsharp masking.
Note that since we have to pick *some* FBO to store the optionally
hooked texture, we just store it in an array indexed by an increasing
counter. Since we only ever store at most MAX_TEXTURE_HOOKS plus the
number of internal hook point entries, this is guaranteed to be enough
space.
This commit also removes some of the now unused FBOs.
The hook mechanism allows arbitrary processing stages to get dispatched
whenever certain named textures have been "finalized" by the code.
This is mostly meant to serve as a change that opens up the internal
processing in pass_read_video to user scripts, but as a side benefit all
of the code dealing with offsets and plane alignment and other such
confusing things has been rewritten.
This hook mechanism is powerful enough to cover the needs of both
debanding and prescaling (and more), so as a result they can be removed
from pass_read_video entirely and implemented through hooks.
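As a rough sketch of the dispatch idea (names are illustrative, not the actual vo_opengl internals): each hook registers the texture name it is interested in, and "finalizing" a texture runs every matching hook in order.

    #include <string.h>

    struct tex_hook {
        const char *hook_tex;              // e.g. "LUMA", "MAIN", "OUTPUT"
        void (*hook)(void *priv);          // transforms the rendered-out FBO
        void *priv;
    };

    static void finish_pass(struct tex_hook *hooks, int num_hooks,
                            const char *name)
    {
        // The current pass is assumed to have been rendered out to an FBO
        // at this point; now give each registered hook a chance to run.
        for (int i = 0; i < num_hooks; i++) {
            if (strcmp(hooks[i].hook_tex, name) == 0)
                hooks[i].hook(hooks[i].priv);
        }
    }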
Some avenues for optimization:
- The prescale hook is currently somewhat distributed code-wise. It might be
cleaner to split it into superxbr and NNEDI3 hooks which can each be
self-contained.
- It might be possible to move a large part of the hook code out to an
external file (including the hook definitions for debanding and
prescaling), which would be very much desired.
- Currently, some stages (chroma merging, integer conversion) will
*always* run even if unnecessary. I'm planning another series of
refactors (deferred img_tex) to allow dropping unnecessary shader
stages like these, but that's probably some ways away. In the meantime
it would be doable to re-add some of the logic to skip these stages if
we know we don't need them.
- More hook locations could be added (?)
Instead of rounding down, we round to the nearest float. This reduces
the maximum possible error introduced by this rounding operation. Also
clarify the comment.
Remove non-texture_rg compatibility from LUT sampling. OpenGL without
texture_rg support will always trigger dumb-mode, and dumb-mode does not
use LUTs. That was not always the case, which is when this code made sense.
Fixes broken colors with --vf=format=0bgr (but only if deband is
disabled).
0bgr means the first byte is padding, while the following three bytes
are bgr. From the vo_opengl perspective, it has 4 physical components
with 3 logical components. copy_img_tex() simply copied 3 components
from the physical representation, which means the last component (r) was
sliced off.
Fix this by not using p->color_swizzle for packed formats, and instead
let packed formats set the per-plane swizzle in texplane.swizzle. The
latter applies the swizzle as part of the operation in copy_img_tex(), which
essentially moves physical to logical representations.
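Purely for illustration (this is not the mpv code), applying a per-plane swizzle string while copying is a small operation; the point is that the physical component order of a packed format becomes the logical one:

    static void apply_swizzle(const char swizzle[4], const float in[4],
                              float out[4])
    {
        for (int i = 0; i < 4; i++) {
            switch (swizzle[i]) {
            case 'r': out[i] = in[0]; break;
            case 'g': out[i] = in[1]; break;
            case 'b': out[i] = in[2]; break;
            case 'a': out[i] = in[3]; break;
            default:  out[i] = 1.0f;  break; // e.g. a padding component
            }
        }
    }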
Unfortunately, debanding (and thus with opengl-hq defaults) is still
broken.
Make the find_plane_format function take a bit count.
This also makes the function's comment true for the first time since the
function and its comment have existed. (It was documented as taking bits,
but always took bytes.)
Only a few very low bit depth internal formats can be rendered to in
pure ES2 (GL_RGB565 is the "best" one).
Seems like the only potentially reasonable renderable formats in ES2
could be provided via GL_OES_rgb8_rgba8, or half-floats, so don't
bother with this at all.
Now that we know in advance whether an implementation should support a
specific format, we have more flexibility when determining which format
to use.
In particular, we can drop the roundabout ES logic.
I'm not sure if actually trying to create the FBO for probing still has
any value. But it might, so leave it for now.
Even if everything else is available, the need for first class arrays
breaks it. In theory we could fix this since we don't strictly need
them, but I guess it's not worth bothering.
Also give the misnamed have_mix variable a slightly better name.
This merges all knowledge about texture format into a central table.
Most of the work done here is actually identifying which formats exactly
are supported by OpenGL(ES) under which circumstances, and keeping this
information in the format table in a somewhat declarative way. (Although
only to the extent needed by mpv.) In particular, ES and float formats
are a horrible mess.
Again this is a big refactor that might cause regressions on "obscure"
configurations.
This uses the normal autoprobing rules like "auto", but rejects anything
that isn't flagged as copying data back to system memory.
The chunk in command.c was dead code, so remove it instead of updating
it.
MSDN documents this as "Introduced in Windows 8.1.". I assume on Windows
7 this field will simply be ignored. Too bad for Windows 7 users.
Also, I'm not using D3D11_VIDEO_PROCESSOR_NOMINAL_RANGE_16_235 and
D3D11_VIDEO_PROCESSOR_NOMINAL_RANGE_0_255, because these are apparently
completely missing from the MinGW headers. (Such a damn pain.)
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
ANGLE is _really_ annoying to build. (Requires special toolchain and a
recent MSVC version.) This results in various issues with people
having trouble building mpv against ANGLE (apparently linking it
against a prebuilt binary doesn't count, or using binaries from
potentially untrusted sources is not wanted).
Dynamically loading ANGLE is going to be a huge convenience. This commit
implements this, with special focus on keeping it source compatible to
a normal build with ANGLE linked at build-time.
In theory this was needed for the previous commit (but wasn't in
practice, since for hwdec the LUMINANCE_ALPHA mangling is not applied
anymore, and ANGLE uses RG textures in absence of GL_ARB_texture_rg for
whatever crazy reasons).
In practice this caused funky colors on OSX with the uyvy422 format,
which is also fixed in this commit.
This uses EGL_ANGLE_stream_producer_d3d_texture_nv12 and related
extensions to map the D3D textures coming from the hardware decoder
directly in GL.
In theory this would be trivial to achieve, but unfortunately ANGLE does
not have a mechanism to "import" D3D textures as GL textures. Instead,
an awkward mechanism via EGL_KHR_stream was implemented, which involves
at least 5 extensions and a lot of glue code. (Even worse than VAAPI EGL
interop, and very far from the simplicity you get on OSX.)
The ANGLE mechanism so far supports only the NV12 texture format, which
means 10 bit won't work. It also does not work in ES3 mode yet. For
these reasons, the "old" ID3D11VideoProcessor code is kept and used as a
fallback.
It forces es2 mode on ANGLE. Only useful for testing. Since the normal
"angle" backend already falls back to es2 if es3 does not work, this new
backend always exits when autoprobed.
Rename gl_hwdec_driver.map_image to map_frame, and let it fill out a
struct gl_hwdec_frame describing the exact texture layout. This gives
more flexibility to what the hwdec interop can export. In particular, it
can export strange component orders/permutations and textures with
padded size. (The latter originating from cropped video.)
The way gl_hwdec_frame works is in the spirit of the rest of the
vo_opengl video processing code, which tends to put as much information
in immediate state (as part of the dataflow), instead of declaring it
globally. To some degree this duplicates the texplane and img_tex
structs, but until we somehow unify those, it's better to give the hwdec
state its own struct. The fact that changing the hwdec struct would
require changes and testing on at least 4 platform/GPU combinations
makes duplicating it almost a requirement to avoid pain later.
Make gl_hwdec_driver.reinit set the new image format and remove the
gl_hwdec.converted_imgfmt field.
Likewise, gl_hwdec.gl_texture_target is replaced with
gl_hwdec_plane.gl_target.
Split out a init_image_desc function from init_format. The latter is not
called in the hwdec case at all anymore. Setting up most of struct
texplane is also completely separate in the hwdec and normal cases.
video.c does not check whether the hwdec "mapped" image format is
supported. This should not really happen anyway, and if it does, the
hwdec interop backend must fail at creation time, so this is not an
issue.
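The rough shape of the per-frame description is something like the following sketch; the field names are approximations of the idea, not the exact mpv API:

    struct gl_hwdec_plane_sketch {
        unsigned int gl_texture;   // GL texture name
        unsigned int gl_target;    // e.g. GL_TEXTURE_2D or GL_TEXTURE_RECTANGLE
        int tex_w, tex_h;          // padded physical texture size
    };

    struct gl_hwdec_frame_sketch {
        struct gl_hwdec_plane_sketch planes[4];
        int num_planes;
        // ...plus whatever per-API quirks (component permutations etc.)
    };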
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
When we receive the wl_shell_surface::configure event, it makes sense
to respect the aspect ratio of the video in windowed mode, but in
fullscreen it forces compositing and wastes resources (until atomic
modesetting is available everywhere and we can stop having
desynchronised planes).
Weston mitigates a resolution mismatch by creating black surfaces and
compositing them around the fullscreen surface, placed at the middle,
while GNOME puts it at the top-left and leaves the rest of the desktop
composited below, both of them producing a subpar experience.
Fixes #3021, #2657.
Add --taskbar-progress command line option and property which controls taskbar
progress indication rendering in Windows 7+. This option is on by default and
can be toggled during playback.
This option does not affect the creation process of ITaskbarList3. When the
option is turned off the progress bar is just hidden with TBPF_NOPROGRESS.
Closes #2535
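A hedged sketch of the mechanism described above (a hypothetical helper, not the actual w32 code): the ITaskbarList3 instance stays alive either way; toggling the option only changes the reported state.

    #define COBJMACROS
    #include <windows.h>
    #include <shobjidl.h>
    #include <stdbool.h>

    static void update_taskbar_progress(ITaskbarList3 *taskbar, HWND hwnd,
                                        bool enabled, int percent)
    {
        if (!enabled) {
            // Option off: hide the progress bar, but keep the interface.
            ITaskbarList3_SetProgressState(taskbar, hwnd, TBPF_NOPROGRESS);
            return;
        }
        ITaskbarList3_SetProgressState(taskbar, hwnd, TBPF_NORMAL);
        ITaskbarList3_SetProgressValue(taskbar, hwnd, percent, 100);
    }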
The X11 error handler is global, and not per-display. If another Xlib
user exists in the process, they can conflict. In theory, it might
happen that e.g. another library sets an error handler (overwriting the
mpv one), and some time after mpv closes its display, restores the error
handler to mpv's one. To mitigate this, check if the error log instance
is actually set, instead of possibly crashing.
The change in vo_x11_uninit() is mostly cosmetic.
In order to honor the differences between OpenGL and Direct3D coordinate
systems, ANGLE uses a full FBO copy merely to flip the final frame
vertically. This can be avoided with the EGL_ANGLE_surface_orientation
extension.
I hope that this does what we expect it does: destroy the EGLDisplay
specific to our HDC. (Some implementations will terminate all EGL
contexts in the whole process.)
eglReleaseThread() merely calls eglMakeCurrent(0, 0, 0, 0), which is
not enough.
This commit also fixes the problem fixed with the previous commit,
but I think both changes are needed to make our API usage clean.
If ANGLE was probed before (but rejected), the ANGLE API can remain
"initialized", and eglGetCurrentDisplay() will return a non-NULL
EGLDisplay. Then if a native GL context is used, the ANGLE/EGL API will
then (apparently) keep working alongside native OpenGL API. Since GL
objects are just numbers, they'll simply fail to interact, and OpenGL
will get invalid textures. For some reason this will result in black
textures.
With VAAPI-EGL, something similar could happen in theory, but didn't in
practice.
Introduce hwdec-current and hwdec-interop properties.
Deprecate hwdec-detected, which never made a lot of sense, and which is
replaced by the new properties. hwdec-active also becomes useless, as
hwdec-current is a superset, so it's deprecated too (for now).
Cache misses are a normal and expected part of the operation of a cache.
It doesn't really make sense to show a user-visible warning for them.
To work around this, just skip trying to open the cache if it doesn't
exist yet.
First of all, black point compensation is now on by default. This is
really rather harmless and only improves the result (where "improvement"
means "less black clipping").
Second, this adds an option to limit the ICC profile's contrast, which
helps for untagged matrix profiles that are implicitly black scaled even
in colorimetric intent. (Note that this relies on BPC being enabled to
work properly, which is why the two changes are tied together)
Third, this uses the LittleCMS built-in black point estimator instead of
relying on the presence of accurate A2B tables. This also checks tags
and does some amounts of noise elimination.
If the option is unspecified and the profile is missing black point
information, print a warning instructing the user to set the option, and
fall back to 1000 otherwise.
The vdpau_mixer could fail to be recreated properly if preemption
occurred at some point before playback initialization (like when using
--hwdec-preload and the opengl-cb API).
Normally, the vdpau_mixer was supposed to be marked invalid when the
components using it detect a preemption, e.g. in hwdec_vdpau.c. This one
didn't mark the vdpau_mixer as invalid if preemption was detected in
reinit(), only in map_image().
It's cleaner to detect preemption directly in the vdpau_mixer, which
ensures it's always recreated correctly.
The "fs-only" choice sets the _NET_WM_BYPASS_COMPOSITOR to 1 if the
window is fullscreened, and 0 otherwise. (0 is specified to be the
implicit default - i.e. no change is requested in windowed mode.)
In particular, change the default to "fs-only".
Fixes #2582.
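In terms of raw Xlib calls, "fs-only" boils down to something like this hypothetical helper (a sketch, not the literal x11_common code):

    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static void set_bypass_compositor(Display *dpy, Window w, long value)
    {
        // value: 1 = bypass compositor, 0 = no preference (implicit default)
        Atom atom = XInternAtom(dpy, "_NET_WM_BYPASS_COMPOSITOR", False);
        XChangeProperty(dpy, w, atom, XA_CARDINAL, 32, PropModeReplace,
                        (unsigned char *)&value, 1);
    }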
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes #3097
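The resulting pattern at the top of an affected source file looks roughly like this (the d3d9.h include is just an example of a header whose GUIDs get referenced):

    #include <initguid.h>  // must come first: makes DEFINE_GUID emit definitions
    #include <d3d9.h>      // any header whose GUIDs this file references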
Subtraction of 1 is not necessary, because the .right and .bottom values of a
RECT struct describe a point one pixel outside of the rectangle.
Fixes #2935. (2.b)
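The same point in code form, using a hypothetical helper: since .right and .bottom are exclusive, width and height are plain differences.

    #include <windows.h>

    static void client_size(HWND hwnd, int *w, int *h)
    {
        RECT r;
        GetClientRect(hwnd, &r);
        *w = r.right - r.left;   // not r.right - r.left - 1
        *h = r.bottom - r.top;
    }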
In particular, this moves the depth test to common code.
Should be functionally equivalent, except that for DXVA2, the
IDirectXVideoDecoderService_GetDecoderRenderTargets API is potentially
called more often.
This gives us 16 bit fixed-point integer texture formats, including
ability to sample from them with linear filtering, and using them as FBO
attachments.
The integer texture format path is still there for the sake of ANGLE,
which does not support GL_EXT_texture_norm16 yet.
The change to pass_dither() is needed, because the code path using
GL_R16 for the dither texture relies on glTexImage2D being able to
convert from GL_FLOAT to GL_R16. GLES does not allow this. This could be
trivially fixed by doing the conversion ourselves, but I'm too lazy to
do this now.
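If that conversion ever had to be done manually, it would be roughly the following (a sketch under the assumption of a [0,1] float input, not code that exists in mpv):

    #include <stddef.h>
    #include <stdint.h>

    // Convert float samples in [0,1] to 16 bit, so the data could be
    // uploaded as GL_UNSIGNED_SHORT instead of GL_FLOAT.
    static void float_to_u16(const float *in, uint16_t *out, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            float v = in[i];
            v = v < 0 ? 0 : v > 1 ? 1 : v;
            out[i] = (uint16_t)(v * 65535.0f + 0.5f);
        }
    }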
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.
Note that libavcodec only defines a ID3D11VideoDecoderOutputView pointer
in the last plane pointers, but it tolerates/passes through the other
plane pointers we set.
This uses ID3D11VideoProcessor to convert the video to a RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control about the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
Sucks, but better than freezing forever given the (to me) unpredictable
RPI behavior. This will be good enough to drop out of vsync timing mode,
or to abort playback.
Recreate all dispmanx objects after mode changes signalled by the TV
callback. This is needed since dispmanx objects are marked as invalid
and cease working.
One important point is that the vsync callbacks will stop coming when
this happens, so restoring the callback is important.
Note that the MMAL renderer itself does not get trashed by the firmware
on such events, but we completely reconfigure it anyway when it happens.
For Mediacodec in particular we don't care about the format. It can just
decode to whatever it wants. The only case we would care about is it not
returning an opaque format if we don't have proper interop, but
libavcodec always returns non-opaque formats by default.
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.
With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.
The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
This is intended for cases when --hwdec needs to override the decoder
implementation in use, like for example on the RPI.
It does two things:
1. Allow the hwdec to indicate a decoder suffix. libavcodec by
convention adds a suffix to all wrapper decoders, and here we start
relying on it. While not necessarily the best idea, it's the only
thing we got. libavcodec's hwaccel list is useless, because it only
has the codec ID, not the associated decoder's name.
2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
probing should reliably work, even if a different decoder is selected
with --vd. The semantics of --hwdec should dictate that it overrides
the default decoder.
A minor simplification. Most callers don't need this, and there's no
good reason why the caller should provide an "initializer" like this.
(This function calls mp_image_new_dummy_ref(), which has no reason
for an initializer either.)
The active texture and some pixelstore parameters are now always reset
to defaults when entering and leaving the renderer. Could be important
for libmpv.
As of ffmpeg git master, only the libavdevice decklink wrapper supports
this. Everything else has dropped support.
You're now supposed to use AV_CODEC_ID_WRAPPED_AVFRAME, which works a
bit differently. Normal AVFrames should still work for these encoders.
Since what we're doing is a linear blend of the four colors, we can just
do it for free by using GPU sampling.
This requires significantly fewer texture fetches and calculations to
compute the final color, making it much more efficient. The code is also
much shorter and simpler.
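The trick, spelled out with a hypothetical helper: with bilinear filtering, a single fetch at the midpoint of a 2x2 block of texels weights each texel by 0.25, so an equal-weight blend of four colors costs nothing extra.

    // Texel centers sit at (x + 0.5) / tex_w, so the midpoint of the 2x2
    // block starting at (x, y) is at (x + 1.0) / tex_w, (y + 1.0) / tex_h.
    static void midpoint_uv(int x, int y, int tex_w, int tex_h,
                            float *u, float *v)
    {
        *u = (x + 1.0f) / tex_w;
        *v = (y + 1.0f) / tex_h;
    }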
Until now, we have made the assumption that a driver will use only 1
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass a
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there was some additional dst_rect clamping code inside
src_dst_split_scaling that seemed neither necessary nor ever triggered.
(The code immediately above it already makes sure to crop the video if
it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was making this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width/height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).
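For intuition, a 90° pre_transform on texture coordinates amounts to a 2x2 matrix plus an offset that keeps coordinates in [0,1]; this is only a sketch of the math, not the mpv structs:

    struct transform2d { float m[2][2], t[2]; };

    // (u, v) -> (v, 1 - u), i.e. a 90 degree rotation of texture coordinates.
    static struct transform2d rotate90(void)
    {
        return (struct transform2d){ .m = {{0, 1}, {-1, 0}}, .t = {0, 1} };
    }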
Typically happens with some implementations if no context is current,
or is otherwise broken. This is particularly relevant to the opengl_cb
API, because the API user will have no other indication what went wrong.
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be statically setup, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
For hwaccel formats, mp_image will merely point to a hardware surface
handle. In these cases, the mp_image_params.imgfmt field describes the
format insufficiently, because it mostly only describes the type of the
hardware format, not its underlying format.
Introduce hw_subfmt to describe the underlying format. It makes sense to
use it with most hwaccels, though for now it will be used with the
following commit only.
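Conceptually (a sketch of the idea, not the verbatim struct), the split looks like this:

    struct params_sketch {
        int imgfmt;     // e.g. "this is a VideoToolbox/VAAPI/... surface"
        int hw_subfmt;  // e.g. "and it actually contains NV12"
        int w, h;
    };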
Until now, the presence of the process_image() callback was used to set
a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
The sync-by-display mode relies on using the vsync statistics for
timing. As a consequence discontinuities must be handled somehow. Until
now we have done this by completely resetting these statistics.
This can be somewhat annoying, especially if the GL driver's vsync
timing is not ideal. So after e.g. a seek it could take a second until
it locked back to the proper values.
Change it not to reset all statistics. Some state obviously has to be
reset, because it's a discontinuity. To make it worse, the driver's
vsync behavior will also change on such discontinuities. To compensate,
we discard the timings for the first 2 vsyncs after each discontinuity
(via num_successive_vsyncs). This is probably not fully ideal, and
num_total_vsync_samples handling in particular becomes a bit
questionable.
The past behavior was a bit weird, especially when zooming out. There
was no simple way to zoom in or out in consistent increments using
keybindings alone.
The new behavior preserves most of the old behavior's semantics but
scales out to infinity better. It coincidentally also makes it
really easy to get clean power of 2 ratios (e.g. 2x, 4x, 8x and their
inverses).
Fixes #3004.
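The "scales out to infinity" behavior is what you get from an exponential mapping, illustrated here as a hypothetical helper: each step of 1.0 doubles or halves the size, which is where the clean power-of-2 ratios come from.

    #include <math.h>

    static double zoom_to_scale(double video_zoom)
    {
        return pow(2.0, video_zoom);   // 0 -> 1x, 1 -> 2x, -1 -> 0.5x, ...
    }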
This makes the black point closer (chromatically) to the white point, by
ensuring channels keep their consistent brightness ratios as they go
down to zero.
I also raised the 3DLUT version as this changes semantics and is a
separate commit from the previous one.
This commit refactors the 3DLUT loading mechanism to build the 3DLUT
against the original source characteristics of the file. This allows us,
among other things, to use a real BT.1886 profile for the source. This
also allows us to actually use perceptual mappings. Finally, this
reduces errors on standard gamut displays (where the previous 3DLUT
target of BT.2020 was unreasonably wide).
This also improves the overall accuracy of the 3DLUT due to eliminating
rounding errors where possible, and allows for more accurate use of
LUT-based ICC profiles.
The current code is somewhat more ugly than necessary, because the idea
was to implement this commit in a working state first, and then maybe
refactor the profile loading mechanism in a later commit.
Fixes #2815.
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
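In practice the change mostly looks like swapping the accessor, e.g. (illustrative, not a specific call site):

    #include <libavformat/avformat.h>

    static enum AVCodecID stream_codec_id(const AVStream *st)
    {
        return st->codecpar->codec_id;   // previously st->codec->codec_id
    }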
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
This also draws it after color management etc. In a nutshell, this
change makes the transparency checkerboard independent of upscaling,
panning, cropping etc. It will always be the same apparent size and
position (relative to the window).
It will also be independent of the video colorspace and such things.
(Note: This might cause white imbalance issues if playing a file with a
white point that does not match the display, in absolute colorimetric
mode. But that's uncommon, especially in conjunction with transparent
image files, so it's not a primary concern here)
Until now, we've let the windowing backend decide. But since they
usually require premultiplied alpha, and premultiplied alpha is easier
to handle, hardcode it.
The recent changes fixed rotation handling, but reversed the rotation
direction. The direction is expected to be counter-clockwise, because
demuxers export video rotation metadata as such.
This has been completely broken since commit 93546f0c. But even before,
rotation handling did not make too much sense. In particular, it rotated
the contents of the cropped image, instead of adjusting the crop
rectangle as well. The result was that things like panscan or zooming
did not behave as expected with rotation applied.
The same is true for vertical flipping. Flipping is triggered by
negative image stride. OpenGL does not support flipping the image on
upload, so it's done as part of the rendering. It can be triggered with
--vf=flip, but other filters and even decoders could setup negative
stride to flip the image.
Fix these issues by applying transforms to texture coordinates properly,
and by making rotation and flipping part of these transforms.
This still doesn't work properly for separated scaling. The issue is
that we'd have to adjust how the passes are done. For now, pick a very
stupid solution by rotating the image to a FBO, and then scaling from
that. This has the advantage that the scale logic doesn't have to be
complicated for such a rare case. It could be improved later.
Prescaling is apparently still broken. I don't know if chroma
positioning works properly either. None of this should affect the case
with no rotation.
gl_transform_vec() assumed column-major, while everything else seemed to
assume row-major memory organization for gl_transform.m. Also,
gl_transform_trans() seems to contain additional confusion.
This didn't matter until now, as everything has been orthogonal, thus
the swapped matrix entries were always 0.
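For clarity, the convention settled on here is row-major with the vector on the right, roughly like this sketch (not the actual gl_transform code):

    struct tr_sketch { float m[2][2], t[2]; };

    static void transform_vec(struct tr_sketch tr, float *x, float *y)
    {
        float x0 = *x, y0 = *y;
        *x = tr.m[0][0] * x0 + tr.m[0][1] * y0 + tr.t[0];
        *y = tr.m[1][0] * x0 + tr.m[1][1] * y0 + tr.t[1];
    }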
If the texture count is lower than 4, entries in va.textcoord[] will
remain uninitialized. While this is unlikely to be a problem (since
these values are unused on the shader side too), it's not nice and might
explain some things which have shown up in valgrind.
Fix by always initializing the whole thing.
Instead of reallocating almost all of the shader string several times
per pass, build it into a fixed buffer that will be reallocated as
needed.
While this still uses a linear search and full comparison of the shader
text, this will compare the shader's string length first before doing a
full comparison as a nice side effect. (That's also why the fragment
shader is compared first - it's more likely to be different for
different cache entries than the vertex shader stub.)
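The "length first" side effect is simply a property of comparing length-prefixed strings, e.g. (hypothetical helper):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct bstr_sketch { const char *start; size_t len; };

    static bool bstr_equal(struct bstr_sketch a, struct bstr_sketch b)
    {
        // Different lengths bail out before the (potentially long) memcmp.
        return a.len == b.len && memcmp(a.start, b.start, a.len) == 0;
    }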
The mp_set_av_packet()/mp_pts_from_av() functions check whether the
timebase is set at all (i.e. AVRational.num!=0), so there's no need to
fiddle with pointers.
Glitches when resizing are still possible, but are reduced. Other VOs
could support this too, but don't need to do so.
(Totally avoiding glitches would be much more effort, and probably not
worth the trouble. How about you just watch the video the player is
playing, instead of spending your time resizing the window.)
Until now, we have tried to create a GL 3.0 context. The main reason for
this is that many Mesa-based drivers did not support anything better.
But some drivers (Mesa AMD) will not report a higher OpenGL version,
because their compatibility mode is restricted. While later GL features
are reported as extensions just fine, there doesn't seem to be a way to
determine or enable higher GLSL versions.
Add some more shitty hacks to try to deal with this messed up situation,
and try to probe each interesting GL version separately (starting with
3.3, then 3.2 etc.). Other backends might suffer from similar problems,
but these will have to deal with it on their own.
Probably fixes #2938, or maybe not.
Prevents an infinite loop of configure event and set_fullscreen
request on Weston and other compositors respecting the protocol.
Fixes #2817
This reverts commit eb6b2b6e50.
This colorspace has been historically used as a calibration target for
most digital projectors and sees some involvement in the UltraHD
standards, so it's a useful addition to mpv.
converted_imgfmt will be used by the renderer logic to build an
appropriate shader chain. It doesn't influence the format of any
textures. Thus it doesn't matter whether the hw video surface is mapped
as RGB or RGBA. What matters is if the video actually contains alpha or
not. Since virtually all hardware decoders do not support alpha in any
way, this can be hardcoded as "no alpha".
This avoids unnecessary GPU work.
This also gets rid of the kind of hard to read texture swizzle setup and
turns it into something dumber.
Assumes that we don't create any FBOs with 2 channel formats. (Only the
video source textures are handled by this commit.)
Previously, gl->DXOpenDeviceNV was called twice using dxva2 with dxinterop. AMD
drivers refused to allow this. With this commit, context_dxinterop sets its own
implementation of MPGetNativeDisplay, which can return either a
IDirect3DDevice9Ex or a dxinterop_device_HANDLE depending on the "name" request
string. hwdec_dxva2gldx then requests both of these avoiding the need to call
gl->DXOpenDeviceNV a second time.
Drag&drop mechanisms typically support multiple types for the drop data.
Move most of the logic which types are accepted and preferred to
event.c, where the data is also interpreted.
(Maybe sorting the types by assigning scores is over-engineered, since
they're already sorted by preference, but it's actually not much more
code.)
Not very interesting/meaningful yet, but preparation for the next
commit.
Reduces VO access and makes the code more self-contained. (One day the
windowing backend code should not access the VO anymore. We're just not
quite there yet.)
Instead of displaying it only on playback start (or after switching
tracks), always display it even after a seek.
This helps with --lavfi-complex. You can now overlay e.g. audio
visualizations over cover art, and it won't break after a seek.
The downside is that this might make seeks with huge cover art slower.
There is also a glitch on seeking: since cover art pictures always
have timestamp 0, the playback time will be 0 for a moment after seek,
and then revert to audio PTS (as video is considered EOF). This is also
due to how lavfi's overlay filter behaves. (I'm not sure how to tell
lavfi that it's just a single frame.)
Like dxinterop, this uses StretchRect or RGB conversion. This is unavoidable as
long as we use the dxva2 API, as there is no way to access the raw hardware
decoded Direct3D9 surfaces.
The default of 1.0 was basically making half the algorithm do nothing,
since it turned off all diagonal contributions. The upstream default is
0.6, and this produces a more reasonable image.
The values were changed to reflect an upstream change in the source for
the super-xBR implementation.
The anti-ringing code was basically not working at all, the new
algorithm _significantly_ improves the result (reduces ringing).
This is a fresh implementation from scratch that carries with it
significantly less baggage and verbosity from the previous (ported)
version.
The actual values for the masks and such were copied from the
current code. Behavior and performance should be unaffected.
An important difference between the old code and the new code is that
the new code always explicitly samples from the first component, rather
than being able to process multiple planes at once.
Since prescale-luma only affects luma, I deemed this unnecessary. May
change in the future, if prescale-chroma ever gets implemented. But
prescaling multiple planes would be slow to do this way. (Better would
be to generalize it to differently-sized vectors)
Deselecting cover art and then reselecting it did not work. The second
time the cover art picture is not displayed again. (This seems to break
every other month...)
The reason is commit 6640b22a. It mutates the input packet. And it is
correct that we don't own d_video->header->attached_picture at this
point. Fix it by creating a new packet reference.
Instead of hard-coding the logic and planes to skip, factor this out
to a reusable function, and instead add the number of relevant
coordinates to the texture state.
Since prescale now literally only affects the luma plane (and the
filters are all designed for luma-only operation either way), the option
has been renamed and the documentation updated to clarify this.
This is a pretty major rewrite of the internal texture binding
mechanic, which makes it more flexible.
In general, the difference between the old and current approaches is
that now, all texture description is held in a struct img_tex and only
explicitly bound with pass_bind. (Once bound, a texture unit is assumed
to be set in stone and no longer tied to the img_tex)
This approach makes the code inside pass_read_video significantly more
flexible and cuts down on the number of weird special cases and
spaghetti logic.
It also has some improvements, e.g. cutting down greatly on the number
of unnecessary conversion passes inside pass_read_video (which was
previously mostly done to cope with the fact that the alternative would
have resulted in a combinatorial explosion of code complexity).
Some other notable changes (and potential improvements):
- texture expansion is now *always* handled in pass_read_video, and the
colormatrix never does this anymore. (Which means the code could
probably be removed from the colormatrix generation logic, modulo some
other VOs)
- struct fbo_tex now stores both its "physical" and "logical"
(configured) size, which cuts down on the amount of width/height
baggage on some function calls
- vo_opengl can now technically support textures with different bit
depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries
inside img_format.c don't export this (nor does ffmpeg support it,
really) so the status quo of using the same tex_mul for all planes is
kept.
- dumb_mode is now only needed because of the indirect_fbo being in the
main rendering pipeline. If we reintroduce p->use_indirect and thread
a transform through the entire program this could be skipped where
unnecessary, allowing for the removal of dumb_mode. But I'm not sure
how to do this in a clean way. (Which is part of why it got introduced
to begin with)
- It would be trivial to resurrect source-shader now (it would just be
one extra 'if' inside pass_read_video).
Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
Hr-seek was often off by one frame due to rounding issues, which have
traditionally been taken care of by adding a "tolerance". Essentially,
frames very close to the seek target PTS are not dropped, even if they
are strictly before the seek target.
Commit 0af53353 accidentally removed this by always removing frames even
if they're within the "tolerance". Fix this by "unsharing" the logic and
making sure the segment code is inactive for normal seeks.
Why was this done so stupidly, with so many complicated special cases,
before? Declare it once so the shader bits don't have to figure out where
and when to do so themselves.
Doing --hwdec=auto ends up picking dxva2, creating a decoder, and then
sending D3D frames down the video chain, which immediately fails and
falls back to software.
Consider dxva2 only if the VO provides a context. If this fails,
autoprobing will proceed to try dxva2-copy as usual.
Fixes #2844.
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
packet->buffer += ret;
packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
There is some strange code which sets the DTS of the packet to PTS (but
only if it's not AVI), which apparently helps with timestamp
determination with some broken files. This code is annoying because it
tries to avoid mutating the packet (which it logically doesn't own).
Move it to a place where it does own the packet, and get rid of the
packet_copy mess.
Needed for the following commit.
This tries to determine whether packet PTS values are accurate and can
be used for frame dropping during seeking. Move both checks (PTS is
missing; PTS is non-monotonic) to the earliest place where they can be
done.
The WGL_NV_DX_interop spec says that a shared IDirect3DSurface9 must not
be lockable, but off-screen plain surfaces are always lockable and using
them causes Nvidia drivers to crash. Use a rendertarget for the shared
surface instead.
This also changes the name of the DX_interop handle for the rendertarget
to match the name of the DirectX object (rather than the GL one) to
match the convention used in context_dxinterop.c.
Apple crap (namely hardware decoding interop) forces us to use rectangle
textures for input. But after that we continue with normal textures.
This was not considered for debanding, and the sampler type used for it
can be different depending on the exact render chain. Simply use the
target type of the input texture.
* use mp_HRESULT_to_str/mp_LastError_to_str
* make some messages non-identical
* replace "GL" -> "OpenGL"
* change some MP_FATAL to MP_ERR that don't actually kill the vo
Apparently, some drivers require you to allocate all of the decoder d3d surfaces
at once. This commit changes the strategy from allocating surfaces as needed via
mp_image_pool_set_allocator, to allocating all the surfaces in one call to
IDirectXVideoDecoderService_CreateSurface and adding them to the pool with
mp_image_pool_add.
fixes #2822
Provide a way for the user to add mp_images to the pool. This is required for
dxva2, for which using set_allocator is extremely awkward since all the d3d9
surfaces must be allocated in advance and all together.
I mistakenly copied the wrong license text into these files when
I created them. Since I'm the only one to have touched these files,
it should be OK to change them.
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
concatenating files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanism to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
A client API user is allowed to call mpv_opengl_cb_uninit_gl() followed
by mpv_opengl_cb_init_gl(). This crashed; fix it by fixing the lifetime
of ctx->gl.
This is required so that the individual surfaces can pass beyond the dxva2
decoder and be passed to the vo.
This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2, which is
required for maintaining and releasing the surface even if the decoder code is
uninited.
The IDirectXVideoDecoder itself is encapsulated together with its surface pool
and configuration in a dxva2_decoder structure whose creation and destruction is
managed by talloc.
Don't allow rounding to let it underflow to 0. 0 width or height is
simply not allowed and could cause problems elsewhere.
Indirectly fixes CID 1350057, which complains about not checking the
resulting output size values before using it in divisions.
It thinks that integer_conv_fbo[index] is implied to be accessed with up
to index=5. Although that is theoretical only, it has a point that this
makes no sense. Use the same constant for the array allocation, to make
it more uniform and robust.
Fixes CID 1350060.
Until now (and in mplayer traditionally), avi timestamps were handled
with a timestamp FIFO. AVI timestamps are essentially just strictly
increasing frame numbers and are not reordered like normal timestamps.
Limiting the FIFO is required because frames can be dropped. To make
it worse, frame dropping can't be distinguished from the decoder not
returning output due to increasing the buffering required for B-frames.
("Measuring" the buffering at playback start seems like an interesting
idea, but won't work as the buffering could be increased mid-playback.)
Another problem are skipped frames (packets with data, but which do
not contain a video frame).
Besides dropped and skipped frames, there is the problem that we can't
always know the delay. External decoders like MMAL are not going to
tell us. (And later perhaps others, like direct VideoToolbox usage.)
In general, this works poorly enough that I prefer the solution of
passing through AVI timestamps as DTS. This is slightly incorrect,
because most decoders treat DTS as mpeg-style timestamps, which
already include a b-frame delay, and thus will be shifted by a few
frames. This means there will be a problem with A/V sync in some
situations.
Note that the FFmpeg AVI demuxer shifts timestamps by an additional
amount (which increases after the first seek!?!?), which makes the
situation worse. It works well with VfW-muxed Matroska files, though.
On RPI, the first X timestamps are broken until the MMAL decoder "locks
on".
The ctx->redrawing field signals whether flip_page() should block. Do
not block if a black frame (i.e. nothing) is to be rendered.
Also, frame==NULL can never happen.
fd339e3f53 introduced a regression that caused a segfault while
uninitializing the dxva2 decoder (and possibly vdpau too). The problem was
that it freed the avctx earlier, before calling the backend-specific uninit
which referenced it.
Revert some of the changes of that commit, and avoid calling flush by
checking whether the codec is open instead.
(Based on a PR by Kevin Mitchell.)
Signed-off-by: wm4 <wm4@nowhere>
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
This makes it possible to set video size and position using the
--geometry and/or --autofit options. It's also possible to switch
between fullscreen/non-fullscreen playback during runtime.
It can be "dangerous". In particular, the decoder might have failed to
initialize, and is now in a broken state. avcodec_flush_buffers() is not
expected to be called in this state, and could trigger undefined
behavior.
Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
if the input timestamp was NOPTS.
Don't do it for other decoders. Ideally, we will at some point in the
future switch to integer fractions for timestamps at least up until the
filter layer. But this would be a larger change, and for now I'd prefer
keeping the not-rounded demuxer timestamps (if we have them).
This is the "unicode" version of it. It appears Firefox uses it now?
I'm not sure if we still need to support the old variant, but hopefully
not.
Fixes #2782.
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
Since there can be multiple backends for a single API (vaapi can use GLX
or EGL), not logging the exact backend name is annoying. So add it. At
the same time, there is no need to duplicate the name as used by the
--hwdec options, so replace it with using the numeric hwdec API ID.
Commit b53cb8de added a delay queue for decoded frames. This is supposed
to be used with copy-back decoders like dxva2-copy and vaapi-copy.
Surfaces returned by them can't be referenced after uninitializing the
decoders, so they have to be released before destroying the decoder.
Move the flush_all() call above decoder uninit accordingly. Also move
the destruction of the AVFrame used for decoding (just for being
defensive - normally it doesn't hold any reference).
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
Don't give the "software_fallback_decoder" field special meaning. Alwass
set it, and rename it to "decoder". Whether hw decoding is used is
determined by the "hwdec" field already.
This code tries to deal with broken PTS timestamps, but since commit
271cabe6 it didn't always overwrite the previous timestamp as it should
have. This mattered only if there were broken timestamps in the video
stream.
Also remove the pointless prev_codec_pts variables, since the decoder
doesn't overwrite these fields anymore.