This was _always_ called, even if the resampling was static, or the
filter was inserted for format conversion only. This should have been
fine, as I expected the function not to enable resampling when the
compensation is unset, and the source/target rates are the same. But
this is not the case, and it always enables resampling.
So explicitly avoid the call. If we have already called it successfully,
it's better not to avoid it (so the previous compensation value gets
overwritten), but it will also be cheap/a no-op then.
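A minimal sketch of the intended guard, using libswresample's
swr_set_compensation() (the struct and field names here are illustrative,
not mpv's actual ones):

    // Skip swr_set_compensation() entirely unless dynamic resampling is
    // actually wanted, or a previous call already enabled it.
    if (s->compensation_delta || s->applied_compensation) {
        if (swr_set_compensation(s->avrctx, s->compensation_delta,
                                 s->compensation_distance) >= 0)
            s->applied_compensation = true;
    }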
Probably fixes #4716.
Commit 0e0b87b6f3 fixed that dropped packets did not trigger further
work correctly. But it also made trivial --lavfi-complex freeze. The
reason is that the meaning of DATA_AGAIN was overloaded: the decoders
meant that they should be called again, while lavfi.c meant that other
outputs needed to be checked again. Rename the latter meaning to
DATA_STARVE, which means that the current input will deliver no more
data, until "other" work has been done (like reading other outputs, or
feeding input).
The decoders never return DATA_STARVE, because they don't get input from
the player core (instead, they get it from the demuxer directly, which
is why they still can return DATA_WAIT).
Also document the DATA_* semantics in the enum.
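A rough sketch of the documented semantics (the names come from this
commit; the exact values and comments in the real enum may differ):

    enum {
        DATA_OK     =  1, // a data item was produced
        DATA_WAIT   =  0, // wait for an external event (e.g. demuxer wakeup)
        DATA_AGAIN  = -1, // call this producer again immediately
        DATA_STARVE = -2, // no more data from this input until "other" work
                          // is done (reading other outputs, feeding input)
        DATA_EOF    = -3, // no more data, ever
    };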
Fixes #4746.
Retrieve the depth for each component and internal texture format
separately. Only for 8 bit per component textures do we assume that all
bits are used (or else we would, in my opinion, create too many probe
textures).
Assuming 8 bit components always use the full depth also fixes operation
in GLES3, where we previously assumed that each component had -1 bits
depth, and thus all UNORM formats were considered unusable. On GLES, the
function to
check the real bit depth is not available. Since GLES has no 16 bit
UNORM textures at all, except with the MPGL_CAP_EXT16 extension, just
drop the special condition for it. (Of course GLES still manages to
introduce a funny special case by allowing GL_LUMINANCE, but not
defining GL_TEXTURE_LUMINANCE_SIZE.)
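Roughly, the per-component probe looks like this on desktop GL (on GLES
the query is unavailable, hence the 8 bit assumption); probe_tex is a
hypothetical probe texture:

    GLint bits = 0;
    glBindTexture(GL_TEXTURE_2D, probe_tex);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &bits);
    // ...and likewise with GL_TEXTURE_GREEN_SIZE etc. for the other
    // components of the internal format.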
Should fix #4749.
Runtime untested, because I get this:
[vo/rpi] Could not get DISPMANX objects.
This happened even when building older git versions, and on a RPI image
that hasn't changed in the recent years. I don't know how to make this
POS work again, so I guess if there's a bug in the new code, it will
remain broken.
This does two separate rather intrusive things:
1. Make the hwdec context (which does initialization, provides the
device to the decoder, and other basic state) and frame mapping
(getting textures from a mp_image) separate. This is more
flexible, and you could map multiple images at once. It will
help removing some hwdec special-casing from video.c.
2. Switch all hwdec API use to ra. Of course all code is still
GL specific, but in theory it would be possible to support other
backends. The most important change is that the hwdec interop
returns ra objects, instead of anything GL specific. This removes
the last dependency on GL-specific header files from video.c.
I'm mixing these separate changes because both require essentially
rewriting all the glue code, so better do them at once. For the same
reason, this change isn't done incrementally.
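A simplified sketch of the split from point 1 (the real interop structs
have more fields and driver callbacks; this only shows the context vs.
per-image mapper separation):

    // Long-lived interop context: does initialization and provides the
    // hw device to the decoder.
    struct ra_hwdec {
        struct ra *ra;
        void *priv;
    };

    // Per-image mapping state: turns one mp_image into ra_tex planes.
    // Multiple mappers (and thus mapped images) can exist at once.
    struct ra_hwdec_mapper {
        struct ra_hwdec *owner;
        struct mp_image *src;   // currently mapped hw image
        struct ra_tex *tex[4];  // one texture per plane
        void *priv;
    };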
hwdec_ios.m is untested, since I can't test it. Apart from superficial
mistakes, this also requires dealing with Apple's texture format
fuckups: they force you to use GL_LUMINANCE[_ALPHA] instead of GL_RED
and GL_RG. We also need to report the correct format via ra_tex to
the renderer, which is done by find_la_variant(). It's unknown whether
this works correctly.
hwdec_rpi.c as well as vo_rpi.c are still broken. (I need to pull my
RPI out of a dusty pile of devices and cables, so, later.)
Apparently this was broken by the "ctx->hwdec" check in the if condition
guarding the destroy call: "ctx->hwdec = NULL;" was moved up earlier,
which made the destroy call always dead code.
This should probably be refcounted or so, although that could make it
worse as well. For now, add a flag for whether the device should be
destroyed.
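The idea as a generic sketch (the struct, field, and destroy call are
illustrative, not the actual code):

    struct hwdec_ctx {
        void *device;
        bool owns_device;   // whether this context created the device
    };

    static void uninit(struct hwdec_ctx *ctx)
    {
        // Only destroy the device if we created it ourselves.
        if (ctx->owns_device && ctx->device)
            destroy_device(ctx->device);    // hypothetical destroy call
        ctx->device = NULL;
    }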
Fixes #4735.
Less flexible than GL_TIMESTAMP but supported by more platforms. This
will mean that nested queries have to be detected and silently omitted,
but oh well. Not much use for them anyway.
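Conceptually, the nesting detection could look like this (a sketch, not
mpv's actual timer code):

    // GL_TIME_ELAPSED queries cannot nest, so silently drop a query if
    // one is already active.
    static bool query_active;

    static void timer_begin(GLuint query)
    {
        if (query_active)
            return;                 // nested query: silently omitted
        glBeginQuery(GL_TIME_ELAPSED, query);
        query_active = true;
    }

    static void timer_end(void)
    {
        if (!query_active)
            return;
        glEndQuery(GL_TIME_ELAPSED);
        query_active = false;
    }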
Fixes #4721.
This reverts commit 0ce3dce03a.
We actually explicitly add read-only frames in some of the hwaccel code
via mp_image_new_custom_ref(), which sets AV_BUFFER_FLAG_READONLY. It's
probably better to keep it this way.
Fixes #4652 (and some related issues with D3D).
This is really obnoxious. --include parses into the default profile, but
when used on the command line, it never got applied. So we have to
apply it when the exact conditions for this are met.
Fixes #4673.
If --demuxer-mkv-probe-start-time=no is used, and a seek is triggered on
start, then cluster_start will be 0, and the packet reading code will
print an error message about not finding valid data. This fixes itself
since it invokes the resync code, but it's still pretty ugly. Avoid this
by always initializing cluster_start.
Just the audio resync code in its normal state: buggy. This time,
AD_NO_PROGRESS was handled about the same as AD_WAIT. But AD_NO_PROGRESS
means the decoder didn't output data even though input is still readily
available, so treating it like AD_WAIT can stall decoding.
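The distinction, as a sketch (the enum and helper are illustrative, not
the actual decoder glue code):

    enum { AD_OK, AD_WAIT, AD_NO_PROGRESS };    // values illustrative only

    // Whether the caller should immediately run the decoder again.
    static bool should_retry_now(int status)
    {
        switch (status) {
        case AD_WAIT:
            return false;   // no input at all; wait for the demuxer
        case AD_NO_PROGRESS:
            return true;    // input present, but no output yet: retry now
        default:
            return false;
        }
    }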
This happened in particular when the timeline code was used (potentially
skipping many packets), and thus should fix #4688.
It's an ancient X11 protocol extension that apparently nobody uses
anymore (desktop environments in particular have replaced it with
equally bad protocols that require tons of dependencies). Users keep
complaining about it being a required dependency.
The impact is likely minimal to none.
Fixes #4706 and other annoying people.
Fixes #4626. Previously removed because the original smi entry was added
by someone who did not agree to LGPL relicensing. I'm not sure if the
original change was copyrightable, but this commit for sure does not
fall under that author's copyright.
Any bad HRESULTs should have been printed already, and lots of failure
modes don't have an HRESULT, leading to awkward hr = E_FAIL business.
This also checks the exit status of GetBufferSize in the align hack. A
final fatal message is added if either of the retry hacks fails.
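For the GetBufferSize part, the added check is roughly this (the state
field and the exit label are illustrative):

    UINT32 bufferFrameCount;
    hr = IAudioClient_GetBufferSize(state->pAudioClient, &bufferFrameCount);
    if (FAILED(hr))
        goto exit_label;    // previously the align hack ignored this result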
Generic description of pixel formats is hard. In this case, the Apple
special format for packed YUV could have been interpreted as an RGB
format with funny packing.
Move multiple GL-specific things from the renderer to other places like
vo_opengl.c, vo_opengl_cb.c, and ra_gl.c.
The vp_w/vp_h parameters to gl_video_resize() make no sense anymore, and
are implicitly part of struct fbodst.
Checking the main framebuffer depth is moved to vo_opengl.c. For
vo_opengl_cb.c it always assumes 8. The API user now has to override
this manually. The previous heuristic didn't make much sense anyway.
The only remaining dependency on GL is the hwdec stuff, which is harder
to change.
Completely unnecessary; we can just update the uniforms immediately
after creating the program. In theory, for GLSL 4.20+, we could even
skip this, but oh well.
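For example, a sampler uniform can be set once, right after the program
is linked (a sketch; "texture0" and the unit number are placeholders).
With GLSL 4.20+ the same could be expressed with layout(binding = N) in
the shader instead:

    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "texture0");
    if (loc >= 0)
        glUniform1i(loc, 0);    // bind the sampler to texture unit 0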
The vp_w/vp_h variables and parameters were not really used anymore
(they were redundant with ra_tex w/h) - but vp_h was still used to
identify whether rendering should be done mirrored.
Simplify this by adding an fbodst struct (some bad naming), which
contains the render target texture and some parameters for how it should
be rendered to (for now only flipping). It would not be appropriate to make
this a member of ra_tex, so it's a separate struct.
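Roughly, the struct looks like this (a sketch; the actual fields may
differ slightly):

    struct fbodst {
        struct ra_tex *tex;     // the render target texture
        int flip;               // +1 = normal, -1 = render vertically flipped
    };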
Introduces a weird regression for the first frame rendered after
interpolation is toggled at runtime, but seems to work otherwise. This
is possibly due to the change that blit() now mirrors, instead of just
copying. (This is also why ra_fns.blit is changed.)
Fixes #4719.
When using dumb mode, we can actually redraw a frame without uploading
it. Marking this as fresh as well results in unpredictable pass
behavior, which is confusing and makes debugging harder. So mark it as a
redraw instead, in that case.