Manual changes done:
* Merged the interface changes under the changes already in master.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
This led to an unexpected videotoolbox-copy hwdec name due to the last
two chars being cut off. Since selection is also done by that name, one
had to use "videotoolbox-co" to explicitly use the copy mode of
videotoolbox.
The default get_format already does exactly this, so we don't need to
duplicate it.
The only potential problem with this is that the logic doesn't entirely
prevent the avcodec_default_get_format hw_device_ctx path from being
triggered, which would probably work, but has unknown consequences and
interactions. But the way the logic currently works it can't happen,
provided the hwaccel metadata libavcodec provides is correct.
The --hwdec* options are a good fit for the vd_lavc local option
struct. This annoyingly requires manual prefixing of most of these
options with --vd-lavc (could be avoided by using more sub-struct
craziness, but let's not).
This includes codec/muxer/demuxer iteration (different iteration
function, registration functions deprecated), and the renaming of
AVFormatContext.filename to url (plus making it a malloced string).
Libav doesn't have the new API yet, so it will break. I hope they will
add the new APIs too.
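As a rough illustration (not mpv's code), the new iteration API looks
like this; av_codec_iterate() and av_demuxer_iterate() replace the
deprecated register/next functions:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    // Enumerate decoders with the new iteration API (no av_register_all()
    // or avcodec_register_all() needed anymore).
    static void list_decoders(void)
    {
        void *iter = NULL;
        const AVCodec *codec;
        while ((codec = av_codec_iterate(&iter))) {
            if (av_codec_is_decoder(codec))
                printf("decoder: %s\n", codec->name);
        }
    }

    // Demuxers (and muxers, via av_muxer_iterate()) are iterated the same way.
    static void list_demuxers(void)
    {
        void *iter = NULL;
        const AVInputFormat *fmt;
        while ((fmt = av_demuxer_iterate(&iter)))
            printf("demuxer: %s\n", fmt->name);
    }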
Also a regression of the filter change. The new code is more picky about
EOF states, and it turns out the weird delay queue (used with some hwdec
copy back modes only) accidentally dropped an EOF event. It reset the
avctx before the delay queue was drained, which meant it never returned
the expected AVERROR_EOF status code.
Also don't signal EOF when copy back fails. It should just try to
continue until fallback is performed.
This is a dataflow issue caused by the filters change. When the fallback
happens, vd_lavc does not return a frame, but also does not accept a new
packet, which confuses lavc_process(). Fix this by immediately retrying
to feed the buffered packet and decode a frame on fallback.
Fixes #5489.
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.
One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().
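Roughly, the connection looks like the sketch below. mp_pin_connect()
is the real filter-framework call; the pin accessors are hypothetical
placeholders, since the actual code obtains the pins differently:

    // Sketch only: connect the decoder wrapper's output to the filter
    // chain's input. get_output_pin()/get_input_pin() are made-up helpers.
    static void wire_up(struct mp_filter *decoder_wrapper,
                        struct mp_filter *filter_chain)
    {
        struct mp_pin *src = get_output_pin(decoder_wrapper); // hypothetical
        struct mp_pin *dst = get_input_pin(filter_chain);     // hypothetical
        mp_pin_connect(dst, src); // the framework now moves frames for us
    }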
Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.
Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.
I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
I found that at least for mjpeg streams, FFmpeg will set packet pts/dts
anyway. The mjpeg raw video demuxer (along with some other raw formats)
has a "framerate" demuxer option which defaults to 25, so all mjpeg
streams will be played at 25 FPS by default.
mpv doesn't like this much. If AVFMT_NOTIMESTAMPS is set, it prints a
warning, which might show a bogus FPS value for the assumed framerate.
The code was originally written with the assumption that FFmpeg would
not set pts/dts for such formats, but since it does, the printed
estimated framerate will never be used. --fps will also not be used by
default in this situation.
To make this hopefully less confusing, explicitly state the situation
when the AVFMT_NOTIMESTAMPS flag is set, and give instructions on how to
work around it.
Also, remove the warning in dec_video.c. We don't know what FPS it's
going to assume anyway. If there are really no timestamps in the stream,
it will trigger our normal missing pts workaround. Add the assumed FPS
there.
In theory, we could just clear packet timestamps if AVFMT_NOTIMESTAMPS
is set, and make up our own timestamps. That is non-trivial for advanced
video codecs like h264, so I'm not going there. For seeking and
buffering estimation the situation thus remains half-broken.
This is a mitigation for #5419.
Remove the max_count creation parameter, because it's pointless and
rarely ever did anything. Add a talloc parent parameter instead (which
is something completely different, but convenient, and all callers need
to be changed anyway).
Instead of clearing the pool when the now removed maximum is reached,
clear it on image parameter changes instead.
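A minimal sketch of the changed usage, with the exact prototypes treated
as assumptions (struct dec_ctx and its fields are made up for
illustration):

    // Assumed: mp_image_pool_new() now takes a talloc parent; the pool is
    // cleared on image parameter changes instead of at a maximum count.
    static void ensure_pool(struct dec_ctx *ctx, struct mp_image_params *p)
    {
        if (!ctx->pool)
            ctx->pool = mp_image_pool_new(ctx); // ctx acts as talloc parent
        if (!mp_image_params_equal(&ctx->pool_params, p)) {
            mp_image_pool_clear(ctx->pool);     // drop cached images on change
            ctx->pool_params = *p;
        }
    }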
If feed_packet() ended with DATA_WAIT, the player should have gone to
sleep, until the demuxer wakes it up again when there is new data. But
the call to read_frame() unconditionally overwrote this status code, so
it never waited. The consequence was that the core burned CPU by
effectively polling the demuxer status, which was noticeable especially
when seeking in network streams (since seeking is async, decoders will
start out with having to wait for network).
Regression since commit 33e5755c.
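The shape of the fix is roughly the following (an illustrative sketch;
feed_packet()/read_frame() and the surrounding names are simplified
stand-ins for the real code):

    // Don't let read_frame() clobber a DATA_WAIT from feed_packet(): if the
    // decoder is waiting for the demuxer, return that status so the core
    // actually sleeps until the demuxer wakes it up.
    static int decode_step(struct track_ctx *ctx) // hypothetical context
    {
        int status = feed_packet(ctx);
        if (status == DATA_WAIT)
            return status;       // go to sleep; no point polling the decoder
        return read_frame(ctx);
    }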
The old code tried to make sure at all times to try to read a new
packet. Only once that was read, it tried to retrieve new video or audio
frames the decoder might already have decoded.
Change this to strictly read frames from the decoder until it signals
that it wants a new packet, and only then read and feed a new packet.
This is in theory nicer, follows the libavcodec recommended data flow,
and reduces the minimum latency by 1 frame.
This merely requires switching the order in which those calls are done.
Normally, the decoder will return only 1 frame until a new packet is
required. If we would just feed it 1 packet, return DATA_AGAIN, and wait
until the next frame is decoded, we would run the playloop 1 time too
often for no reason (which is fine but might have some overhead). To
avoid this, try to read a frame again after possibly feeding a packet.
For this reason, move the feed/read code to its own functions each,
instead of merely moving the code.
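For reference, this is roughly the libavcodec-recommended data flow the
new code follows (a generic sketch, not mpv's actual implementation):

    #include <libavcodec/avcodec.h>

    // Returns 1 if a frame was decoded, 0 if another packet is needed,
    // or a negative AVERROR (including AVERROR_EOF) otherwise.
    static int decode_step(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame)
    {
        // First drain any frame the decoder already has buffered.
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret == 0)
            return 1;
        if (ret != AVERROR(EAGAIN))
            return ret;                 // AVERROR_EOF or a real error

        // Decoder wants input: feed the packet, then immediately retry
        // receiving, so we don't run the playloop once per frame for nothing.
        ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0)
            return ret;
        ret = avcodec_receive_frame(avctx, frame);
        if (ret == 0)
            return 1;
        return ret == AVERROR(EAGAIN) ? 0 : ret;
    }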
The audio and video code for this particular thing is basically
duplicated. The idea is to unify them one day, so make the change to
both. (Doing this for video is the real motivation for this change, see
below.)
The video code change is slightly more complicated, because we have to
care about the framedrop counting (which is just a heuristic, but for
now considered better than nothing, and possibly considered required to
warn the user of framedrops happening - maybe).
Apparently this change helps with stalling streams on Android with the
mediacodec wrapper and mpeg2 decoder implementations which deinterlace on
decoding (and return 2 frames per packet).
Based on an idea and observations by tmm1.
This fixes resuming certain broken h264 files encoded by x264. See
FFmpeg commit 840b41b2a643fc8f0617c0370125a19c02c6b586 about the x264
bug itself.
Normally, the unregistered user data SEI (that contains the x264 version
string) is informational only. But libavcodec uses it to workaround a
x264 bug, which was recently fixed in both libavcodec and x264. The fact
that both encoder and decoder were buggy is the reason that it was not
found earlier, and there are apparently a lot of files around created by
the broken encoder. If libavcodec sees the SEI, this bug can be worked
around by using the old behavior.
If you resume a file with mpv (i.e. seeking when the file loads),
libavcodec never sees the first video packet. Consequently it has to
assume the file is not broken, and never applies the workaround,
resulting in garbage being played.
Fix this by always feeding the first video packet to the decoder on
init, and then flushing the codec (to avoid outputting an unwanted
image). Flushing the codec does not remove info such as the x264
version. We also abuse the fact that the first avcodec_send_packet()
always pushes the frame into the decoder (so we don't have to trigger
the decoder by requesting an output frame).
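In code, the trick amounts to something like this (a sketch of the
idea, not the literal vd_lavc.c change):

    #include <libavcodec/avcodec.h>

    // Push the first video packet into the decoder at init so libavcodec
    // sees the x264 version SEI, then flush so no unwanted frame is output.
    // Flushing does not discard the SEI-derived workaround state.
    static int prime_with_first_packet(AVCodecContext *avctx, AVPacket *first_pkt)
    {
        // The first avcodec_send_packet() always accepts the packet, so we
        // don't need to request an output frame to push it through.
        int ret = avcodec_send_packet(avctx, first_pkt);
        if (ret < 0)
            return ret;
        avcodec_flush_buffers(avctx);
        return 0;
    }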
Technically, the user could just use --vd-lavc-o with the same result.
But I find it better to make this an explicit option, so we can document
the ups and downs, and also avoid setting it for non-h264.
A release has been made, so drop options deprecated for that release.
Also drop some options which have been deprecated a much longer time
before.
Also fix a typo in client-api-changes.rst.
Libav has been broken due to the hwdec changes. This was always a
temporary situation (depended on pending patches to be merged), although
it took a bit longer. This also restores the travis config.
One code change is needed in vd_lavc.c, because it checks the AV_PIX_FMT
for videotoolbox (as opposed to the mpv format identifier), which is not
available in Libav. Add an ifdef; the affected code is for a deprecated
option anyway.
I've decided that MP_TRACE means “noisy spam per frame”, whereas
MP_DBG just means “more verbose debugging messages than MSGL_V”.
Basically, MSGL_DBG shouldn't create spam per frame like it currently
does, and MSGL_V should make sense to the end-user and provide mostly
additional informational output.
MP_DBG is basically what I want to make the new default for --log-file,
so the cut-off point for MP_DBG is whether we probably want to know it
for debugging purposes, while the user most likely doesn't care about it
on the terminal.
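To illustrate the intended cut-off points (the messages below are made
up; MP_TRACE/MP_DBG/MP_VERBOSE are the usual mpv logging macros):

    MP_TRACE(vd, "feeding packet, pts=%f\n", pkt_pts);    // per-frame spam
    MP_DBG(vd, "hwdec fallback triggered\n");             // --log-file material
    MP_VERBOSE(vd, "using decoder: %s\n", codec->name);   // meaningful to users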
Also, the debug callbacks for libass and ffmpeg got bumped in their
verbosity levels slightly, because being external components they're a
bit less relevant to mpv debugging, and a bit too over-eager in what
they consider to be relevant information.
I exclusively used the "try it on my machine and remove messages from
MSGL_* until it does what I want it to" approach of refactoring, so
YMMV.
Annoying exception that makes no sense to keep. Normally, users or
client applications will either use --hwdec=auto, or not set the option
at all, which both leads to the expected result.
For METHOD_INTERNAL hwdecs (non-copy cases), make sure the VO interops
are always loaded, because those decoders will output hardware pixel
formats, which will need special support in vo_gpu. Otherwise,
initialization will fail, complaining that it can't convert the output
format to something the VO supports.
If the codec uses AV_CODEC_HW_CONFIG_METHOD_INTERNAL, and we're using
the -copy method, then don't request the native pix_fmt. It might not
have an AVFrame.hw_frames_ctx set, in which case we couldn't read back at all. On
top of that, most of those decoders probably don't provide read-back
when using such opaque formats anyway, while providing separate decoding
modes to decode to RAM.
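A generic sketch of the check this implies, using the public libavcodec
hwaccel metadata (not mpv's exact logic):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Return true if the codec's hwaccel for hw_fmt is an "internal" one,
    // i.e. the decoder outputs opaque hw frames on its own, in which case a
    // -copy mode should not request the native pix_fmt.
    static bool uses_internal_method(const AVCodec *codec,
                                     enum AVPixelFormat hw_fmt)
    {
        for (int i = 0; ; i++) {
            const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
            if (!cfg)
                break;
            if (cfg->pix_fmt == hw_fmt &&
                (cfg->methods & AV_CODEC_HW_CONFIG_METHOD_INTERNAL))
                return true;
        }
        return false;
    }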
This code is for trying to avoid using an emulation layer when using
auto probing, so that we end up using the actual API the drivers
provide. It was destroyed in the recent refactor.
Otherwise, if e.g. "nvdec" didn't work, but "nvdec-copy" did, it would
never try "vdpau", which is actually the next non-copy mode on the
autoprobe list. It's really expected that it selects "vdpau". Fix this by
sorting the -copy modes to the end of the final hwdec list.
But we still don't want preferred -copy modes like "nvdec-copy" to be
sorted after fragile non-preferred modes like "cuda", and --hwdec=auto
should prefer "nvdec-copy" over it, so make sure the copying mode does
not get precedence over preferred vs. non-preferred mode.
Also simplify the existing auto_pos sorting condition, and fix the
fallback sort order (although that doesn't matter too much).
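The resulting ordering rule, as an illustrative comparator (the struct
and field names are assumptions, not the actual code):

    // Preference rank wins first; only within the same rank do -copy modes
    // get sorted after the corresponding non-copy modes.
    static int cmp_hwdec_entry(const void *pa, const void *pb)
    {
        const struct hwdec_entry *a = pa, *b = pb; // hypothetical struct
        if (a->auto_pos != b->auto_pos)
            return a->auto_pos - b->auto_pos;      // preferred before others
        return (int)a->copying - (int)b->copying;  // non-copy before -copy
    }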
Change it from explicit metadata about every hwaccel method to trying to
get it from libavcodec. As shown by add_all_hwdec_methods(), this is
quite a bumpy road, and a bit worse than expected.
This will probably cause a bunch of regressions. In particular I didn't
check all the strange decoder wrappers, which all cause some sort of
special cases each. You're volunteering for beta testing by using this
commit.
One interesting thing is that we completely get rid of mp_hwdec_ctx in
vd_lavc.c, and that HWDEC_* mostly goes away (some filters still use it,
and the VO hwdec interops still have a lot of code to set it up, so it's
not going away completely for now).
The libavcodec mediacodec support does not conform to the new hwaccel
APIs yet. It has been agreed upon that this glue code can be deleted
for now, and support for it will be restored at a later point.
Re-adding it would require that it supports the AVCodecContext.hw_device_ctx
API. The hw_device_ctx would then contain the surface ID.
vo_mediacodec_embed would actually perform the task of creating
vo.hwdec_devs and adding a mp_hwdec_ctx, whose av_device_ref is an
AVHWDeviceContext containing the android surface.
It makes more sense to have it in the general video directory (along
with vdpau.c and vaapi.c), since the decoder source files don't even
access it anymore.
Like with all hwaccels, there's little that is actually specific to
decoding (which has been moved away anyway), and what is left are
declarations (which will also go away soon).
Lots of shit code for nothing. We probably could just use libavutil's
code for all of this. But for now go with this, since it tends to
prevent stupid terminal messages during probing (libavutil has no
mechanism to selectively suppress errors specifically during probing).
Ignores the "emulated" API flag (for avoiding vaapi/vdpau wrappers), but
it doesn't matter that much for -copy anyway.
The idea is to get rid of vd_lavc_hwdec, so special functionality like
this has to go somewhere else. At this point, hwframes_refine is only
needed for d3d11, and it doesn't do much, so for now the new callback
has no context. It can be made fancier if really needed.
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
nvdec aka cuvid aka cuda should work much better than vdpau, and support
newer codecs (such as vp9), and more advanced surface formats (like 10
bit).
This requires moving the d3d hwaccels in the autoprobe order, since on
Windows, d3d decoding should be preferred over nvidia proprietary stuff.
Users of older drivers will need to force --hwdec=vdpau, since it could
happen that the vo_gpu cuda hwdec interop loads (so the vdpau interop is
not loaded), but the hwdec itself doesn't work.
I expect this does not break AMD (which still needs vdpau for vo_gpu
interop, until libva is fixed so it can fully support AMD).
All this code used to be required by the old variants of the libavcodec
hw decoding APIs. Almost all of that is gone, although the mediacodec
API unfortunately still pulls in some old stuff (but not all of it).
(mediacodec build/functionality is untested, but should work.)
All of this was dead code and completely unused.
get_buffer2_hwdec() is the biggest chunk. One unfortunate thing about it
is that, while it was active, it could perform a software fallback much
faster, because it didn't have to wait until a full frame is decoded (it
actually decoded a full frame, but the current code has to decode many
more frames due to the codec delay, because the current code waits until
the API returns a decoded frame.) We should probably restore the latter,
although since it's an optional optimization, and the current behavior
doesn't change with the removal of this code, don't actually do anything
about it.
This is where it should be. It only wasn't because of an old libavcodec
bug, that returned the side data only on every IDR. This required some
sort of caching, which is now dropped. (mp_image wouldn't have been able
to do this kind of caching, because this code is stateless.) We don't
support these old libavcodec versions anymore, which is why this is not
needed anymore.
Also move initialization of rotation/stereo stuff to dec_video.c.
This simply didn't work. Unlike cuda-copy, this is a true hwaccel, and
obviously we need to provide it a device.
Implement this in a relatively generic way, which can probably be reused
directly by videotoolbox (not doing this yet because it would require
testing on OSX).
Like with cuda-copy, --cuda-decode-device is ignored. We might be able
to provide a more general way to select devices at some later point.
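The generic device setup amounts to roughly this (a sketch using the
public libavutil API; device selection is hardcoded to the default
here):

    #include <libavcodec/avcodec.h>
    #include <libavutil/hwcontext.h>

    // Create a CUDA device and hand it to the decoder, which is what a true
    // hwaccel like nvdec needs (the -copy variants create their own device).
    static int setup_cuda_device(AVCodecContext *avctx)
    {
        AVBufferRef *dev = NULL;
        int ret = av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_CUDA,
                                         NULL, NULL, 0);
        if (ret < 0)
            return ret;
        avctx->hw_device_ctx = dev; // decoder takes this reference
        return 0;
    }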
This is just a dumb consequence of HWDEC_ types somehow being part of
both decoder and VO. Obviously, the VO should only care about supporting
specific hardware surface types or providing specific device types, but
until they are separated, stupid unintuitive mismatches will occur.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support yet, but this could get
fixed soon.)
This removes the need for codec- and API-specific knowledge in the
libavcodec hardware acceleration API user. For mpv, this removes the
need for vd_lavc_hwdec.pixfmt_map and a few other things. (For now, we
still keep the "old" parts for the sake of supporting older Libav, and
FFgarbage.)
Should speed up seeks.
(Unfortunately it's useless for backstepping. Backstepping is like
precise seeking, except we're unable to drop frames, as we can't know
the previous frame if we drop it.)
The new_segment field was used to let the decoder data flow handler
track timeline boundaries, which are used for ordered chapters etc.
(anything that sets demuxer_desc.load_timeline). This broke seeking with
the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
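Conceptually, the comparison is no more than something like the sketch
below; the struct and fields are stand-ins, not the actual demuxer
structures:

    #include <stdbool.h>

    // Two segments are considered "the same" if their content matches, so
    // cached packets don't need a per-seek new_segment flag anymore.
    static bool same_segment(const struct segment_info *a,
                             const struct segment_info *b)
    {
        return a->start == b->start &&
               a->end == b->end &&
               a->codec == b->codec; // same codec parameters object
    }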
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
This commit allows using the AV_PIX_FMT_DRM_PRIME format newly
introduced in ffmpeg, which allows decoders to provide an
AVDRMFrameDescriptor struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on a RockChip ROCK64 device.
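For reference, this is how such a frame is consumed with the public API
(a generic sketch; mpv's interop code does considerably more):

    #include <stdio.h>
    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    // For an AV_PIX_FMT_DRM_PRIME frame, the descriptor with the dmabuf fds
    // and layer/plane layout lives in frame->data[0].
    static void inspect_drm_frame(const AVFrame *frame)
    {
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        for (int i = 0; i < desc->nb_objects; i++)
            printf("dmabuf fd=%d size=%zu\n",
                   desc->objects[i].fd, (size_t)desc->objects[i].size);
        for (int i = 0; i < desc->nb_layers; i++)
            printf("layer %d: drm format=0x%x planes=%d\n",
                   i, (unsigned)desc->layers[i].format,
                   desc->layers[i].nb_planes);
    }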
The mechanism introduced in b135af6842 assumed AVHWFramesContext would
be enough. Apparently it's not - the intended use with Rockchip (not
Rokchip btw.) requires accessing actual frame data in order to access
the AVDRMFrameDescriptor struct.
Just pass the entire mp_image to the new function. This is more
flexible, although it slightly worries me that it will be less reusable
for things which require setting up mp_image_params before any real
frames are processed (such as filters).
The same should happen with any other side data that matters to mpv,
otherwise filters will drop it.
(No, don't try to argue that mpv should use AVFrame. That won't work.)
ffmpeg_garbage() is copy&paste from frame_new_side_data() in FFmpeg
(roughly feed201849b8f91), because it's not public API. The name
reflects my opinion about FFmpeg's API.
In mp_image_to_av_frame(), change the too-fragile
*new_ref = (struct mp_image){0};
into explicitly zeroing out the fields that are "transferred" to the
created AVFrame.
It seems this will be useful for Rokchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to an
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
iive agreed to relicense things that are still in mpv to LGPLv2.1. So
change the licenses of the affected files, and rename the configure
switch for LGPL mode to --enable-preliminary-lgpl2.
(The "preliminary" part will probably be removed from the configure
switch soon as well.)
Also player/main.c hasn't had GPL parts since a few commits ago.
Now you need FFmpeg git, or something.
This also gets rid of the last real use of gpu_memcpy(). libavutil does
that itself. (vaapi.c still used it, but it was essentially unused,
because the code path isn't really in use anymore. It wasn't even
included due to the d3d-hwaccel dependency in wscript.)
See "Copyright" file for caveats.
This changes the remaining "almost LGPL" files to LGPL, because we think
that the conditions the author set for these was finally fulfilled.
This reverts commit 96462040ec.
I guess the autoprobing is still too primitive to handle this well. What
it really should be trying is initializing the wrapper decoder, and if
that doesn't work, try another method. This is complicated by hwaccels
initializing in a delayed way, so there is no easy solution yet.
Probably fixes #4865.
Not resetting hwdec_request_reinit caused it to flush on every packet,
which not only made it fail to trigger the actual fallback (so it never
decoded a new frame), but also made it get stuck on EOF.
This adds handling of spherical video metadata: retrieving it from
demux_lavf and demux_mkv, passing it through filters, and adjusting it
with vf_format. This does not include support for rendering this type of
video.
We don't expect we need/want to support the other projection types like
cube maps, so we don't include that for now. They can be added later as
needed.
Also raise the maximum sizes of stringified image params, since they
can get really long.
Apparently this was broken by the "ctx->hwdec" check in the if condition
guarding the destroy call, and "ctx->hwdec = NULL;" was moved up
earlier, making this always dead code.
This should probably be refcounted or so, although that could make it
worse as well. For now, add a flag whether the device should be
destroyed.
Fixes #4735.
Since these need to be refcounted, we throw them directly into struct
mp_image instead of being part of mp_colorspace. Even though they would
semantically make more sense in mp_colorspace, having them there is
really awkward because mp_colorspace is passed around and stored a lot,
and this way their lifetime is exactly tied to the lifetime of the
mp_image associated with it.
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This is reminiscent of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
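The core mechanism is a custom get_buffer2 callback; the sketch below
shows the general shape under the assumption of a hypothetical VO-side
allocator (vo_alloc_mapped_buffer/vo_free_mapped_buffer are made up, and
the real code handles alignment and pooling much more carefully):

    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>

    static void dr_free_cb(void *opaque, uint8_t *data)
    {
        vo_free_mapped_buffer(opaque); // hypothetical
    }

    // Let the decoder write directly into memory provided by the VO (e.g. a
    // persistently mapped GPU buffer), avoiding one copy per frame.
    static int dr_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        int w = frame->width, h = frame->height;
        int align[AV_NUM_DATA_POINTERS];
        avcodec_align_dimensions2(avctx, &w, &h, align);

        int size = av_image_get_buffer_size(frame->format, w, h, 64);
        if (size < 0)
            return size;

        void *mapping = NULL;
        uint8_t *data = vo_alloc_mapped_buffer(avctx->opaque, size, &mapping); // hypothetical
        if (!data) // silently fall back to normal allocation
            return avcodec_default_get_buffer2(avctx, frame, flags);

        frame->buf[0] = av_buffer_create(data, size, dr_free_cb, mapping, 0);
        if (!frame->buf[0])
            return AVERROR(ENOMEM);
        int ret = av_image_fill_arrays(frame->data, frame->linesize, data,
                                       frame->format, w, h, 64);
        return ret < 0 ? ret : 0;
    }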
Remove this code because it could be argued that it contains GPL-only
code (see commit 642e963c86 for details).
The remaining aspect methods appear to work just as well, are
potentially more compatible to other players, and the code becomes much
simpler.
Commit d5702d3b95 messed up the order of destruction of the elements: it
destroyed the avctx before the hwaccel uninit, even though the hwaccel
uninit could access avctx. This could happen with some old hwaccels
only, such as D3D ones before the new libavcodec hwaccel API.
Fix this by making use of the fact that avcodec_flush_buffers() will
uninit the underlying hwaccel. Thus we can be sure avctx is not using
any hwaccel objects anymore, and it's safe to uninit the hwaccel.
Move the hwdec_dev destruction code with it, because otherwise we would
in theory potentially create some dangling pointers in avctx.
This sets AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH, which some hwaccels
using the new generic API respect. These do profile selection in
libavcodec, so it can be controlled only with an external flag, instead
of in mpv code like it used to be done.
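In terms of the libavcodec API, the whole change boils down to setting
one flag on the codec context (illustrative fragment):

    // Profile selection now happens inside libavcodec, so allowing a profile
    // mismatch is controlled through this flag instead of mpv-side logic.
    avctx->hwaccel_flags |= AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH;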
They have been deprecated for a decade, yet you're forced to explicitly
deal with them at every step, or they will break your shit.
FFmpeg insists on keeping them, because libavfilter is too stupid to
deal with color ranges properly. Ridiculous.
For some braindead reason, Microsoft decided to prevent you from
dynamically loading system libraries. This makes portability harder.
And we're talking about portability between Microsoft OSes!
This partially reverts the change from a longer time ago to always build
DXVA2 and D3D11VA together.
To make it simpler, we change the following:
- building with ANGLE headers is now required to build D3D hwaccels
- if DXVA2 is enabled, D3D11VA is still forcibly built
- the CLI vo_opengl ANGLE backend is now under --egl-angle-win32
This is done to reduce the dependency mess slightly.