Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long-deprecated APIs which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
packet->buffer += ret;
packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
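For reference, a hedged sketch of what such a loop could look like with the
avcodec_decode_video2() API of this era (my_packet and handle_frame are
illustrative placeholders, not mpv's actual demux_packet handling):

#include <libavcodec/avcodec.h>

struct my_packet {
    uint8_t *buffer;
    int len;
};

static int decode_all(AVCodecContext *avctx, AVFrame *frame,
                      struct my_packet *packet,
                      void (*handle_frame)(AVFrame *frame))
{
    int got_picture = 1;
    while (got_picture && packet->len > 0) {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = packet->buffer;
        pkt.size = packet->len;
        int ret = avcodec_decode_video2(avctx, frame, &got_picture, &pkt);
        if (ret < 0)
            return ret;
        // Advance past the consumed bytes, so the rest of the packet is
        // fed to the decoder again on the next iteration.
        packet->buffer += ret;
        packet->len -= ret;
        if (got_picture) {
            handle_frame(frame);
            av_frame_unref(frame);
        }
    }
    return 0;
}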
Until now (and in mplayer traditionally), avi timestamps were handled
with a timestamp FIFO. AVI timestamps are essentially just strictly
increasing frame numbers and are not reordered like normal timestamps.
Limiting the FIFO is required because frames can be dropped. To make
it worse, frame dropping can't be distinguished from the decoder not
returning output due to increasing the buffering required for B-frames.
("Measuring" the buffering at playback start seems like an interesting
idea, but won't work as the buffering could be increased mid-playback.)
Another problem are skipped frames (packets with data, but which do
not contain a video frame).
Besides dropped and skipped frames, there is the problem that we can't
always know the delay. External decoders like MMAL are not going to
tell us. (And later perhaps others, like direct VideoToolbox usage.)
In general, this works badly enough that I prefer the solution of
passing through AVI timestamps as DTS. This is slightly incorrect,
because most decoders treat DTS as mpeg-style timestamps, which
already include a b-frame delay, and thus will be shifted by a few
frames. This means there will be a problem with A/V sync in some
situations.
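For illustration, the DTS pass-through amounts to something like this (the
helper and its parameters are made up for this example, not mpv's actual
demuxer code):

#include <libavcodec/avcodec.h>
#include <libavutil/mathematics.h>
#include <libavutil/rational.h>

static void set_avi_dts(AVPacket *pkt, int64_t frame_number,
                        AVRational fps, AVRational time_base)
{
    // An AVI timestamp is just the frame number; express frame_number/fps
    // in the stream time base and pass it as DTS. The decoder attaches it,
    // shifted by the B-frame delay, to the output frame.
    pkt->pts = AV_NOPTS_VALUE;
    pkt->dts = av_rescale_q(frame_number, av_inv_q(fps), time_base);
}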
Note that the FFmpeg AVI demuxer shifts timestamps by an additional
amount (which increases after the first seek!?!?), which makes the
situation worse. It works well with VfW-muxed Matroska files, though.
On RPI, the first X timestamps are broken until the MMAL decoder "locks
on".
fd339e3f53 introduced a regression that caused a segfault while
uninitializing the dxva2 decoder (and possibly vdpau too). The problem was
that it freed the avctx earlier, before calling the backend-specific uninit
which referenced it.
Revert some of the changes of that commit, and avoid calling flush by
checking whether the codec is open instead.
(Based on a PR by Kevin Mitchell.)
Signed-off-by: wm4 <wm4@nowhere>
It can be "dangerous". In particular, the decoder might have failed to
initialize, and is now in a broken state. avcodec_flush_buffers() is not
expected to be called in this state, and could trigger undefined
behavior.
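The guard boils down to something like this (a minimal sketch; the wrapper
function is illustrative, not mpv's actual code):

#include <libavcodec/avcodec.h>

// Only flush decoders that were actually opened; flushing a codec that
// failed to initialize is what could trigger the undefined behavior.
static void flush_if_open(AVCodecContext *avctx)
{
    if (avctx && avcodec_is_open(avctx))
        avcodec_flush_buffers(avctx);
}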
Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
if the input timestamp was NOPTS.
Don't do it for other decoders. Ideally, we will at some point in the
future switch to integer fractions for timestamps at least up until the
filter layer. But this would be a larger change, and for now I'd prefer
keeping the not-rounded demuxer timestamps (if we have them).
Commit b53cb8de added a delay queue for decoded frames. This is supposed
to be used with copy-back decoders like dxva2-copy and vaapi-copy.
Surfaces returned by them can't be referenced after uninitializing the
decoders, so they have to be released before destroying the decoder.
Move the flush_all() call above decoder uninit accordingly. Also move
the destruction of the AVFrame used for decoding (just for being
defensive - normally it doesn't hold any reference).
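Roughly, the required teardown order (an illustrative sketch with made-up
names, not vd_lavc.c's actual functions):

#include <libavcodec/avcodec.h>
#include <libavutil/frame.h>

static void teardown_decoder(AVCodecContext **avctx,
                             AVFrame **delay_queue, int n_queued,
                             AVFrame **decode_frame)
{
    for (int i = 0; i < n_queued; i++)
        av_frame_free(&delay_queue[i]);   // release copy-back surfaces first
    av_frame_free(decode_frame);          // defensive; normally holds no ref
    avcodec_free_context(avctx);          // destroy the (hw) decoder last
}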
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
Don't give the "software_fallback_decoder" field special meaning. Always
set it, and rename it to "decoder". Whether hw decoding is used is
determined by the "hwdec" field already.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
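For reference, the two conventions relate like this (a small example using
libavutil's rational helpers; the function name is made up):

#include <libavutil/rational.h>

// Display aspect = sample (pixel) aspect * width / height. For example,
// a 1440x1080 stream with a 4:3 sample aspect ratio displays as 16:9.
static AVRational dar_from_sar(int w, int h, AVRational sar)
{
    return av_mul_q(sar, (AVRational){w, h});
}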
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
If reinit after a fallback from hardware fails, this field can be NULL.
The check in control() was broken due to a typo (found by Coverity), and
decode() lacked the check entirely.
Approximately reverts commit 3ccac74d. This failed with some avi files,
which do pseudo-VFR by sending packets with empty frames (or repeat
frames, depending on point of view). Specifically, these packets are not
0 bytes, so they don't get skipped by libavformat, as with the usual VFR
avi hack. Instead, the packet contains a VOP with vop_coded=0, so
libavcodec will just return no frame. We could probably distinguish such
skipped frames and delayed frames by explicitly measuring the codec
delay by counting how long it takes to get the very first frame (and
then treat skips as explicit drops), but we may as well simply reinstate
the old code.
To appease at least one semi-broken case, do not enable this logic on
the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL
itself is asynchronous).
Until now, we've relied on the following things:
- you can send flush packets to the decoder even if it's fully flushed,
- you can send new packets to a flushed decoder,
- you can send new packets to a partially flushed decoder.
("flushing" refers to sending flush packets to the decoder until the
decoder does not return new pictures, not avcodec_flush_buffers().)
All of these are questionable. The libavcodec API probably doesn't
guarantee that these work well or at all, even though most decoders have
no issue with these. But especially with hardware decoding wrappers
(like MMAL), real problems can be expected. Isolate us from these corner
cases by handling them explicitly.
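A sketch of what handling this explicitly can look like (field and function
names are illustrative, not mpv's):

#include <stdbool.h>
#include <libavcodec/avcodec.h>

struct dec_state {
    AVCodecContext *avctx;
    bool draining;        // at least one flush packet was sent
    bool fully_flushed;   // a flush packet returned no picture
};

// Drain one frame; never keep sending flush packets to a drained decoder.
static int send_flush_packet(struct dec_state *d, AVFrame *out, int *got_picture)
{
    *got_picture = 0;
    if (d->fully_flushed)
        return 0;
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;      // data=NULL/size=0 signals draining
    pkt.size = 0;
    d->draining = true;
    int r = avcodec_decode_video2(d->avctx, out, got_picture, &pkt);
    if (r >= 0 && !*got_picture)
        d->fully_flushed = true;
    return r;
}

// Explicitly reset a (partially or fully) flushed decoder before feeding
// it new packets, instead of assuming that just works.
static void prepare_for_new_input(struct dec_state *d)
{
    if (d->draining) {
        avcodec_flush_buffers(d->avctx);
        d->draining = d->fully_flushed = false;
    }
}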
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting
failures of 3 frames, but not more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from a decoding error.
With the current situation, "proper" fallback is still instantaneous.)
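Roughly (an illustrative sketch; the reset-on-success behavior and the
names are assumptions, not the exact vd_lavc.c logic):

#include <stdbool.h>

struct hw_fail_state {
    int consecutive_failures;
    int max_failures;   // e.g. taken from --vd-lavc-software-fallback
};

// Returns true when the software fallback should be triggered.
static bool note_decode_result(struct hw_fail_state *s, bool frame_failed)
{
    if (!frame_failed) {
        s->consecutive_failures = 0;   // assumption: a good frame resets it
        return false;
    }
    return ++s->consecutive_failures > s->max_failures;
}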
The uninit() function was called twice if the init() function failed
(once by init() itself, once by the vd_lavc.c code), which caused crashes
due to a double-free. (This failure is a corner case, and all other hwdec
backends appear to handle this case gracefully.)
I do not think this code should have to deal with uninit() being called
more than once. Guarantee that it's called exactly once.
Fixes a linker failure. How did this ever work? Apparently it did most of
the time, but we just hit the first case where it didn't.
Fixes #2433.
The previous commit moved the av_frame_unref() after the got_picture
check. This accidentally also deferred the software fallback
reinitialization to until a software picture was decoded (instead of the
exact time of the fallback), which is not ideal.
Just rely on the fact that calling av_frame_unref() on a frame is ok
even if nothing was decoded.
Commit 12cd48a8 started setting the hwdec_failed field even if hwdec was
not active, and because it also checked this field even if hwdec was not
active, broke decoding forever.
Fix this, and also avoid a memory leak or API misuse by releasing the
decoded picture. Passing an unreleased frame to the decoder has, as far
as I know, no defined effects.
The libavcodec h264 decoder contains some idiotic code with unknown
purpose (no sample or explanation known that necessitates its
existence), that causes the AVCodecContext.get_format callback to be
invoked at a time when hwaccels can't be initialized. By definition, the
get_format callback is supposed to initialize hwaccels (another idiotic
thing now part of the API, but different story). This causes hwdec
initialization sometimes to fail (WolfensteinTwitch.mp4): the first
get_format callback will mark it as failed, so the second get_format
(the "proper" normal one) will not bother restoring the state, and hwdec
init fails.
While this should be fixed in libavcodec (good luck with that), it's
quite easy to workaround.
This was used only by the timestamp sorting code, which is a fallback
for avi files (as well as avi-muxed mkv files). This was supposed to
prevent accumulating timestamps in case the decoder consumes more
packets than it outputs frames (i.e. frames are dropped). This didn't
work very well (timestamps could be off by a large amount), the
estimation of the delay was fragile, and the interdependencies with the
decoder were annoying, so kill it.
This essentially reverts commit 009dfbe3. FFmpeg VideoToolbox support
is being wacky, and can cause major issues, such as not being able
to decode a single frame. (E.g. by playing a .ts file. This should be
fixed in FFmpeg eventually.)
This is not a straight revert of the commit; just a functional one. We
keep the slightly simpler code structure.
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
useable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
Definitely not needed anymore, and fixes a crash in some weird
corner cases.
The extradata freeing is apparently still needed, though. (Because a
codec context can be opened again, which makes no sense, but ok.)
Usually, libavcodec ignores errors reported by the hardware decoding
API, so it's not like we can actually escape if the hardware is somehow
acting up.
For normal fallback purposes, or if the parts of the hw decoding API that
we actually check fail, we handle this by setting and checking the
hwdec_failed flag anyway.
The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the case when the hw decoder initialization succeeded (in
get_format), but get_buffer2 for some reason requests something
unexpected, we can now fall back more gracefully and in the same way.
Often, we don't know whether hardware decoding will work until we've
tried. (This used to be different, but API changes and improvements in
libavcodec led to this situation.) We will often output that we're going
to use hardware decoding, and then print a fallback warning.
Instead, print the status once we have decoded a frame.
Some of the old messages are turned into verbose messages, which should
be helpful for debugging. Also add some new ones.
The fallback at initialization time was basically duplicated, maybe for
the sake of showing a different error message. This doesn't matter
anymore; not much can fail at initialization anymore. Most meaningful
and common errors happen either at probing or in get_format (when the
actual hw decoder is initialized).
VDA is being deprecated in OS X 10.11, so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git;
Libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames are not actually refcounted, and only the image data
they reference, the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's own
allocation uses multiple buffers.
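A minimal sketch of wrapping our own allocation in libavutil's refcounting
(alloc_refcounted is a made-up example, not mp_image's actual allocator):

#include <stdlib.h>
#include <libavutil/buffer.h>

static void free_cb(void *opaque, uint8_t *data)
{
    free(data);          // must match the allocator used below
}

static AVBufferRef *alloc_refcounted(int size)
{
    uint8_t *data = malloc(size);
    if (!data)
        return NULL;
    AVBufferRef *ref = av_buffer_create(data, size, free_cb, NULL, 0);
    if (!ref)
        free(data);
    return ref;          // shared/released via av_buffer_ref()/av_buffer_unref()
}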
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
Again. With the old OpenGL interop dropped, this probably works better
than vaapi-copy now. Last time we defaulted to vaapi-copy, because the
OpenGL interop could swap U/V planes and other stupid crap. We'll see.
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.