The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
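Roughly, the new shape of the API looks like this (a sketch only, not the
literal contents of video/hwdec.h; names and signatures are approximate):

    struct mp_hwdec_devices;                 /* opaque; owned by the VO layer */

    struct mp_hwdec_ctx {
        enum hwdec_type type;                /* e.g. HWDEC_VAAPI, HWDEC_D3D11VA */
        void *ctx;                           /* API-specific handle (VADisplay,
                                                IDirect3DDevice9*, ...) */
    };

    /* accessors; thread-safe unless documented otherwise */
    void hwdec_devices_add(struct mp_hwdec_devices *devs,
                           struct mp_hwdec_ctx *ctx);
    struct mp_hwdec_ctx *hwdec_devices_get(struct mp_hwdec_devices *devs,
                                           enum hwdec_type type);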
This uses ID3D11VideoProcessor to convert the video to an RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control over the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
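Roughly what the per-frame conversion looks like with the C COM macros
(sketch only; device/processor/enumerator/output view creation, the format
descriptor details, and all error handling are omitted, and the variable
names are placeholders):

    D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC in_desc = {
        .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
        .Texture2D = { .ArraySlice = subindex }, /* decoder surfaces are
                                                    texture array slices */
    };
    ID3D11VideoProcessorInputView *in_view = NULL;
    ID3D11VideoDevice_CreateVideoProcessorInputView(video_dev,
        (ID3D11Resource *)texture, vp_enum, &in_desc, &in_view);

    D3D11_VIDEO_PROCESSOR_STREAM stream = {
        .Enable = TRUE,
        .pInputSurface = in_view,
    };
    ID3D11VideoContext_VideoProcessorBlt(video_ctx, processor, out_view,
                                         0, 1, &stream);
    ID3D11VideoProcessorInputView_Release(in_view); /* recreated next frame */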
For Mediacodec in particular we don't care about the format. It can just
decode to whatever it wants. The only case we would care about is it not
returning an opaque format if we don't have proper interop, but
libavcodec always returns non-opaque formats by default.
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.
With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.
The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
This is intended for cases when --hwdec needs to override the decoder
implementation in use, like for example on the RPI.
It does two things:
1. Allow the hwdec to indicate a decoder suffix. libavcodec by
convention adds a suffix to all wrapper decoders, and here we start
relying on it. While not necessarily the best idea, it's the only
thing we have. libavcodec's hwaccel list is useless, because it only
has the codec ID, not the associated decoder's name. (See the sketch
after this list.)
2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
probing should reliably work, even if a different decoder is selected
with --vd. The semantics of --hwdec should dictate that it overrides
the default decoder.
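A minimal sketch of the suffix lookup from point 1 (hwdec->lavc_suffix and
the surrounding code are shown schematically, not as the real vd_lavc.c):

    /* e.g. codec "h264" + suffix "_mmal" -> wrapper decoder "h264_mmal" */
    char name[64];
    snprintf(name, sizeof(name), "%s%s", codec, hwdec->lavc_suffix);
    AVCodec *wrapper = avcodec_find_decoder_by_name(name);
    if (!wrapper)
        return -1;  /* wrapper decoder not compiled in -> hwdec unusable */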
Until now, the presence of the process_image() callback was used to set
a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
This commit adds the d3d11va-copy hwdec mode using the FFmpeg d3d11va
API. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
The mp_set_av_packet()/mp_pts_from_av() functions check whether the
timebase is set at all (i.e. AVRational.num!=0), so there's no need to
fiddle with pointers.
Completely pointless abominations that FFmpeg refuses to remove. They
are ancient, long deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
    packet->buffer += ret;
    packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
Until now (and in mplayer traditionally), avi timestamps were handled
with a timestamp FIFO. AVI timestamps are essentially just strictly
increasing frame numbers and are not reordered like normal timestamps.
Limiting the FIFO is required because frames can be dropped. To make
it worse, frame dropping can't be distinguished from the decoder not
returning output due to increasing the buffering required for B-frames.
("Measuring" the buffering at playback start seems like an interesting
idea, but won't work as the buffering could be increased mid-playback.)
Another problem is skipped frames (packets with data, but which do
not contain a video frame).
Besides dropped and skipped frames, there is the problem that we can't
always know the delay. External decoders like MMAL are not going to
tell us. (And later perhaps others, like direct VideoToolbox usage.)
In general, this doesn't work well enough, so I prefer the solution of
passing through AVI timestamps as DTS. This is slightly incorrect,
because most decoders treat DTS as mpeg-style timestamps, which
already include a b-frame delay, and thus will be shifted by a few
frames. This means there will be a problem with A/V sync in some
situations.
Note that the FFmpeg AVI demuxer shifts timestamps by an additional
amount (which increases after the first seek!?!?), which makes the
situation worse. It works well with VfW-muxed Matroska files, though.
On RPI, the first X timestamps are broken until the MMAL decoder "locks
on".
fd339e3f53 introduced a regression that caused a segfault while
uninitializing the dxva2 decoder (and possibly vdpau too). The problem was
that it freed the avctx earlier, before calling the backend-specific uninit
which referenced it.
Revert some of the changes of that commit, and avoid calling flush by
checking whether the codec is open instead.
(Based on a PR by Kevin Mitchell.)
Signed-off-by: wm4 <wm4@nowhere>
It can be "dangerous". In particular, the decoder might have failed to
initialize, and is now in a broken state. avcodec_flush_buffers() is not
expected to be called in this state, and could trigger undefined
behavior.
Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
if the input timestamp was NOPTS.
Don't do it for other decoders. Ideally, we will at some point in the
future switch to integer fractions for timestamps at least up until the
filter layer. But this would be a larger change, and for now I'd prefer
keeping the not-rounded demuxer timestamps (if we have them).
Commit b53cb8de added a delay queue for decoded frames. This is supposed
to be used with copy-back decoders like dxva2-copy and vaapi-copy.
Surfaces returned by them can't be referenced after uninitializing the
decoders, so they have to be released before destroying the decoder.
Move the flush_all() call above decoder uninit accordingly. Also move
the destruction of the AVFrame used for decoding (just for being
defensive - normally it doesn't hold any reference).
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
Don't give the "software_fallback_decoder" field special meaning. Always
set it, and rename it to "decoder". Whether hw decoding is used is
determined by the "hwdec" field already.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
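As an integer-fraction example of the relationship between the two
conventions (libavutil rationals used purely for illustration):

    /* DAR = SAR * width/height; e.g. anamorphic 720x576 PAL material: */
    AVRational sar = {64, 45};                              /* pixel aspect */
    AVRational dar = av_mul_q(sar, (AVRational){720, 576}); /* -> 16:9 */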
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
If reinit after a fallback from hardware fails, this field can be NULL.
The check in control() was broken due to a typo (found by Coverity), and
decode() lacked the check entirely.
Approximately reverts commit 3ccac74d. This failed with some avi files,
which do pseudo-VFR by sending packets with empty frames (or repeat
frames, depending on point of view). Specifically, these packets are not
0 bytes, so they don't get skipped by libavformat, as with the usual VFR
avi hack. Instead, the packet contains a VOP with vop_coded=0, so
libavcodec will just return no frame. We could probably distinguish such
skipped frames and delayed frames by explicitly measuring the codec
delay by counting how long it takes to get the very first frame (and
then treat skips as explicit drops), but we may as well simply reinstate
the old code.
To appease at least one semi-broken case, do not enable this logic on
the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL
itself is asynchronous).
Until now, we've relied on the following things:
- you can send flush packets to the decoder even if it's fully flushed,
- you can send new packets to a flushed decoder,
- you can send new packets to a partially flushed decoder.
("flushing" refers to sending flush packets to the decoder until the
decoder does not return new pictures, not avcodec_flush_buffers().)
All of these are questionable. The libavcodec API probably doesn't
guarantee that these work well or at all, even though most decoders have
no issue with these. But especially with hardware decoding wrappers
(like MMAL), real problems can be expected. Isolate us from these corner
cases by handling them explicitly.
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting
failures of 3 frames, but not more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from decoding error.
With the current situation, "proper" fallback is still instantaneous.)
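Conceptually the tolerance works like this (field and option names are made
up for the sketch, not the actual vd_lavc.c code):

    if (hw_frame_failed) {
        if (++ctx->hwdec_fails >= opts->software_fallback) /* default: 3 */
            force_software_fallback(ctx);  /* full reinit with sw decoder */
    } else {
        ctx->hwdec_fails = 0;  /* assumption: a good frame resets the count */
    }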
The uninit() function was called twice if the init() function failed
(once by init(), once by vd_lavc.c code), which caused crashes due to
double-free. (This failure is a corner case, and all other hwdec
backends appear to handle this case gracefully.)
I do not think this code should have to deal with uninit() being
called more than once. Guarantee that it's called exactly once.
Fixes linker failure. How did this ever work? Apparently it did most of
the time, but we just got the first case where it didn't.
Fixes #2433.
The previous commit moved the av_frame_unref() after the got_picture
check. This accidentally also deferred the software fallback
reinitialization to until a software picture was decoded (instead of the
exact time of the fallback), which is not ideal.
Just rely on the fact that calling av_frame_unref() on a frame is ok
even if nothing was decoded.
Commit 12cd48a8 started setting the hwdec_failed field even if hwdec was
not active, and because it also checked this field even if hwdec was not
active, broke decoding forever.
Fix this, and also avoid a memory leak or API misuse by releasing the
decoded picture. Passing an unreleased frame to the decoder has, as far
as I know, no defined effects.
The libavcodec h264 decoder contains some idiotic code with unknown
purpose (no sample or explanation known that necessitates its
existence), that causes the AVCodecContext.get_format callback to be
invoked at a time when hwaccels can't be initialized. By definition, the
get_format callback is supposed to initialize hwaccels (another idiotic
thing now part of the API, but different story). This causes hwdec
initialization sometimes to fail (WolfensteinTwitch.mp4): the first
get_format callback will mark it as failed, so the second get_format
(the "proper" normal one) will not bother restoring the state, and hwdec
init fails.
While this should be fixed in libavcodec (good luck with that), it's
quite easy to workaround.
This was used only by the timestamp sorting code, which is a fallback
for avi files (as well as avi-muxed mkv files). This was supposed to
prevent accumulating timestamps in case the decoder consumes more
packets than it outputs frames (i.e. frames are dropped). This didn't
work very well (timestamps could be off by a large amount), the
estimation of the delay was fragile, and the interdependencies with the
decoder were annoying, so kill it.
This essentially reverts commit 009dfbe3. FFmpeg VideoToolbox support
is being wacky, and can cause major issues, such as not being able
to decode a single frame. (E.g. by playing a .ts file. This should be
fixed in FFmpeg eventually.)
This is not a straight revert of the commit; just a functional one. We
keep the slightly simpler code structure.
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA
unusable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
Definitely not needed anymore, and fixes a crash in some weird corner-
cases.
The extradata freeing is apparently still needed, though. (Because a
codec context can be opened again, which makes no sense, but ok.)
Usually, libavcodec ignores errors reported by the hardware decoding
API, so it's not like we can actually escape if the hardware is somehow
acting up.
For normal fallback purposes, or if parts of the hw decoding API which
we actually check fails, we do this by setting and checking the
hwdec_failed flag anyway.
The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the case when the hw decoder initialization succeeded (in
get_format), but get_buffer2 for some reason requests something
unexpected, we can also fall back more gracefully and in the same way.
Often, we don't know whether hardware decoding will work until we've
tried. (This used to be different, but API changes and improvements in
libavcodec led to this situation.) We will often output that we're going
to use hardware decoding, and then print a fallback warning.
Instead, print the status once we have decoded a frame.
Some of the old messages are turned into verbose messages, which should
be helpful for debugging. Also add some new ones.
The fallback at initialization time was basically duplicated, maybe for
the sake of showing a different error message. This doesn't matter
anymore; not much can fail at initialization anymore. Most meaningful
and common errors happen either at probing or in get_format (when the
actual hw decoder is initialized).
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames are not actually refcounted, and only the image data
they reference, the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because an AVFrame uses
multiple buffers.
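As an illustration of the direction this takes, existing image data can be
exposed to FFmpeg's refcounting roughly like this (the mp_image field use
and the free callback are simplified assumptions):

    #include <libavutil/buffer.h>

    static void free_mpi(void *opaque, uint8_t *data)
    {
        talloc_free(opaque);  /* opaque holds the owning struct mp_image */
    }

    /* wrap the first plane; a real AVFrame needs one AVBufferRef per
       allocated buffer, which is why the custom allocator stays */
    AVBufferRef *buf = av_buffer_create(mpi->planes[0],
                                        mpi->stride[0] * mpi->h,
                                        free_mpi, mpi,
                                        AV_BUFFER_FLAG_READONLY);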
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
Again. With the old OpenGL interop dropped, this probably works better
than vaapi-copy now. Last time we defaulted to vaapi-copy, because the
OpenGL interop could swap U/V planes and other stupid crap. We'll see.
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
this happens because usually mpv will be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
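The core of it is just demoting the message level while probing; roughly
(the real code threads the flag through several layers):

    int level = probing ? MSGL_V : MSGL_ERR;
    mp_msg(log, level, "Error when calling vdp_device_create_x11: %d\n",
           (int)vdp_st);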
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of fallback. We must go through the trouble to
determine the first actual sw format.
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
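The utility entry point in question is av_vdpau_bind_context(); usage is
roughly (error paths and profile handling omitted):

    #include <libavcodec/vdpau.h>

    /* let libavcodec create and drive the VDPAU decoder itself */
    if (av_vdpau_bind_context(avctx, vdp_device, get_proc_address, 0) < 0)
        return -1;  /* fall back to software decoding */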
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
There's not much of a reason to keep get_surface_hwdec() and
get_buffer2_hwdec() separate. Actually, the way the mpi->AVFrame
referencing is done makes this confusing. The separation is probably
an artifact of the pre-libavcodec-refcounting compatibility glue.
Most of hardware decoding is initialized lazily. When the first packet
is parsed, libavcodec will call get_format() to check whether hw or sw
decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
get_format() if hw decoder initialization failed. This caused the
avcodec_decode_video2() call to fail, which in turn let us trigger the
fallback. We didn't return a sw format from get_format(), because we
didn't want to continue decoding at all. (The reason being that full
reinitialization is more robust when continuing sw decoding.)
This has some disadvantages. libavcodec vomited some unwanted error
messages. Sometimes the failures are more severe, like it happened with
HEVC. In this case, the error code path simply acted up in a way that
was extremely inconvenient (and had to be fixed by myself). In general,
libavcodec is not designed to fall back this way.
Make it a bit less violent from the API usage point of view. Return a sw
format if hw decoder initialization fails. In this case, we let
get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
allowed to perform its own sw fallback. But once the decode function
returns, we do the full reinitialization we wanted to do.
The result is that the fallback is more robust, and doesn't trigger any
decoder error codepaths or messages either. Change our own fallback
message to a warning, since there are no other messages with error
severity anymore.
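Schematically, the new get_format() behavior (struct fields and helper names
are invented for the sketch):

    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    static enum AVPixelFormat get_format(AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmt)
    {
        struct lavc_ctx *ctx = avctx->opaque;   /* assumed decoder context */
        for (int n = 0; fmt[n] != AV_PIX_FMT_NONE; n++) {
            if (fmt[n] == ctx->hwdec_fmt && hwdec_init(ctx) >= 0)
                return fmt[n];                  /* hw decoder is ready */
        }
        ctx->hwdec_failed = true;   /* full sw reinit after decoding returns */
        for (int n = 0; fmt[n] != AV_PIX_FMT_NONE; n++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[n]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[n];                  /* first actual sw format */
        }
        return AV_PIX_FMT_NONE;
    }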
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fallback to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully will be disabled
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has causing us more problems than
it solved.
Unfortunately this leads also to copying if "--hwdec=auto --vo=vaapi" is
used, even though GLX is not involved in this case - but I don't care
enough to make the probe logic cleverer just for this. You can still get
the zero-copy path with --hwdec=vaapi.
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97d653b0dd529729c8b3f64fbacd5de31
Fixes #1636.
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
A recent behavior change in libavcodec's h264 decoder keeps at least 1
surface even after avcodec_flush_buffers() has been called. We used to
flush the decoder in order to make sure all surfaces are free'd, so that
the hw decoder can be safely uninitialized. This doesn't work anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
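The resulting teardown order is simply (call names approximate):

    /* close the decoder first so it releases any surfaces it still holds,
       then the hw decoding API can be torn down safely */
    avcodec_close(ctx->avctx);
    ctx->hwdec->uninit(ctx);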
The intention is that we can test vo_opengl with high bit depth PNGs
better. This throws libswscale completely out of the loop, which before
was needed in order to convert from big endian to little endian.
Also apply a minimal cleanup to fmt-conversion.c (unrelated).
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and output modes
are all the same might be an oversimplification - we'll see.
See issue #1045.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
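The locking is conceptually just a mutex around every libva call; in the
real code the lock lives in the shared vaapi context, but the idea is:

    #include <pthread.h>
    #include <va/va.h>

    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    pthread_mutex_lock(&va_lock);
    VAStatus vas = vaDeriveImage(display, surface, &image);
    pthread_mutex_unlock(&va_lock);
    if (vas != VA_STATUS_SUCCESS)
        return -1;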
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
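Applying the parsed key/value list then boils down to a loop over
libavutil's option API (the mpv-side option field name is an assumption):

    #include <libavutil/opt.h>

    /* list layout: key, value, key, value, ..., NULL */
    for (char **p = opts->avopts; p && p[0]; p += 2) {
        if (av_opt_set(avctx, p[0], p[1], AV_OPT_SEARCH_CHILDREN) < 0)
            av_log(avctx, AV_LOG_WARNING, "Could not set option %s=%s\n",
                   p[0], p[1]);
    }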
This adds support for reading primary information from lavc, categorized
into BT.601-525, BT.601-625, BT.709 and BT.2020; and passes it on to the
vo. In vo_opengl, we always generate the 3dlut against the wider BT.2020
and transform our source into this colorspace in the shader.
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
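Callers of these now need checks of this form (minimal sketch):

    struct mp_image *img = mp_image_alloc(imgfmt, w, h);
    if (!img)
        return NULL;  /* allocation failed: skip output instead of aborting */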
This means use of the min/max fields can be dropped for the flag option
type, which makes some things slightly easier. I'm also not sure if the
client API handled the case of flag not being 0 or 1 correctly, and this
change gets rid of this concern.
While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so we have writeable global data only
when we really need it.
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
Instead of doing it on every seek (libavcodec calls get_format on every
seek), reinitialize the decoder only if the video resolution changes.
Note that this may be relatively naive, since we e.g. (or: in
particular) don't check for profile changes. But it's not worse than the
state before the get_format change, and at least it paints over the
current vaapi breakage (issue #646).
This "sometimes" crashed when seeking. The fault apparently lies in
libavcodec: the decoder returns an unreferenced frame! This is
completely insane, but somehow I'm apparently still expected to
work around this. As a reaction, I will drop Libav 9 support in the
next commit. (While this commit will go into release/0.3.)
Apparently the "right" place to initialize the hardware decoder is in
the libavcodec get_format callback.
This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
vdpau_old.c is probably going to be removed soon (if Libav ever manages
to release Libav 10). So for now the init_decoder callback added with
this commit is optional.
This also means vdpau.c and vaapi.c don't have to manage and check the
image parameters anymore.
This change is probably needed for when libavcodec VDA support gets a
new iteration of its API.
Like with the previous commit, this is probably not needed, but it's
unclear whether that really is the case. Most likely, it used to be
needed by some demuxer, and now the only demuxer left that could
_possibly_ trigger this is demux_mkv.c.
Note that mjpeg is the only decoder that reads the extra_huff option,
and nothing in libavformat actually sets the option. So maybe it's
fundamentally not needed anymore.
This case can't happen with the normal realvideo codepath in
demux_mkv.c, because the code would error out if the extradata is too
small, and everything would be broken anyway in the case the vd_lavc.c
condition is actually triggered.
It still might happen with VfW-muxed realvideo in Matroska, though.
Basically, I'm hoping this doesn't matter anyway, and that the vd_lavc.c
code was for other old demuxers, like demux_avi or demux_rm. Following
the commit history, it's not really clear for what demuxer this code
was added.
Set the flag CODEC_FLAG_OUTPUT_CORRUPT by default. Note that there is
also CODEC_FLAG2_SHOW_ALL, which is older, but this seems to be FFmpeg
only.
Note that whether you want this enabled depends on the user. Some might
prefer that only good frames are output, while others want the decoder
to try as hard as possible to output _anything_. Since mplayer/mpv is
rather the kind of player that tries hard instead of being "clever", set
the new default to override libavcodec's default.
A nice way to test this is switching video tracks. Since mpv doesn't
wait for the next key frame, it'll start feeding the decoder with a
packet from the middle of the stream.
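The actual change is essentially one line (flag name as it was in libavcodec
at the time):

    avctx->flags |= CODEC_FLAG_OUTPUT_CORRUPT; /* output possibly-broken frames */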
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have log level
for enabling debug messages, it seems utterly useless.
This should help fixing some issues (like not draining video frames
correctly on reinit), as well as decoupling the decoder, filter chain,
and VO code.
I also wanted to make the hardware video decoding fallback work properly
if software-only video filters are inserted. This currently has the
issue that the fallback is too violent, and throws away a bunch of
demuxer packets needed to restart software decoding properly. But
keeping "backup" packets turned out as too hacky, so I'm not doing this,
at least not yet.
This adds vf_chain, which unlike vf_instance refers to the filter chain
as a whole. This makes the filter API less awkward, and will allow
handling format negotiation better.
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps in a way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
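A sketch of the conversion logic described (illustrative only; MP_NOPTS_VALUE
is mpv's "no timestamp" value, and the real helpers differ in detail):

    #include <string.h>
    #include <libavutil/rational.h>

    static double pts_from_av(int64_t av_pts, const AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        if (tb && tb->num)              /* timebase known: scale to seconds */
            return av_pts * av_q2d(*tb);
        double d;                       /* legacy path: raw double bits */
        memcpy(&d, &av_pts, sizeof(d));
        return d;
    }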
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.
Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.
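The cargo-culted heuristic is roughly this (state kept in ad-hoc fields
here; see [1] for the original):

    /* count how often each timestamp stream goes backwards */
    if (dts != AV_NOPTS_VALUE) {
        ctx->num_faulty_dts += dts <= ctx->last_dts;
        ctx->last_dts = dts;
    }
    if (pts != AV_NOPTS_VALUE) {
        ctx->num_faulty_pts += pts <= ctx->last_pts;
        ctx->last_pts = pts;
    }
    /* prefer PTS unless it misbehaves more often than DTS (or is missing) */
    int64_t best = (ctx->num_faulty_pts <= ctx->num_faulty_dts ||
                    dts == AV_NOPTS_VALUE) && pts != AV_NOPTS_VALUE ? pts : dts;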
This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.
The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.
[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
Having the DTS directly can be useful for restoring PTS values.
The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is not necessary anymore due to
this change.
Instead of passing the PTS as separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)
We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
these pts fields are not "really" opaque anymore, yet we treat them as
such. But the end result should be the same, and reordered_opaque is
marked as partially deprecated (it's not clear whether it's really
deprecated).
When mpv is started with some video filters set (--vf is used), and
hardware decoding is requested, and hardware decoding would be possible,
but is prevented due to video filters that accept software formats only,
the fallback didn't work properly sometimes.
This fallback works rather violently: it tries to initialize the filter
chain, and if it fails it throws away the frame decoded using the
hardware, and retries with software. The case that didn't work was when
decoding the current packet didn't immediately lead to a new frame. Then
the filter chain wouldn't be reinitialized, and the playloop would stop
playback as soon as it encounters the error flag.
Fix this by resetting the filter error flag (back to "uninitialized"),
which is a rather violent, but somewhat working solution.
The fallback in general should perhaps be cleaned up later.
Now the actual decoder doesn't need to care about this anymore, and it's
handled in generic code instead. This simplifies vd_lavc.c, and in
particular we don't need to detect format changes in the old way
anymore.
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced via
a pointer type only, it can be forward referenced, and the definition of
the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.
Also sneak in some sanity checks.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.