If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the somewhat awkward mp_get_av_frame_pkt_ts() function with
mp_pts_from_av(), which simply converts timestamps the same way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
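As a rough illustration of the conversion this enables (a hedged sketch with
illustrative names, not the actual mpv code; handling of AV_NOPTS_VALUE and
mpv's "no PTS" value is omitted):
-----------
#include <math.h>
#include <string.h>
#include <stdint.h>
#include <libavutil/rational.h>

// Convert an mpv-style double timestamp (seconds) to a libavcodec-style
// int64_t timestamp, and back. With a valid timebase, do a real conversion;
// without one, fall back to the old reinterpret-cast behavior.
static int64_t pts_to_av_sketch(double mp_pts, const AVRational *tb)
{
    if (tb && tb->num > 0)
        return llrint(mp_pts / av_q2d(*tb));
    int64_t res;
    memcpy(&res, &mp_pts, sizeof(res));   // old bit-reinterpretation path
    return res;
}

static double pts_from_av_sketch(int64_t av_pts, const AVRational *tb)
{
    if (tb && tb->num > 0)
        return av_pts * av_q2d(*tb);
    double res;
    memcpy(&res, &av_pts, sizeof(res));
    return res;
}
-----------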
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
The harder work was done in the previous commits. After that this feature comes
out almost for free.
The only problem is I can't get the textures created with CGLTexImageIOSurface2D
to download properly, thus the code performs download using some CoreVideo APIs.
If someone knows why download of textures created with CGLTexImageIOSurface2D
doesn't work please contact me :)
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
pthreads should be available anywhere. Even if not, a pthread wrapper could
be provided for environments without threads; it wouldn't actually be able
to start threads, thus disabling features that require them.
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.
Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right on start.
It's not set to 0 initially anyway, because the heuristic requires
decoding some images before enabling frame dropping.
Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
The d_video->pts field was a bit strange. The code overwrote it multiple
times (on decoding, on filtering, then once again...), and it wasn't
really clear what purpose this field had exactly. Replace it with the
mpctx->video_next_pts field, which is relatively unambiguous.
Move the decreasing PTS check to dec_video.c. This means it acts on
decoder output, not on filter output. (Just like in the previous commit,
assume the filter chain is sane.) Drop the jitter vs. reset semantics;
the PTS determined by dec_video.c never goes backwards, and demuxer
timestamps don't "jitter".
Also get rid of the PTS check _after_ filters. This means if there's a
video filter which unsets PTS, no warning will be printed. But we assume
that all filters are well-behaved enough by now.
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.
Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.
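A minimal sketch of this kind of heuristic (names and counters are
illustrative, not the actual mpv or Libav code; "unset" timestamps are
represented as NAN here instead of mpv's MP_NOPTS_VALUE):
-----------
#include <math.h>

struct pts_guess {
    double last_pts, last_dts;          // initialize both to -INFINITY
    int num_faulty_pts, num_faulty_dts;
};

// Called per decoded frame; returns the timestamp to actually use.
static double guess_frame_pts(struct pts_guess *c, double pts, double dts)
{
    if (!isnan(dts)) {
        c->num_faulty_dts += dts <= c->last_dts;
        c->last_dts = dts;
    }
    if (!isnan(pts)) {
        c->num_faulty_pts += pts <= c->last_pts;
        c->last_pts = pts;
    }
    // 1. use the DTS if the PTS is unset; 2. ignore the PTS entirely if it
    //    is non-monotonic while the DTS looks sorted
    if (!isnan(dts) && (isnan(pts) || c->num_faulty_pts > c->num_faulty_dts))
        return dts;
    return pts;
}
-----------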
This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.
The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.
[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
This means the code that tries to figure out the timestamp from
demuxer and decoder output is now all in dec_video.c. We set the
final timestamp on the returned image (mp_image.pts), as well as
the d_video->pts field.
The way the player uses d_video->pts field is still a bit messy. Maybe
this could be cleaned up later.
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we correctly
handle demuxers which output only DTS, it just seems like dead weight.
Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting. But we will probably remove this option
entirely at some point.
Make demux_mkv export timestamps at DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
Having the DTS directly can be useful for restoring PTS values.
The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the nonsense PTS
that [I assume] genpts generated), which is not necessary anymore due to
this change.
Now the --no-correct-pts mode is like the normal mode, just with
different timestamp calculations. The semantics should be about the
same as before this commit.
Before this commit, this mode estimated the frame time by subtracting
successive packet PTS values. This is complete nonsense for video
codecs which use reordering. The code compensated frame times for this
nonsense using the FPS value, but confused the rest of the player with
nonsensical, jumping timestamps. So, all in all, this mode was not
very useful.
Repurpose this mode for fixed frame rate playback. This gives almost the
same behavior as the old mode with forced framerate (--fps option). The
result is simpler and often more robust.
Instead of passing the PTS as separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)
We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
these pts fields are not "really" opaque anymore, yet we treat them as
such. But the end result should be the same, and reordered_opaque is
marked as partially deprecated (it's not clear whether it's really
deprecated).
When mpv is started with some video filters set (--vf is used), and
hardware decoding is requested, and hardware decoding would be possible,
but is prevented due to video filters that accept software formats only,
the fallback didn't work properly sometimes.
This fallback works rather violently: it tries to initialize the filter
chain, and if it fails it throws away the frame decoded using the
hardware, and retries with software. The case that didn't work was when
decoding the current packet didn't immediately lead to a new frame. Then
the filter chain wouldn't be reinitialized, and the playloop would stop
playback as soon as it encounters the error flag.
Fix this by resetting the filter error flag (back to "uninitialized"),
which is a rather violent, but somewhat working solution.
The fallback in general should perhaps be cleaned up later.
If the --fps option was given (MPOpts->force_fps), the demuxer FPS value
was overwritten with the forced value. This was fine, since the demuxer
value wasn't needed anymore. But with the recent changes not to write to
the demuxer stream headers, we don't want to do this anymore. So
maintain the (forced/updated) FPS value in dec_video->fps.
The removed code in loadfile.c is probably redundant, and an artifact
from past refactorings.
Note that sub.c will now always use the demuxer FPS value, instead of
the user override value. I think this is fine, because it used the
demuxer's video size values too. (And it's rare that these values are
used at all.)
Now the actual decoder doesn't need to care about this anymore, and it's
handled in generic code instead. This simplifies vd_lavc.c, and in
particular we don't need to detect format changes in the old way
anymore.
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced via
a pointer type only, it can be forward referenced, and the definition of
the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.
Also sneak in some sanity checks.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.
This drops the --pp option, which was probably broken for a while. The
option automatically inserted the "pp" filter. The value passed to it
was ignored (which is probably broken, it always selected maximal
quality).
Inserting this filter can be done simply with --vf=pp, so this is not
needed anymore.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
has to be called to possibly load it. This makes vf_vavpp work with
software decoding. (Hardware decoding loads the backend before the
filter is initialized, so this case is different.)
Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
fails, the info will be left blank.
The existing code tried to remove the "extra" profile flags for h264.
FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
only for profiles the vdpau/vaapi APIs don't support.
The FF_PROFILE_H264_CONSTRAINED flag on the other hand is added to
H264_BASELINE, except that it makes the file a real subset of H264_MAIN
and H264_HIGH. Removing that flag would select the BASELINE profile,
which appears to be rarely supported by hardware decoders. This means we
accidentally rejected perfectly hardware decodable files. Use MAIN for
it instead.
(vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
a new thing, and is not reported as supported where I tried. So don't
bother to check it, and do the same as on vdpau.)
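A hedged sketch of the profile normalization described above (the
FF_PROFILE_* constants are libavcodec's; the helper itself is illustrative):
-----------
#include <libavcodec/avcodec.h>

static int normalize_h264_profile_sketch(int profile)
{
    // The INTRA flag only shows up on profiles the vdpau/vaapi APIs don't
    // support anyway, so it can simply be stripped.
    profile &= ~FF_PROFILE_H264_INTRA;
    // Constrained baseline is a true subset of MAIN/HIGH; stripping the
    // CONSTRAINED flag would select BASELINE (rarely supported by hardware
    // decoders), so treat it as MAIN instead.
    if (profile == (FF_PROFILE_H264_BASELINE | FF_PROFILE_H264_CONSTRAINED))
        profile = FF_PROFILE_H264_MAIN;
    return profile;
}
-----------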
See github issue #204.
This was supposed to handle preemption better. I still think the current
state isn't very nice, since the decoder can "accidentally" call the
previous render function after preemption (instead of calling the
reloaded function), so there might be issues. But all in all, this
dummy_render function is a bit confusing, and still not entirely
correct, so it's not worth it.
This removes "--hwdec=crystalhd".
I doubt anyone even tried to use this. But even if someone wants to
use it, the decoders can still be explicitly invoked with e.g.:
--vd=lavc:h264_crystalhd
The only advantage our special code provided was fallback to
software decoding. (But I'm not sure how the ffmpeg crystalhd
pseudo-decoder actually behaves.)
Removing this will allow some simplifications as soon as we don't need
vdpau_old.c anymore.
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
Before the bitstream_buffers field was deprecated, you had to free it,
otherwise you would leak memory.
(Although vdpau.c uses a new API, they managed to introduce a new
deprecation this quickly. This is a complaint.)
This introduces a memory leak of 12 bytes per file on some
_older_ libavcodec versions. This is minor enough that I don't care.
VA-API's OpenGL/GLX interop is pretty bad and perhaps slow (renders a
X11 pixmap into a FBO, and has to go over X11, probably involves one or
more copies), and this code serves more as an example, rather than for
serious use. On the other hand, this might still work much better than
vo_vaapi, even if slightly slower.
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl, without having to read the video data back from GPU.
This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.
Add infrastructure to support this. The next commit will add support for
VA-API.
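The interface of such a backend might look roughly like this (a sketch; the
struct and field names are assumptions, not the actual vo_opengl code):
-----------
#include <GL/gl.h>

struct gl_hwdec;     // per-backend state, created by the VO
struct mp_image;     // a decoded hardware frame

struct gl_hwdec_driver_sketch {
    const char *api_name;   // e.g. "vaapi"
    // Set up the API/GL interop and create the output textures.
    int (*create)(struct gl_hwdec *hw);
    // Make the given hardware frame available through the backend's
    // OpenGL textures.
    int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                     GLuint *out_textures);
    void (*destroy)(struct gl_hwdec *hw);
};
-----------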
The configure followed 5 different conventions of defines because the next
guy always wanted to introduce a new, better way to unify them[1]. For a
hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
We had some code for checking profiles earlier, which was removed in
commits 2508f38 and adfb71b. These commits mentioned that (working) hw
decoding was sometimes prevented due to profile checking, but I can't
find the samples anymore that showed this behavior. Also, I changed my
opinion, and I think checking the profiles is something that should be
done for better fallback to software decoding behavior.
The checks roughly follow VLC's vdpau profile checks, although we do
not check codec levels. (VLC's profile checks aren't necessarily
completely correct, but they're a welcome help anyway.)
Add a --vd-lavc-check-hw-profile option, which skips the profile check.
We mixed the "old" AVFrame management functions (avcodec_alloc_frame,
avcodec_free_frame) with reference counting. This doesn't work
correctly; you must use av_frame_alloc and av_frame_free. Of course
ffmpeg doesn't warn us about the bad usage, but will just mess up
things silently. (Thanks a lot...)
While the alloc function seems to be 100% compatible, the free function
will do bad things, such as freeing memory that might still be
referenced by another frame. I didn't experience any actual bugs, but
maybe that was pure luck.
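For reference, the correct pairing with refcounted frames looks roughly like
this (plain libavutil API; the surrounding decode calls are omitted):
-----------
#include <libavutil/frame.h>

static void frame_lifetime_sketch(void)
{
    AVFrame *frame = av_frame_alloc();   // not avcodec_alloc_frame()
    if (!frame)
        return;
    // ... pass the frame to the decoder, take extra references with
    // av_frame_ref() where needed ...
    av_frame_free(&frame);               // not avcodec_free_frame(); this
                                         // drops only this reference instead
                                         // of freeing data that other frames
                                         // may still reference
}
-----------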
This commit adds the --force-window option, which will cause mpv always
to create a window when started. This can be useful when pretending that
mpv is a GUI application (which it isn't, but users pretend anyway), and
playing audio files would run mpv in the background without giving a
window to control it.
This doesn't actually create the window immediately: it only does so
only after initializing playback and when it is clear that there won't
be any actual video. This could be a problem when starting slow or
completely stuck network streams (mpv would remain frozen in the
background), or if video initialization somehow is stuck forever in
an in-between state (like when the decoder doesn't output a video
frame, but doesn't return an error either). Well, we can pretend only
so much that mpv is a GUI application.
Don't allocate a VAImage and a mp_image every time. VAImage are cached
in the surfaces themselves, and for mp_image an explicit pool is
created. The retry loop runs only once for each surface now.
This also makes use of vaDeriveImage() if possible.
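The derive-or-copy idea looks roughly like this (a sketch; the helper and its
parameters are illustrative, the va* calls are the real VA-API functions, and
error handling plus the per-surface VAImage caching are omitted):
-----------
#include <va/va.h>

static VAStatus read_va_surface_sketch(VADisplay dpy, VASurfaceID surface,
                                       VAImageFormat *fallback_fmt,
                                       int w, int h, VAImage *img)
{
    // Try to map the surface directly, without a copy.
    VAStatus st = vaDeriveImage(dpy, surface, img);
    if (st == VA_STATUS_SUCCESS)
        return st;
    // Fall back to an explicit copy through a newly created VAImage.
    st = vaCreateImage(dpy, fallback_fmt, w, h, img);
    if (st != VA_STATUS_SUCCESS)
        return st;
    return vaGetImage(dpy, surface, 0, 0, w, h, img->image_id);
}
-----------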
Nothing really accesses it. Subtitle initialization actually does in a
somewhat meaningful way, but there the container size is probably fine, as
subtitles were always initialized before the first video frame was
decoded.
Now writing -1 to the 'aspect' property resets the video to the auto
aspect ratio. Returning the aspect from the property becomes a bit more
complicated, because we still try to return the container aspect ratio
if no frame has been decoded yet.
This function would probably be useful in other places too.
I'm not sure why vd.c doesn't apply the aspect if it changes size by
less than 4 pixels. Maybe it's supposed to avoid ugly results with bad
scalers if the difference is too small to be noticed normally.
This code is actually quite inefficient: it reuses the (slow, simple)
screenshot code. It uses an inefficient method to read the image
(vaGetImage() instead of vaDeriveImage()), allocates new memory for
each frame that is read, and it tries all image formats again each
time.
Also, in my tests it always picked NV12 as image format, which is not
ideal if you actually want to filter the video, and vo_xv can't handle
this format without conversion either.
However, a user confirmed that it worked for him, so everything is fine.
This will allow GPU read-back with process_image.
We have to restructure how init_vo() works. Instead of initializing the
VO before process_image is called, rename init_vo() to
update_image_params(), and let it update the params only. Then we really
initialize the VO after process_image.
As a consequence of these changes, already decoded hw frames are
correctly unreferenced if creation of the filter chain fails. This
could trigger assertions on VO uninitialization, because it's not
allowed to reference hw frames past VO lifetime.
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.
Signed-off-by: wm4 <wm4@nowhere>
In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set,
sh->aspect is overwritten. With software decoding fallback behaviour,
this makes the aspect ratio from the container get ignored, since
last_sample_aspect_ratio was already set in the first try with hardware
decoding.
The --deinterlace option does on playback start what the "deinterlace"
property normally does at runtime. You could do this before by using the
--vf option or by messing with the vo_vdpau default options, but this
new option is supposed to be a "foolproof" way.
The main motivation for adding this is so that the deinterlace property
can be restored when using the video resume functionality
(quit_watch_later command).
Implementation-wise, this is a bit messy. The video chain is rebuilt in
mpcodecs_reconfig_vo(), where we don't have access to MPContext, so the
usual mechanism for enabling deinterlacing can't be used. Further,
mpcodecs_reconfig_vo() is called by the video decoder, which doesn't
have access to MPContext either. Moving this call to mplayer.c isn't
currently possible either (see below). So we just do this before frames
are filtered, which potentially means setting the deinterlacing every
frame. Fortunately, setting deinterlacing is stable and idempotent, so
this is hopefully not a problem. We also add a counter that is
incremented on each reconfig to reduce the amount of additional work per
frame to nearly zero.
The reason we can't move mpcodecs_reconfig_vo() to mplayer.c is because
of hardware decoding: we need to check whether the video chain works
before we decide that we can use hardware decoding. Changing it so that
this can be decided in advance without building a filter chain sounds
like a good idea and should be done, but we aren't there yet.
Until now, video output levels (obscure feature, like using TV screens
that require RGB output in limited range, similar to YUY) still required
handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new
mp_image_params code. This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE
is not needed at all anymore in VOs that use the reconfig callback. The
result of VOCTRL_GET_YUV_COLORSPACE is now only used for the
colormatrix related properties (basically, for display on OSD). For
other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config
instead of twice.
This affects VOs which just reuse the mp_image from draw_image() to
return screenshots. The aspect of these images is never different
from the aspect the screenshots should be, so there's no reason to
adjust the aspect in these cases.
Other VOs still need it in order to restore the original image
attributes.
This requires some changes to the video filter code to make sure that
the aspect in the passed mp_images is consistent.
The changes in mplayer.c and vd_lavc.c are (probably) not strictly
needed for this commit, but contribute to consistency.
Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec'
decoder in FFmpeg.
The Good: This new implementation has some advantages over the previous one:
- It works with Libav: vda_h264_dec never got into Libav since they prefer
client applications to use the hwaccel API.
- It is way more efficient: in my tests this implementation yields a
reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec` and
~65-75% compared to h264 software decoding. This is mainly because
`vo_corevideo` was adapted to perform direct rendering of the
`CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.
The Bad:
- `vo_corevideo` is required to use VDA decoding acceleration.
- only works with versions of ffmpeg/libav new enough (needs reference
refcounting). That is FFmpeg 2.0+ and Libav's git master currently.
The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture.
On one hand this makes the code simple since Apple's OpenGL implementation
actually supports this out of the box. It would be nice to support other
output image formats and choose the best format depending on the input, or at
least making it configurable. My tests indicate that CPU usage actually
increases with a 420p IMGFMT output which is not what I would have expected.
NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a problem
since we rewrap the CVPixelBufferRef in our mp_image that properly forwards
CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
CVPixelBufferRef.
So, for FFmpeg use something more recent than `b3d63995`; for Libav the patch
was posted to the dev ML in July and has been in review since, because,
apparently, the proposed fix is rather hacky.
Now the code does the same as the original MPlayer VAAPI patch, instead
of trying to map the profiles exactly.
See previous commit for justification and discussion.
Instead, do what MPlayer did all these years. It worked for them, so
there's probably no reason to change this.
Apparently fixes playback with some files, where the VDPAU decoder does
not formally support a profile, but decoding works with a more powerful
profile anyway.
Though note that MPlayer did this because it couldn't do it in a better
way (no decoder reported profiles available when creating the VDPAU
decoder), so it's not entirely clear whether this is a good idea. An
alternative implementation might try to map the profiles exactly, and
do some fallbacks if the exact profile is not available. But this
hack-solution works too.
Libav's <libavcodec/vdpau.h> header uses some libavcodec symbols without
forward-declaring them and without including the headers declaring them.
FFmpeg's header for this is fine.
About this issue, it would be better if the surfaces could be allocated
with the real size, and the vdpau video mixer could be created with that
size as well. That would be a bit hard, because the real surface size
would have to be communicated to vdpau. So I'm going with this solution. vaapi
seems to be fine with either surface size, so there's hopefully no
problem.
Instead of passing AVFrame. This also moves the mysterious logic about
the size of the allocated image to common code, instead of duplicating
it everywhere.
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.
This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).
On the other hand, some features not present in the original patch were
added, like screenshot support.
VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and should be rather slow. In some cases, only either OSD
or subtitles can be shown at the same time (because OSD is drawn first,
OSD is preferred).
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of being
accepted.)
Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on a nvidia system; however this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
Change how the HW decoding stuff is organized, the way it's initialized
in particular. Instead of duplicating the list of supported codecs for
hwaccel decoders, add a probe function which allows each decoder to
report whether it supports a given codec.
Add an "auto" choice to the --hwdec option, which automatically enables
hardware decoding if libavcodec and/or the VO supports it.
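Conceptually, the probe mechanism might look like this (illustrative names;
the real interface in vd_lavc.c differs):
-----------
struct mp_hwdec_info;

struct vd_lavc_hwdec_sketch {
    const char *api_name;   // "vdpau", "vaapi", ...
    // Return 0 if this API can decode the given codec (e.g. "h264"),
    // or a negative error code otherwise. --hwdec=auto walks the list
    // and picks the first backend whose probe succeeds.
    int (*probe)(struct mp_hwdec_info *info, const char *decoder);
};
-----------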
What mpv prints on the terminal changes a bit. Now it will just print
a single line whether hw decoding is used or not (and nothing at all if
no hw decoding at all was requested). The pretty violent fallback from
hw decoding to software decoding is still quite verbose and evil-looking
though.
CONFIG_VDPAU was just defined to 0, instead of undefined when vdpau was
unavailable. I blame the old mplayer code, which apparently can't have
consistent conventions.
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").
Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)
Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)
Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.
Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size instead of locking it to what the VO was configured.
The non-hwdec code also has slightly different allocation behavior now.
Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
See previous commits. This time, the lock is kept for rather long
times (e.g. for the duration of a big image memory allocation), but
this (probably) still doesn't matter at all.
This also affects legacy code only (pre-refcounting libavcodec).
In general, this warning can hint at actual bugs. We don't enable it
yet, because it would conflict with some unmerged code, and we should
check with clang too (this commit was done by testing with gcc).
There was a MPOpts fullscreen field, a mp_vo_opts.fs field, and
VOFLAG_FULLSCREEN. Remove all these and introduce a
mp_vo_opts.fullscreen flag instead.
When VOs receive VOCTRL_FULLSCREEN, they are supposed to set the
current fullscreen mode to the state in mp_vo_opts.fullscreen. They
also should do this implicitly on config().
VOs which are capable of doing so can update the mp_vo_opts.fullscreen
if the actual fullscreen mode changes (e.g. if the user uses the
window manager controls). If fullscreen mode switching fails, they
can also set mp_vo_opts.fullscreen to the actual state.
Note that the X11 backend does almost none of this, and it has a
private fs flag to store the fullscreen flag, instead of getting it
from the WM. (Possibly because it has to deal with broken WMs.)
The fullscreen option has to be checked on config() to deal with
the -fs option, especially with something like:
mpv --fs file1.mkv --{ --no-fs file2.mkv --}
(It should start in fullscreen mode, but go to windowed mode when
playing file2.mkv.)
Wayland changes by: Alexander Preisinger <alexander.preisinger@gmail.com>
Cocoa changes by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of
the normal video format negotiation. The colorspace is passed down like
other video params with config/reconfig calls.
Forcing colorspaces (via the --colormatrix options and properties) is
handled differently too: if it's changed, completely reinit the video
chain. This is slower and requires a precise seek to the same position
to perform an update, but it's simpler and less bug-prone. Considering
switching the colorspace at runtime by user-interaction is a rather
obscure feature, this is a good change.
The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on it,
and would have to be changed to get rid of them. We'll do that later,
and convert them incrementally instead of in one go.
Note that controlling the output range now always works on VO level.
Basically, this means you can't get vf_scale to output full-range YUV
for whatever reason. If that is really wanted, it should be a vf_scale
option. The previous behavior didn't make too much sense anyway.
This commit fixes a few bugs (such as playing RGB video and converting
that to YUV with vf_scale - a recent commit broke this and forced the
VO to display YUV as RGB if possible), and might introduce some new
ones.
Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in
set_video_colorspace(). The difference is that the latter function just
makes the video filter chain (and VOs) force the detected colorspace,
and then throws it away, while the former is a bit more general and
central. Not really a big difference and it doesn't matter much in
practice, but it guarantees that there is no internal disagreement about
the colorspace.
Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does
better than them (except in rare corner cases), and the demuxers have
a bad influence on the rest of the code. Often they don't output
proper packets, and require additional audio and video parsing. Most
work only in --no-correct-pts mode.
Remove them to facilitate further cleanups.
Use the video decoder chroma location flags and render chroma locations
other than centered. Until now, we've always used the intuitive and
obvious centered chroma location, but H.264 uses something else.
FFmpeg provides a small overview in libavcodec/avcodec.h:
-----------
/**
 *  X   X      3 4 X      X are luma samples,
 *             1 2        1-6 are possible chroma positions
 *  X   X      5 6 X      0 is undefined/unknown position
 */
enum AVChromaLocation{
    AVCHROMA_LOC_UNSPECIFIED = 0,
    AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
    AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
    AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
    AVCHROMA_LOC_TOP         = 4,
    AVCHROMA_LOC_BOTTOMLEFT  = 5,
    AVCHROMA_LOC_BOTTOM      = 6,
    AVCHROMA_LOC_NB          , ///< Not part of ABI
};
-----------
The visual difference is literally minimal, but since videophiles
apparently consider this detail a quality mark of a video renderer,
support it anyway. We don't bother with chroma locations other than
centered and left, though.
Not sure about correctness, but it's probably ok.
The filter chain and the video outputs have config() functions. They are
strictly limited to transferring the video size and format. Other
parameters (like color levels) have to be transferred separately.
Improve upon this by introducing a separate set of reconfig() functions,
which use mp_image_params to carry format parameters. This struct
contains all image format related parameters from config(), plus
additional parameters such as colorspace.
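A rough sketch of what such a parameter struct and reconfig entry point carry
(field names are illustrative, not the exact mpv definitions):
-----------
struct mp_image_params_sketch {
    int imgfmt;          // IMGFMT_* pixel format
    int w, h;            // coded size
    int d_w, d_h;        // display size (after aspect scaling)
    int colorspace;      // e.g. a value from mpv's enum mp_csp (BT.601/709)
    int colorlevels;     // TV vs. PC range (enum mp_csp_levels in mpv)
};

struct vf_instance;      // a video filter instance

// Replaces config(width, height, d_width, d_height, flags, fmt):
typedef int (*vf_reconfig_sketch)(struct vf_instance *vf,
                                  struct mp_image_params_sketch *params);
-----------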
Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
an example/test case, but vo_opengl will need it later.
The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
information is now handed to the VOs via reconfig(). The getter,
VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
Audio and video had their own (very similar) functions to initialize an
AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
struct). Add a common function for these.
Also use this function for sd_lavc_conv. This is actually a functional
change, as some libavformat subtitle demuxers add weird out-of-band
stuff as side-data.
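A minimal sketch of such a shared setup helper (the demux_packet fields
follow mpv, but the struct here is a simplified stand-in; timestamp
conversion is left out):
-----------
#include <libavcodec/avcodec.h>

struct demux_packet {          // simplified stand-in for mpv's packet struct
    unsigned char *buffer;
    int len;
    double pts;
};

static void set_av_packet_sketch(AVPacket *pkt, struct demux_packet *mpkt)
{
    av_init_packet(pkt);                       // reset to defaults
    pkt->data = mpkt ? mpkt->buffer : NULL;    // no copy; points into mpkt
    pkt->size = mpkt ? mpkt->len : 0;
    // pkt->pts would be filled from mpkt->pts, depending on how the caller
    // converts timestamps.
}
-----------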
Most of these are rather questionable, the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
libavcodec generally shouldn't have this problem anymore (if libavcodec
ever had it). All other video decoders are gone. In any case, if this
commit actually causes regressions, these are libavcodec bugs and should
be fixed there instead.
Add the "vf" command, which allows changing the video filter chain at
runtime. For example, the 'y' key could be bound to toggle deinterlacing
by adding 'y vf toggle yadif' to the input.conf.
Reconfiguring the video filter chain normally resets the VO, so that it
will be "stuck" until a new video frame is rendered. To mitigate this, a
seek to the current position is issued when the filter chain is changed.
This is done only if playback is paused, because normal playback will
show an actual new frame quickly enough.
If vdpau hardware decoding is used, filter insertion (whether it fails
or not) will break the video for a while. This is because vo_vdpau
resets decoding related things on vo_config().
This tried to use ctx->pic (last decoded AVFrame) for the frame bounds.
However, if av_frame_unref() on the AVFrame is called, the function will
reset _all_ AVFrame fields, even those which are not involved with
memory management. As a result, mpcodecs_config_vo() was called with 0
width/height, which made the function exit early, instead of
reconfiguring the filter chain.
Go back to using mpcodecs_config_vo() directly. (That's what it did
before this VDCTRL was originally introduced; the original reason for
it disappeared.)
This changes the code so that it does the same as MPlayer, mplayer2
and mpv before ref-counted AVFrame. The problem is that get_buffer2
is called with aligned frame dimensions, while get_buffer didn't. This
breaks the mpv video frame size change detection.
This allows using the vdpau decoders with -vd without having to use
the -hwdec switch (basically like in mplayer).
Note that this way of selecting the hardware decoder is still
deprecated. libavcodec went away from adding special decoder entries
for hardware decoding, and instead makes use of the "hwaccel"
architecture, where hardware decoders use the same decoder names as
the software decoders. The old vdpau special decoders will probably
be deprecated and removed in the future.
libavcodec decoder initialization failure caused a segfault, because it
wasn't properly reported back in init().
Also remove the return value from init_avctx(), which actually makes
things simpler. Instead, ctx->avctx can be checked to see whether
initialization was ok.
Makes the code a bit simpler to follow, at least in the "modern"
decoding path (update_video_nocorrect_pts() is used with old demuxers,
which don't return proper packets and need further parsing, so this code
looks less simple now).
Removes almost every global variable in vo.h and puts them in a special struct
in MPOpts for video output related options.
Also we completely remove the options/globals pts and refresh rate because
they were unused.
OPT_BASE_STRUCT defines which struct the OPT_ macros (like OPT_INT etc.)
reference implicitly, since these macros take struct member names but no
struct type. Normally, only cfg-mplayer.h should need this, and other
places shouldn't be bothered with having to #undef it.
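For illustration, the pattern looks roughly like this (simplified; the option
name and field are placeholders, the exact macro arguments may differ, and it
relies on mpv's options/m_option.h):
-----------
// In the file defining the option table:
#define OPT_BASE_STRUCT struct MPOpts

static const m_option_t example_opts[] = {
    // "example_field" is resolved as a member of OPT_BASE_STRUCT,
    // i.e. MPOpts.example_field.
    OPT_INT("example-option", example_field, 0),
    {0}
};

#undef OPT_BASE_STRUCT   // don't leak the define into unrelated files
-----------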
(Some files, like demux_lavf.c, still store their options in MPOpts. In
the long term, this should be removed, and handled like e.g. with VO
suboptions instead.)
This allowed making the player switch the monitor video mode when
creating the video window. This was a questionable feature, and with
today's LCD screens certainly not useful anymore. Switching to a random
video mode (going by video width/height) doesn't sound too useful
either.
I'm not sure about the win32 implementation, but the X part had several
bugs. Even in mplayer-svn (where x11_common.c hasn't been receiving any
larger changes for a long time), this code is buggy and doesn't do the
right thing anyway. (And what the hell _did_ it do when using multiple
physical monitors?)
If you really want this, write a shell script that calls xrandr before
and after calling mpv.
vo_sdl still can do mode switching, because SDL has native support for
it, and using it is trivial. Add a new sub-option for this.
The string is deallocated by the callee after initialization, so
fallback at runtime passes a deallocated string to libavcodec, which
results in random crashes. Regression introduced by commit 4d016a9.
Instead of putting codec header data into WAVEFORMATEX and
BITMAPINFOHEADER, pass it directly via AVCodecContext. To do this, we
add mp_copy_lav_codec_headers(), which copies the codec header data
from one AVCodecContext to another (originally, the plan was to use
avcodec_copy_context() for this, but it looks like this would turn
decoder initialization into an even worse mess).
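Copying the header data essentially means duplicating extradata and a few
related fields; a hedged sketch (the real mp_copy_lav_codec_headers() copies
more fields than shown here):
-----------
#include <string.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

static int copy_lav_headers_sketch(AVCodecContext *dst,
                                   const AVCodecContext *src)
{
    av_freep(&dst->extradata);
    dst->extradata_size = 0;
    if (src->extradata && src->extradata_size > 0) {
        // libavcodec requires padding on the extradata buffer.
        dst->extradata = av_mallocz(src->extradata_size +
                                    FF_INPUT_BUFFER_PADDING_SIZE);
        if (!dst->extradata)
            return -1;
        memcpy(dst->extradata, src->extradata, src->extradata_size);
        dst->extradata_size = src->extradata_size;
    }
    dst->codec_tag = src->codec_tag;
    return 0;
}
-----------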
Get rid of the silly CodecID <-> codec_tag mapping. This was originally
needed for codecs.conf: codec tags were used to identify codecs, but
libavformat didn't always return useful codec tags (different file
formats can have different, overlapping tag numbers). Since we don't
go through WAVEFORMATEX etc. and pass all header data directly via
AVCodecContext, we can be absolutely sure that the codec tag mapping is
not needed anymore.
Note that this also destroys the "standard" MPlayer method of exporting
codec header data. WAVEFORMATEX and BITMAPINFOHEADER made sure that
other non-libavcodec decoders could be initialized. However, all these
decoders have been removed, so this is just cruft full of old hacks that
are not needed anymore. There's still ad_spdif and ad_mpg123, but neither
of these need codec header data. Should we ever add non-libavcodec
decoders, better data structures without the past hacks could be added
to export the headers.
Rearrange some code to make it more readable. Remove some dead code,
and stop printing AVI headers in demux_lavf. (These are not actual AVI
headers, just for internal use.)
There should be no functional changes, other than reducing output in
verbose mode.
Use codec names instead of FourCCs to identify codecs. Rewrite how
codecs are selected and initialized. Now each decoder exports a list
of decoders (and the codec it supports) via add_decoders(). The order
matters, and the first decoder for a given codec is preferred over
the other decoders. E.g. all ad_mpg123 decoders are preferred over
ad_lavc, because it comes first in the mpcodecs_ad_drivers array.
Likewise, decoders within ad_lavc that are enumerated first by
libavcodec (using av_codec_next()) are preferred. (This is actually
critical to select h264 software decoding by default instead of vdpau.
libavcodec and ffmpeg/avconv use the same method to select decoders by
default, so we hope this is sane.)
The codec names follow libavcodec's codec names as defined by
AVCodecDescriptor.name (see libavcodec/codec_desc.c). Some decoders
have names different from the canonical codec name. The AVCodecDescriptor
API is relatively new, so we need a compatibility layer for older
libavcodec versions for codec names that are referenced internally,
and which are different from the decoder name. (Add a configure check
for that, because checking versions is getting way too messy.)
demux/codec_tags.c is generated from the former codecs.conf (minus
"special" decoders like vdpau, and excluding the mappings that are the
same as the mappings in libavformat's exported RIFF tables). It contains
all the mappings from FourCCs to codec name. This is needed for
demux_mkv, demux_mpg, demux_avi and demux_asf. demux_lavf will set the
codec as determined by libavformat, while the other demuxers have to do
this on their own, using the mp_set_audio/video_codec_from_tag()
functions. Note that the sh_audio/video->format members don't uniquely
identify the codec anymore, and sh->codec takes over this role.
Replace --ac/--vc/--afm/--vfm with the new --vd/--ad options, which
cover the functionality of the removed switches.
Note: there's no CODECS_FLAG_FLIP flag anymore. This means some obscure
container/video combinations (e.g. the sample Film_200_zygo_pro.mov)
are played flipped. ffplay/avplay doesn't handle this properly either,
so we don't care and blame ffmpeg/libav instead.
OPT_MAKE_FLAGS() used to emit two options (one with "no" prefixed),
but that has been long removed by special casing flag options in the
option parser. OPT_FLAG_ON() used to imply that there's no "no-"
prefixed option, but this hasn't been the case for a while either.
(Conceptually, it has been replaced by OPT_FLAG_STORE().)
Remove OPT_FLAG_OFF, which was unused.
The decoder returns with AVFrame.format not correctly set for some h264
files (strangely only some). We have to access AVCodecContext.pix_fmt
instead. On newer libavcodec versions, it's the other way around: the
AVCodecContext.pix_fmt may be incorrectly set on pixel format changes,
and you are supposed to use AVFrame.format.
The same problem probably exists on older ffmpeg versions too.
Now the calculations of the final display size are done after the filter
chain. This makes the difference between display aspect ratio and window
size a bit more clear, especially in the -xy case.
With an empty filter chain, the behavior of the options should be the
same, except that they don't affect vo_image and vo_lavc anymore.
This printed per-frame statistics into a file, like bitrate or frame
type. Not very useful and accesses obscure AVCodecContext fields
(danger of deprecation/breakage), so get rid of it.
This was a "broken misfeature" according to Libav developers. It wasn't
implemented for modern codecs (like h264), and has been removed from
Libav a while ago (the AVCodecContext field has been marked as
deprecated and its value is ignored). FFmpeg still supports it, but it
isn't of much use due to the aforementioned reasons.
Remove the code to enable it.
This is more correct. Not all frame specific fields are in AVFrame,
such as colorspace and color_range, and these are still queried
through the decoder context.
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)
Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.
Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.cfg entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)
The TV code has not been tested.
Some corrective changes regarding endian and other image format flags
creep in.
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats
are supposed to be palettized:
IMGFMT_BGR8
IMGFMT_RGB8
IMGFMT_BGR4_CHAR
IMGFMT_RGB4_CHAR
IMGFMT_BGR4
IMGFMT_RGB4
Of these, only BGR8 and RGB8 are actually treated as palettized in some
way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB
formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy
hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale
does not support AV_PIX_FMT_PAL8 output in the first place.)
Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps
to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
without any palette use. Enabling it in vo_x11 or using it as vf_scale
input seems to give correct results.
mp_image_alloc_planes() allocated images with minimal stride, even if
the resulting stride was unaligned. It was the responsibility of
vf_get_image() to set an image's width to something larger than
required to get an aligned stride, and then crop it. Always allocate
with aligned strides instead.
Get rid of IMGFMT_IF09 special handling. This format is not used
anymore. (IF09 has 4x4 chroma sub-sampling, and that is what it was
mainly used for - this is still supported.) Get rid of swapped chroma
plane allocation. This is not used anywhere, and VOs like vo_xv,
vo_direct3d and vo_sdl do their own swapping.
Always round chroma width/height up instead of down. Consider 4:2:0 and
an uneven image size. For luma, the size was left uneven, and the chroma
size was rounded down. This doesn't make sense, because chroma would be
missing for the bottom/right border.
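The rounding itself is just a ceiling division by the subsampling factor;
for example (a sketch, with chroma shifts xs/ys being 1/1 for 4:2:0):
-----------
// Round the chroma plane size up, so that an uneven luma size still gets
// chroma samples covering the bottom/right border.
static void chroma_size_sketch(int w, int h, int xs, int ys, int *cw, int *ch)
{
    *cw = (w + (1 << xs) - 1) >> xs;   // ceil(w / 2^xs)
    *ch = (h + (1 << ys) - 1) >> ys;   // ceil(h / 2^ys)
}
-----------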
Remove mp_image_new_empty() and mp_image_alloc_planes(), they were not
used anymore, except in draw_bmp.c. (It's still allowed to setup
mp_images manually, you just can't allocate image data with them
anymore - this is also done in draw_bmp.c.)
This allows avoiding a copy.
The logic about using the AVFrame.reference field if buffer hints are
not available is similar to mplayer-svn's direct rendering code. This is
needed because h264 decoding doesn't set buffer hints.
<wm4> michaelni: couldn't it set buffer hints from AVFrame.reference then?
<michaelni> i see no reason ATM why that wouldnt be possible
OK...
Change the entire filter API to use reference counted images instead
of vf_get_image().
Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).
Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check if the
image is writeable (with mp_image_is_writeable()), and do true in-place
if that's the case. This affects filters like vf_gradfun and vf_sub.
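The in-place pattern then looks roughly like this (a sketch; the prototypes
are approximated from video/mp_image.h, and mp_image_make_writeable() is
assumed to copy the data only when the image is shared):
-----------
#include <stdbool.h>

struct mp_image;
// Prototypes approximated for this sketch:
bool mp_image_is_writeable(struct mp_image *img);
void mp_image_make_writeable(struct mp_image *img);

static struct mp_image *filter_inplace_sketch(struct mp_image *img)
{
    if (!mp_image_is_writeable(img))
        mp_image_make_writeable(img);   // makes a private, writable copy
    // ... modify the pixel data of img in place ...
    return img;
}
-----------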
Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
Setting the size of a mp_image must be done with mp_image_set_size()
now. Do this to guarantee that the redundant fields (like chroma_width)
are updated consistently. Replacing the redundant fields by function
calls would probably be better, but there are too many uses of them,
and it would be a bit less convenient.
Most code actually called mp_image_setfmt(), which did this as well.
This commit just makes things a bit more explicit.
Warning: the video filter chain still sets up mp_images manually,
and vf_get_image() is not updated.
Note that if the codec doesn't support DR1, the image has to be copied.
There is no other way to guarantee that the image will be valid after
decoding the next image.
The only important codec that doesn't support DR1 yet is rawvideo. It's
likely that ffmpeg/Libav will fix this at some time. For now, this
decoder uses an evil hack and puts pointers to the packet data into the
returned frame. This means the image will actually get invalid as soon
as the corresponding video packet is free'd, so copying the image is the
only reasonable thing anyway.
Replace libavcodec's native buffer allocation with code taken from
ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is directly
copied from cmdutils.c. Note that this is quite arcane code, which
contains some workarounds for decoder bugs and the like. This is not
really a maintenance burden, since fixes from ffmpeg can be directly
applied to the code in lavc_dr1.c.
It's unknown why libavcodec doesn't provide such a function directly.
avcodec_default_get_buffer() can't be reused for various reasons.
There's some hope that the work known as The Evil Plan [1] will make
custom get_buffer implementations unneeded.
The DR1 support as of this commit does nothing. A future commit will
use it to implement ref-counting for mp_image (similar to how AVFrame
will be ref-counted with The Evil Plan.)
[1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html
Deprecate the hardware specific video codec entries (like ffh264vdpau).
Replace them with the --hwdec switch, which requests that a specific
hardware decoding API should be used. The codecs.conf entries will be
removed at a later time, but for now they are useful for testing and
compatibility.
Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.
Add a fallback if hardware decoding fails. Most hardware decoders
(including vdpau) support only a subset of h264, and having such a
fallback is supposed to enable a better user experience.
This was buggy and didn't even work in the simplest cases. It was
disabled when multithreading was used, and always disabled for h264.
A better alternative (reference counting) will be added later.
Hardware decoding still uses the ffmpeg DR mechanism, but has been
decoupled from mpv's DR in the previous commit.
vdpau hardware decoding used the DR (direct rendering) path to let the
decoder query a surface from the VO. Special-case the HW decoding path
instead, to make it separate from DR.
This mutated the variable for the thread count option
(lavc_param->threads) on decoder initialization. This didn't have any
practical relevance, unless formats supporting hardware video decoding
and other formats were played in the same mpv instance. In this case,
hardware decoding would set threads to 1, and all files played after
that would use only one thread as well even with software decoding.
Remove XvMC leftover (CODEC_CAP_HWACCEL).
Simplify the decoder pixel format handling by making it handle only
the case vd_lavc needs: a video stream always decodes to a single
pixel format.
Remove the handling for multiple pixel formats, and remove the
codecs.conf pixel format declarations that are left.
Remove the handling of "ambiguous" pixel formats like YV12 vs. I420 (via
VDCTRL_QUERY_FORMAT etc.). This is only a problem if the video chain
supports I420, but not YV12, which doesn't seem to be the case anywhere,
and in fact would not have any advantage.
Make the "flip" flag a global per-codec flag, rather than a pixel format
specific flag. (Some ffmpeg decoders still return a flipped image, so
this has to be done manually.) Also fix handling of the flip operation:
do not overwrite the global flip option, and make the --flip option
invert the codec flip option rather than overriding it.
Slices allowed filtering or drawing video in horizontal bands or
blocks. This allowed working on the video in smaller units. In theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.
In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
slice callback was disabled, because it can be called from another
thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
slices, so slices were rarely used.
- Most filters didn't actually support slices.
On the other hand, supporting slices led to code duplication and more
complex code in general. I made some experiments and didn't find any
actual measurable performance improvements when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.
The most broken thing about the slices code path is that slices can't
be queued, like it is done for images in vo.c.
For some reason, libavcodec abuses the slices rendering code path for
hardware decoding: in that case, the only purpose of the draw callback
is to pass a vdpau video surface object to video output. (It is unclear
to me why this had to use the slices code, instead of just returning an
AVFrame with the required vdpau state.)
Make this code separate within mpv, so that the internal slices code
path is not used for hardware decoding. Pass the vdpau state with
VOCTRL_HWDEC_DECODER_RENDER instead.
Remove the mencoder specific VOCTRLs.
The -zoom option enabled scaling with vo_x11. Remove the -zoom option,
and make its behavior default. Since vo_x11 has to use libswscale for
colorspace conversion anyway, which doesn't do actual extra scaling when
vo_x11 is run in windowed mode, there should be no speed difference with
this change.
The code removed from vf_scale attempted to scale the video to d_width/
d_height, which matters for anamorphic video and the --xy option only.
vo_x11 can handle these natively. The only case for which the removed
vf_scale code could matter is encoding with vo_lavc, but since that
didn't set VOFLAG_SWSCALE, nothing actually changes.
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.
Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux
Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode
libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.
Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relicts from the time mplayer used
ffmpeg internals.
sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).
Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.