These consult the vertical resolution, matching against 576 for
PAL and 480/486 for NTSC. The documentation has also been updated.
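The check itself is trivial; it amounts to something like this
(sketch with made-up names, not the actual code):

    static bool is_pal(int height)  { return height == 576; }
    static bool is_ntsc(int height) { return height == 480 || height == 486; }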
Signed-off-by: wm4 <wm4@nowhere>
With this change, XYZ input is directly converted to the output
colorspace wherever possible, and to the colorspace specified by the
tags and/or --primaries option, otherwise.
This commit also restructures some of the CMS code in gl_video.c to
hopefully make it clearer which decision is being done where and why.
This adds support for reading color primaries information from lavc,
categorized into BT.601-525, BT.601-625, BT.709 and BT.2020, and
passes it on to the
vo. In vo_opengl, we always generate the 3dlut against the wider BT.2020
and transform our source into this colorspace in the shader.
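The categorization from the lavc tags is roughly the following
(sketch; treat the exact MP_CSP_PRIM_* names as assumptions):

    switch (avpri) {
    case AVCOL_PRI_BT709:     return MP_CSP_PRIM_BT_709;
    case AVCOL_PRI_BT2020:    return MP_CSP_PRIM_BT_2020;
    case AVCOL_PRI_SMPTE170M: return MP_CSP_PRIM_BT_601_525;
    case AVCOL_PRI_BT470BG:   return MP_CSP_PRIM_BT_601_625;
    default:                  return MP_CSP_PRIM_AUTO;
    }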
Make sure every video filter has valid parameters for input and output.
(This also ensures we don't take possibly invalid decoder output, or
feed invalid decoder/filter output to VOs.)
Also, the updated image size check now (almost) works like the
corresponding check in FFmpeg.
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
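Callers now have to check for failure, along these lines (sketch;
the exact error handling depends on the caller):

    struct mp_image *img = mp_image_alloc(IMGFMT_420P, w, h);
    if (!img)
        return NULL; // skip output instead of calling abort()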
Commit 5e4e248 added an mp_image_params field to mp_image, and moved many
parameters to that struct. display_w/h was left redundant with
mp_image_params.d_w/d_h. These fields were supposed to be always in
sync, but it seems some code forgot to do this correctly, such as
vf_fix_img_params() or mp_image_copy_attributes(). This led to the
problem in github issue #756, because display_w/_h could become
incorrect.
It turns out that most code didn't use the old fields anyway. Just
remove them. Note that mp_image_params.d_w/d_h are supposed to be always
valid, so the additional checks for 0 shouldn't be needed. Remove these
checks as well.
Fixes #756.
qscale export has been completely removed from Libav 10, and FFmpeg has
an alternative API, so this code does nothing and only causes
deprecation warnings on Libav.
This is pretty obscure, so it didn't matter much. It still breaks
switching output levels at runtime, because the video output is not
reinitialized with the new params.
Image formats used to be FourCCs, so unsigned int was better. But now
it's annoying, and the only difference is that unsigned int takes
more typing than int.
Larger sizes can introduce overflows, depending on the image format. In
the worst case, something larger than 16000x16000 with 8 bytes per pixel
will overflow 31 bits.
Maybe there should be a proper failure path instead of a hard crash, but
not yet. I imagine anything that sets a higher image size than a known
working size should be forced to call a function to check the size (much
like in ffmpeg/libavutil).
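For illustration: 16000 * 16000 * 8 = 2,048,000,000 still fits into
31 bits, but 16384 * 16384 * 8 = 2^31 does not. Such a check could
look like this (sketch with a hypothetical helper name):

    static bool check_size(int w, int h, int max_bpp)
    {
        // keep w * h * max_bpp strictly below 2^31
        return w > 0 && h > 0 && (int64_t)w * h <= INT_MAX / max_bpp;
    }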
We got a crash in libavutil when encoding with Y8 (GRAY8). The reason
was that libavutil was copying an Y8 image allocated by us, and expected
a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear
what PSEUDOPAL means, and it makes literally no sense at all. However,
it does expect a palette allocated for some formats that are not
paletted, and libavutil crashed when trying to access the non-existent
palette.
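The fix amounts to reserving the palette plane even for these
formats when allocating, roughly (sketch; AVPALETTE_SIZE is
256 entries * 4 bytes):

    if (desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL)
        plane_size[1] = AVPALETTE_SIZE; // fake palette in plane 1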
pthreads should be available anywhere. Even if not, for environment
without threads a pthread wrapper could be provided that can't actually
start threads, thus disabling features that require threads.
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
Change talloc destructors so that they can never signal failure, and
don't return a status code. This makes our talloc copy even more
incompatible to upstream talloc, but on the other hand this is
preparation for getting rid of talloc entirely.
(The talloc replacement in the next commit won't allow the talloc_free
equivalent to fail, and the destructor return value would be useless.
But I don't want to change any mpv code either; the idea is that the
talloc replacement commit can be reverted for some time in order to
test whether the talloc replacement introduced a regression.)
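In terms of signatures, the change is (sketch):

    // upstream talloc: returning -1 makes destruction fail
    int destructor(void *ptr);
    // our copy: destructors can never fail
    void destructor(void *ptr);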
Allocate textures big enough to include the bottom/right borders (so the
chroma texture sizes are rounded up instead of down). Make the texture
large enough to include the additional luma border. Conceptually, we
pretend that the video frame is fully aligned, and then crop away the
unwanted borders. Filtering (even just bilinear) will access the
borders anyway, so it's possible that we might need to switch to
"harder" cropping instead, but at least pixels not close to the
border should be displayed correctly now.
Add a comment to mp_image.c about this luma border. These semantics are
kind of subtle, and the image allocation code handles this in a subtle
way too, so it's better to document this explicitly. The libavutil
image allocation code does similar things.
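Concretely, for chroma shifts xs/ys, the chroma texture sizes are
now computed by rounding up (sketch):

    int cw = (w + (1 << xs) - 1) >> xs;   // previously: w >> xs
    int ch = (h + (1 << ys) - 1) >> ys;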
This hasn't been done yet, because pthreads is still an optional
dependency, so this is a bit annoying. Now doing it anyway, because
maybe we will need this capability in the future.
We keep it as simple as possible. We (probably) don't need anything
more sophisticated, and keeping it simple avoids introducing weird
bugs. So, no atomic instructions, no fine grained locks, no cleverness.
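So the synchronization is just one lock around the reference count,
along these lines (sketch with made-up names):

    static pthread_mutex_t refcount_lock = PTHREAD_MUTEX_INITIALIZER;

    static int refcount_add(int *refcount, int v)
    {
        pthread_mutex_lock(&refcount_lock);
        int r = (*refcount += v);
        pthread_mutex_unlock(&refcount_lock);
        return r;
    }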
This change affects vf_lavfi. Until recently, libavfilter was not
colorspace aware at all. This changed with the addition of colorspace
fields to AVFrame. libavfilter's vf_scale picks them up (as of recent
ffmpeg git). Since this support is still kind of wonky and not part of
the normal format negotiation, this won't set the correct output
colorspace, though.
Not adding a separate test for HAVE_AVFRAME_COLORSPACE. This is slightly
unclean, but on the other hand adding an explicit test seems like a
waste of effort.
It appears the API requires you to cover all plane data with AVBuffers
(that is, one AVBuffer per plane in the most general case), because
certain code can make certain assumptions about this. (Insert rant
about how this is barely useful and increases complexity and potential
bugs.) I don't know any cases where the current code actually fails,
but we want to follow the API, so do it anyway.
Note that we don't really know whether or not planes are from a single
memory allocation, so we have to assume the most general case and create
an AVBuffer for each plane. We simply assume that the data is padded to
the full stride in the last image line. All these extra dummy references
are stupid, but the code might become much simpler once we only support
libavcodec versions with refcounting and can use AVFrame directly.
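So the wrapping ends up looking roughly like this (plane_h and
free_cb are hypothetical):

    for (int p = 0; p < num_planes; p++) {
        frame->buf[p] = av_buffer_create(frame->data[p],
                            frame->linesize[p] * plane_h[p],
                            free_cb, opaque, 0);
    }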
This splits the monolithic mp_image_swscale() function into a bunch of
functions and a context struct. This means it's possible to set
arbitrary parameters (e.g. even obscure ones without getting in the
way), and you don't have to create the context on every call.
This code is preparation for removing duplicated libswscale API usage
from other parts of the code.
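Usage now looks roughly like this (treat the exact names as
assumptions):

    struct mp_sws_context *sws = mp_sws_alloc(NULL);
    sws->flags = SWS_LANCZOS | SWS_ACCURATE_RND; // obscure params ok
    int r = mp_sws_scale(sws, dst, src);
    talloc_free(sws);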
Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of
the normal video format negotiation. The colorspace is passed down like
other video params with config/reconfig calls.
Forcing colorspaces (via the --colormatrix options and properties) is
handled differently too: if it's changed, completely reinit the video
chain. This is slower and requires a precise seek to the same position
to perform an update, but it's simpler and less bug-prone. Considering
switching the colorspace at runtime by user-interaction is a rather
obscure feature, this is a good change.
The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on it,
and would have to be changed to get rid of them. We'll do that later,
and convert them incrementally instead of in one go.
Note that controlling the output range now always works on VO level.
Basically, this means you can't get vf_scale to output full-range YUV
for whatever reason. If that is really wanted, it should be a vf_scale
option. The previous behavior didn't make too much sense anyway.
This commit fixes a few bugs (such as playing RGB video and converting
that to YUV with vf_scale - a recent commit broke this and forced the
VO to display YUV as RGB if possible), and might introduce some new
ones.
Some functions (avcol_spc_to_mp_csp() etc.) used libavcodec enum types
as parameters. Remove these in order to get rid of the avcodec.h
include statement. This prevents that avcodec.h is recursively
included by dozens of files. Fix mp_image.c, which used the header
without explicitly including avcodec.h.
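In practice, the prototypes change like this (sketch):

    // before: forces avcodec.h on every user of the header
    enum mp_csp avcol_spc_to_mp_csp(enum AVColorSpace colorspace);
    // after: plain int; the caller passes the lavc enum value
    enum mp_csp avcol_spc_to_mp_csp(int avcolorspace);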
Use the video decoder chroma location flags and render chroma locations
other than centered. Until now, we've always used the intuitive and
obvious centered chroma location, but H.264 uses something else.
FFmpeg provides a small overview in libavcodec/avcodec.h:
-----------
/**
 *  X   X      3 4 X      X are luma samples,
 *             1 2        1-6 are possible chroma positions
 *  X   X      5 6 X      0 is undefined/unknown position
 */
enum AVChromaLocation {
    AVCHROMA_LOC_UNSPECIFIED = 0,
    AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
    AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
    AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
    AVCHROMA_LOC_TOP         = 4,
    AVCHROMA_LOC_BOTTOMLEFT  = 5,
    AVCHROMA_LOC_BOTTOM      = 6,
    AVCHROMA_LOC_NB          , ///< Not part of ABI
};
-----------
The visual difference is literally minimal, but since videophiles
apparently consider this detail as quality mark of a video renderer,
support it anyway. We don't bother with chroma locations other than
centered and left, though.
Not sure about correctness, but it's probably ok.
The filter chain and the video outputs have config() functions. They are
strictly limited to transferring the video size and format. Other
parameters (like color levels) have to be transferred separately.
Improve upon this by introducing a separate set of reconfig() functions,
which use mp_image_params to carry format parameters. This struct
contains all image format related parameters from config(), plus
additional parameters such as colorspace.
Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
an example/test case, but vo_opengl will need it later.
The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
information is now handed to the VOs via reconfig(). The getter,
VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
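A rough sketch of the new entry point (simplified; the actual
signature may differ):

    int (*reconfig)(struct vf_instance *vf,
                    struct mp_image_params *params, int flags);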
Normally, we assume that IMGFMT_PAL8 always has a palette allocated
in plane 1. But there may be corner cases in ffmpeg where it doesn't
(namely pseudo-pal stuff).
vo_vdpau actually reads past the image allocation when displaying a
non-mod 2 420p image. The vdpau API specifies that VdpVideoSurfacePutBitsYCbCr()
requires a height that is a multiple of 4, and surface allocations are
automatically rounded.
So allocate video images with rounded height. libavutil does the same,
so images coming directly from the decoder or from libavfilter are no
problem. (libavutil does this alignment explicitly, not just because the
decoded image size is aligned to macroblocks.)
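That is, the allocated height is padded before allocating (sketch):

    int alloc_h = (h + 3) & ~3;   // round up to a multiple of 4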
This only matters for those who want to use vf_pp. The old API is marked
as deprecated, and doesn't work on Libav. It was broken on FFmpeg, but
has recently started working again - the fields in question were not
un-deprecated though. Instead you're supposed to use a new API, which
does
exactly the same thing (what...?).
Also don't pass the QP table with mp_image_copy_attributes() - it
probably does more harm than good.
By the way, with -vf=dlopen=TOOLS/vf_dlopen/showqscale.so, it appears
the table as output by recent FFmpeg is offset by 1 macroblock in X
direction and 2 macroblocks in Y direction, which most likely will
interfere with normal vf_pp operation. However, this is not my problem.
The only real reason for this commit is that we can finally get rid of
all libav* related deprecation warnings. (Though they are constantly
deprecating APIs, so this will not last long.)
This field contained the "average" bit depth per pixel. It serves no
purpose anymore. Remove it.
Only vo_opengl_old still uses this in order to allocate a buffer that is
shared between all planes.
This did random things with some image formats, including 10 bit
formats. Fixes the mp_image_clear() function too.
This still has some caveats:
- doesn't clear alpha to opaque (too hard with packed RGB, and is rarely
needed)
- sets luma to 0 for mpeg-range YUV, instead of the lowest possible
value (e.g. 16 for 8 bit)
Remove the strange things the old mp_image_setfmt() code did to the
image format parameters. This includes setting chroma shift to 31 for
gray (Y8) formats and more.
Y8 + vo_opengl_old didn't actually work for unknown reasons (regression
in this branch). Fix this. The difference is that Y8 is now interpreted
as gray RGB (LUMINANCE texture) instead of involving YUV (and levels)
conversion.
Get rid of distinguishing RGB and BGR. There wasn't really any good
reason for this.
Remove mp_get_chroma_shift() and IMGFMT_IS_YUVP16*(). mp_imgfmt_desc
gives the same information and more.
These functions weren't specific to video filters and were misplaced
in vf.c. Move them to mp_image.c.
Fix the big endian test in vf_mpi_clear/mp_image_clear, which has been
messed up in 74df1d.
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats
are supposed to be palettized:
IMGFMT_BGR8
IMGFMT_RGB8
IMGFMT_BGR4_CHAR
IMGFMT_RGB4_CHAR
IMGFMT_BGR4
IMGFMT_RGB4
Of these, only BGR8 and RGB8 are actually treated as palettized in some
way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB
formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy
hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale
does not support AV_PIX_FMT_PAL8 output in the first place.)
Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps
to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
without any palette use. Enabling it in vo_x11 or using it as vf_scale
input seems to give correct results.
mp_image_alloc_planes() allocated images with minimal stride, even if
the resulting stride was unaligned. It was the responsibility of
vf_get_image() to set an image's width to something larger than
required to get an aligned stride, and then crop it. Always allocate
with aligned strides instead.
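In other words, the allocator itself now does something like this
(sketch; 16 byte alignment is just an example):

    stride = (stride + 15) & ~15;   // align each plane's stride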
Get rid of IMGFMT_IF09 special handling. This format is not used
anymore. (IF09 has 4x4 chroma sub-sampling, and that is what it was
mainly used for - this is still supported.) Get rid of swapped chroma
plane allocation. This is not used anywhere, and VOs like vo_xv,
vo_direct3d and vo_sdl do their own swapping.
Always round chroma width/height up instead of down. Consider 4:2:0 and
an uneven image size. For luma, the size was left uneven, and the chroma
size was rounded down. This doesn't make sense, because chroma would be
missing for the bottom/right border.
Remove mp_image_new_empty() and mp_image_alloc_planes(), they were not
used anymore, except in draw_bmp.c. (It's still allowed to setup
mp_images manually, you just can't allocate image data with them
anymore - this is also done in draw_bmp.c.)
Replace the internal pixel format stuff with code that queries the
libavutil list of pixel format descriptors.
Trying to map IMGFMT_IS_RGB() etc. turned out extremely hacky.
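For example, instead of consulting hand-maintained tables, the
information now comes from libavutil (sketch):

    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(AV_PIX_FMT_YUV420P);
    // d->nb_components, d->log2_chroma_w/h and d->flags provide what
    // mp_get_chroma_shift() and the IMGFMT_IS_*() macros used to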
Remove mp_image.width/height. The w/h members are the ones to use.
width/height were used internally by vf_get_image(), and sometimes for
other purposes.
Remove some image flags, most of which are now useless or completely
unused. This includes VFCAP_ACCEPT_STRIDE: the vf_expand insertion in
vf.c does nothing.
Remove some other unused mp_image fields.
Some rather messy changes in vo_opengl[_old] to get rid of legacy
mp_image flags and fields. This is left from when vo_gl supported DR.
Setting the size of a mp_image must be done with mp_image_set_size()
now. Do this to guarantee that the redundant fields (like chroma_width)
are updated consistently. Replacing the redundant fields by function
calls would probably be better, but there are too many uses of them,
and it would be a bit less convenient.
Most code actually called mp_image_setfmt(), which did this as well.
This commit just makes things a bit more explicit.
Warning: the video filter chain still sets up mp_images manually,
and vf_get_image() is not updated.
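I.e. instead of assigning mpi->w/h and the redundant fields by hand:

    mp_image_set_size(mpi, w, h);  // also updates chroma_width etc.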
Based on a patch by qyot27. Add the missing parts in mp_get_chroma_shift(),
which allow allocation of such images, and which make vo_opengl
automatically accept the new formats. Change the IMGFMT_IS_YUVP16_LE/BE
macros to properly report IMGFMT_444P14 as supported: this pixel format
has the highest numerical bit width identifier (0x55), which is not
covered by the mask ~0xfc. Remove 1 bit from the mask (makes it 0xf8) so
that IMGFMT_IS_YUVP16(IMGFMT_444P14) is 1. This is slightly risky, as
the organization of the image format IDs (actually FourCCs + mplayer
internal IDs) is messy at best, but it should be ok.
By "design", mplayer normally allocates aligned images only inside the
filter chain, via the vf_get_image() function. This function pads the
width of the requested image if a stride is allowed, and sets that new
width before calling mp_image_alloc_planes().
However, newer code wants aligned images as well (basically to satisfy
libswscale). This affects all uses of alloc_mpi(). To get aligned
strides, simply change alloc_mpi() to request an aligned width.
Remove the old hack in mp_image_alloc_planes(), which special cases some
image formats to be allocated with aligned strides.
This is a temporary hack until mp_image_alloc_planes() is revised.
As pointed out in commit ed01df, the quality loss due to frequent
conversion between RGB and YUV is too much when drawing OSD and
subtitles.
Fix this by staying in the same colorspace when drawing subtitles.
Render directly to RGB, without converting to YUV first.
The bad thing about packed RGB is that there are many pixel formats,
which would all require special code for blending. It's also completely
incompatible to planar YUV. Use planar RGB instead, which allows us to
reuse all code originally written for planar YUV. The only thing that
needs to be changed is the color conversion in the libass case. (In
exchange for simpler code, the image has to be copied, but this is
still much better than converting to YUV.)
Unfortunately, libswscale doesn't support planar RGB output. Add a hack
to sws_utils.c to handle conversion to planar RGB. In the common case,
when converting 32 bit per pixel RGB, calling swscale can be avoided
entirely.
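The no-swscale case boils down to unpacking the packed pixels into
the three planes, roughly like this (sketch; the BGRA component
order is an assumption):

    for (int y = 0; y < h; y++) {
        const uint8_t *s = src + y * src_stride;
        uint8_t *b = dst_b + y * dst_stride;
        uint8_t *g = dst_g + y * dst_stride;
        uint8_t *r = dst_r + y * dst_stride;
        for (int x = 0; x < w; x++) {
            b[x] = s[4 * x + 0];
            g[x] = s[4 * x + 1];
            r[x] = s[4 * x + 2];
        }
    }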
The change in mp_image.c is needed to allocate GBRP images correctly.
(The issue with vo_x11 could be easily solved by always backing up the
same bounding box as the bitmap drawing RGB<->YUV conversion does, but
this commit is probably the better fix.)
This pixel format is sometimes used with yuv4mpeg.
vo_direct3d used its own IMGFMT_Y16 internally for some reason.
vo_opengl, vo_opengl_old, and vo_direct3d should be able to display
this pixel format natively.
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.
Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux
Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode
libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.
Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.
sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).
Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.