MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
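For reference, the two notions are related by DAR = SAR * (width / height).
A minimal sketch of that relation in C, with hypothetical names (this is
not mpv's actual API):

    struct ratio { int num, den; };

    // DAR = SAR * (w / h), kept as an integer fraction; callers may want
    // to reduce the result by its GCD.
    static struct ratio dar_from_sar(struct ratio sar, int w, int h)
    {
        return (struct ratio){ sar.num * w, sar.den * h };
    }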
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
The vf_format suboption is replaced with --video-output-levels (a global
option and property). In particular, the parameter is removed from
mp_image_params. The mechanism is moved to the "video equalizer", which
also handles common video output customization like brightness and
contrast controls.
The new code is slightly cleaner, and a top-level option is slightly
more user-friendly than a vf_format sub-option.
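As a usage example (the exact choice names here are assumptions and may
differ):

    mpv --video-output-levels=limited file.mkv   # force limited-range (TV) RGB output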
Once again (there have been many such times), libswscale ignores
brightness/saturation/contrast settings in seemingly arbitrary cases.
Looking at MPlayer code, it appears the return value of
sws_setColorspaceDetails() signals if changing these settings is
supported at all.
(Nevermind that supporting this feature has almost 0 value, and
obviously eats maintenance time.)
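A minimal sketch of checking that return value (a hypothetical helper,
not mpv's actual code; it relies only on the public libswscale API):

    #include <stdbool.h>
    #include <libswscale/swscale.h>

    static bool sws_equalizer_supported(struct SwsContext *sws)
    {
        int *inv_table, *table;
        int src_range, dst_range, brightness, contrast, saturation;
        if (sws_getColorspaceDetails(sws, &inv_table, &src_range, &table,
                                     &dst_range, &brightness, &contrast,
                                     &saturation) < 0)
            return false;
        // A negative return value means these settings are not supported
        // for the current conversion (e.g. RGB->RGB).
        return sws_setColorspaceDetails(sws, inv_table, src_range, table,
                                        dst_range, brightness, contrast,
                                        saturation) >= 0;
    }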
The intention is to make it easier to test vo_opengl with high bit
depth PNGs. This throws libswscale completely out of the loop, which before
was needed in order to convert from big endian to little endian.
Also apply a minimal cleanup to fmt-conversion.c (unrelated).
Additionally to removing the global variables, this makes the options
more uniform. --ssf-... becomes --sws-..., and --sws becomes
--sws-scaler. For --sws-scaler, use choices instead of magic integer
values.
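A hypothetical before/after (the old numeric value and the new choice
name are assumptions, shown only to illustrate the idea):

    mpv --sws=9 file.mkv                # old: magic integer
    mpv --sws-scaler=lanczos file.mkv   # new: named choice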
We needed this because the OSD rendering path used GBRP for RGB
rendering, and not all swscale versions supported this conversion. But
recently we've dropped support for very old ffmpeg/libav versions, so
this isn't needed anymore.
This requires the caller to provide a mp_log in order to see error
messages. Unfortunately we don't do this in most places, but I guess we
have to live with it.
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in
often-used header files. Try to reduce this.
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.
Mostly untested; it does compile with Libav 9.9.
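To illustrate the renames listed above (standard FFmpeg/Libav public API,
shown as a before/after sketch):

    #include <libavutil/pixdesc.h>

    // Before (deprecated names):
    //   enum PixelFormat fmt = PIX_FMT_YUV420P;
    //   const AVPixFmtDescriptor *d = &av_pix_fmt_descriptors[fmt];

    // After:
    enum AVPixelFormat fmt = AV_PIX_FMT_YUV420P;
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);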
Change talloc destructors so that they can never signal failure, and
don't return a status code. This makes our talloc copy even more
incompatible with upstream talloc, but on the other hand this is
preparation for getting rid of talloc entirely.
(The talloc replacement in the next commit won't allow the talloc_free
equivalent to fail, and the destructor return value would be useless.
But I don't want to change any mpv code either; the idea is that the
talloc replacement commit can be reverted for some time in order to
test whether the talloc replacement introduced a regression.)
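A minimal sketch of the destructor signature change (hypothetical
example; the "before" shape follows upstream talloc's documented API):

    // Before (upstream-style): returning -1 aborts the talloc_free() call.
    static int old_destructor(void *ptr)
    {
        /* cleanup ... */
        return 0;
    }

    // After: destructors cannot fail; talloc_free() always succeeds.
    static void new_destructor(void *ptr)
    {
        /* cleanup only */
    }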
Until now, video output levels (obscure feature, like using TV screens
that require RGB output in limited range, similar to YUV) still required
handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new
mp_image_params code. This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE
is not needed at all anymore in VOs that use the reconfig callback. The
result of VOCTRL_GET_YUV_COLORSPACE is now used only for the
colormatrix related properties (basically, for display on OSD). For
other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config
instead of twice.
This splits the monolithic mp_image_swscale() function into a bunch of
functions and a context struct. This means it's possible to set
arbitrary parameters (even obscure ones) without them getting in the
way, and the context doesn't have to be recreated on every call.
This code is preparation for removing duplicated libswscale API usage
from other parts of the code.
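An illustrative sketch of the context-based pattern (the names are
assumptions and may not match the actual sws_utils.c API exactly;
dst_image/src_image are pre-allocated mp_image pointers):

    struct mp_sws_context *ctx = mp_sws_alloc(NULL);
    ctx->flags = SWS_LANCZOS;                 // arbitrary parameters set up front
    mp_sws_scale(ctx, dst_image, src_image);  // the context is reused across calls
    talloc_free(ctx);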
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats
are supposed to be palettized:
IMGFMT_BGR8
IMGFMT_RGB8
IMGFMT_BGR4_CHAR
IMGFMT_RGB4_CHAR
IMGFMT_BGR4
IMGFMT_RGB4
Of these, only BGR8 and RGB8 are actually treated as palettized in some
way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB
formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy
hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale
does not support AV_PIX_FMT_PAL8 output in the first place.)
Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps
to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
without any palette use. Enabling it in vo_x11 or using it as vf_scale
input seems to give correct results.
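The resulting mapping, as a sketch (the actual table lives in
fmt-conversion.c):

    {IMGFMT_PAL8, AV_PIX_FMT_PAL8},   // the only truly palettized format
    {IMGFMT_BGR8, AV_PIX_FMT_RGB8},   // packed 8 bpp RGB, no palette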
Apparently the -spugauss option was popular. The code originally
implementing this is gone (scaler stuff in spudec.c). Reimplement it
using libswscale to scale and blur image subtitles if the --sub-gauss
option is set.
The code does some rather lazy padding to allow the blur to spread
pixels past the original image bounding box. (This problem exists with
normal bilinear scaling too, but is barely noticeable.)
Technically, this doesn't just blur subtitles, but anything RGBA (or
indexed) that enters the OSD rendering path. But only image subtitles
produce these OSD formats currently, so no explicit check is done to
prevent blurring in other cases.
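A minimal sketch of the libswscale mechanism used for the blur, assuming
a packed 8-bit BGRA image (hypothetical helper, not the actual mpv code):

    #include <stdint.h>
    #include <libavutil/pixfmt.h>
    #include <libswscale/swscale.h>

    // Blur a BGRA image by "scaling" it to the same size with a gaussian
    // source filter. The bounding box padding is omitted here.
    static int blur_bgra(uint8_t *dst, int dst_stride,
                         const uint8_t *src, int src_stride,
                         int w, int h, float gauss)
    {
        SwsFilter *filter = sws_getDefaultFilter(gauss, gauss, 0, 0, 0, 0, 0);
        struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_BGRA,
                                                w, h, AV_PIX_FMT_BGRA,
                                                SWS_GAUSS, filter, NULL, NULL);
        if (!sws) {
            sws_freeFilter(filter);
            return -1;
        }
        const uint8_t *src_p[4] = {src};
        uint8_t *dst_p[4] = {dst};
        int src_s[4] = {src_stride}, dst_s[4] = {dst_stride};
        sws_scale(sws, src_p, src_s, 0, h, dst_p, dst_s);
        sws_freeContext(sws);
        sws_freeFilter(filter);
        return 0;
    }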
sws_getContextFromCmdLine_hq() was used by the screenshot code, which
now uses mp_image_swscale().
Also move the mp_sws_set_colorspace() declaration from sws_utils.h to
vf_scale.c.
As pointed out in commit ed01df, the quality loss due to frequent
conversion between RGB and YUV is too much when drawing OSD and
subtitles.
Fix this by staying in the same colorspace when drawing subtitles.
Render directly to RGB, without converting to YUV first.
The bad thing about packed RGB is that there are many pixel formats,
which would all require special code for blending. It's also completely
incompatible with planar YUV. Use planar RGB instead, which allows us to
reuse all code originally written for planar YUV. The only thing that
needs to be changed is the color conversion in the libass case. (In
exchange for simpler code, the image has to be copied, but this is
still much better than converting to YUV.)
Unfortunately, libswscale doesn't support planar RGB output. Add a hack
to sws_utils.c to handle conversion to planar RGB. In the common case,
when converting 32 bit per pixel RGB, calling swscale can be avoided
entirely.
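A sketch of that common case: unpacking packed 8-bit BGRA into separate
planes without calling swscale (hypothetical helper; mapping the planes
to GBRP order and handling alpha is left to the caller):

    #include <stdint.h>

    static void unpack_bgra(uint8_t *b, uint8_t *g, uint8_t *r, uint8_t *a,
                            int plane_stride,
                            const uint8_t *src, int src_stride, int w, int h)
    {
        for (int y = 0; y < h; y++) {
            const uint8_t *s = src + y * src_stride;
            for (int x = 0; x < w; x++) {
                b[y * plane_stride + x] = s[4 * x + 0];
                g[y * plane_stride + x] = s[4 * x + 1];
                r[y * plane_stride + x] = s[4 * x + 2];
                a[y * plane_stride + x] = s[4 * x + 3];
            }
        }
    }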
The change in mp_image.c is needed to allocate GBRP images correctly.
(The issue with vo_x11 could be easily solved by always backing up the
same bounding box as the bitmap drawing RGB<->YUV conversion does, but
this commit is probably the better fix.)
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.
The two commits are separate, because git is bad at tracking renames
and content changes at the same time.
Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.
Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux
Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode
libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.
Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.
sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).
Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.