I doubt anyone needs to adjust hue on a frequent basis, and gamma is
much more useful.
Suggestions for more radical changes of key bindings are welcome
(there's a lot of useless crap mapped).
VFCAP_OSD was used to determine at runtime whether the VO supports OSD
rendering. This was mostly unused. vo_direct3d had an option to disable
OSD (it was supposed to allow forcing auto-insertion of vf_ass, but we
removed that anyway). vo_opengl_old could disable OSD rendering when a
very old OpenGL version was detected, and had an option to explicitly
disable it as well.
Remove VFCAP_OSD from everything (and some associated logic). Now the
vo_driver.draw_osd callback can be set to NULL to indicate missing OSD
support (important so that vo_null etc. don't single-step on OSD
redraw), and if OSD support depends on runtime support, the VO's
draw_osd should just do nothing if OSD is not available.
Also, do not access vo->want_redraw directly. Change the want_redraw
reset logic for this purpose, too. (Probably unneeded, vo_flip_page
resets it already.)
All Wayland-specific routines are placed in wayland_common.
This makes it easier to write other video outputs.
The EGL-specific parts, as well as OpenGL context creation, are in gl_common.
This backend works for:
* opengl-old
* opengl
* opengl-hq
To use it, just select the wayland backend explicitly:
--vo=opengl:backend=wayland
or disable the x11 build.
Don't forget to set EGL_PLATFORM to wayland.
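A typical invocation (the file name is of course just a placeholder) would
then look something like this:
  EGL_PLATFORM=wayland mpv --vo=opengl:backend=wayland video.mkv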
Co-Author: Scott Moreau
(Sorry I lost the old commit history due to the file structure changes)
This allowed making the player switch the monitor video mode when
creating the video window. This was a questionable feature, and with
today's LCD screens certainly not useful anymore. Switching to a random
video mode (going by video width/height) doesn't sound too useful
either.
I'm not sure about the win32 implementation, but the X part had several
bugs. Even in mplayer-svn (where x11_common.c hasn't been receiving any
larger changes for a long time), this code is buggy and doesn't do the
right thing anyway. (And what the hell _did_ it do when using multiple
physical monitors?)
If you really want this, write a shell script that calls xrandr before
and after calling mpv.
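A minimal sketch of such a script (the output name and modes are just
examples; check the output of plain "xrandr" for the real values):
  #!/bin/sh
  # switch the output to the desired mode, run mpv, then restore
  xrandr --output HDMI-0 --mode 1280x720
  mpv "$@"
  xrandr --output HDMI-0 --mode 1920x1080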
vo_sdl still can do mode switching, because SDL has native support for
it, and using it is trivial. Add a new sub-option for this.
The --wid switch (for embedding the player into other applications)
didn't create a new window, and instead tried to use the window that
was passed via --wid directly. This made the code more complex, caused
strange X errors (mpv and host application fighting for exclusive X
resources), and actually could cause issues if the --wid window wasn't
created with the X Visual needed for OpenGL.
Always create a window instead. This makes it always possible to embed
the player into foreign windows. --geometry doesn't work anymore - the
controlling application should always create a new window to place the
player inside it, and can control the video window by moving and
resizing this window.
w32_common.c actually did this right, and always created a new window.
You can just use --wid=0 if you really want this.
This only worked/works for X11, and even then it might interact badly
with most desktop environments. All the option did was set --wid to
0, and the property did nothing.
Use the option parser instead of sscanf. Remove the parameter changing
the field dominance (it has been marked deprecated for ages). Add a new
suboption "enabled", which can be used to disable the filter by default,
until it's enabled at runtime:
mpv -vf yadif=enabled=no
For all suboptions, "flat" options were available by separating the
parent option and the sub option with ":", e.g. "--rawvideo:w=123". Drop
this syntax and use "-" as separator. This means even suboptions are
available as normal options now, e.g. "--rawvideo-w=123". The old syntax
doesn't work anymore.
Note that this is completely separate from actual suboptions. For
example, "-rawvideo w=123:h=123" still works. (Not that this syntax is
worth supporting, but it's needed anyway, for other things like vf
and vo suboptions.)
As a consequence of this change, we also have to add new "no-" prefixed
options for flag suboptions, so that "--no-input-default-bindings"
works. ("--input-no-default-bindings" also works as a consequence of
allowing "-input no-default-bindings" - they are handled by the same
underlying option.)
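To spell out what the above means in practice, all of the following
should now be accepted:
  --rawvideo-w=123              (new flat syntax with "-" separator)
  -rawvideo w=123:h=123         (actual suboptions, still supported)
  --no-input-default-bindings
  --input-no-default-bindings
  -input no-default-bindings    (all three toggle the same underlying option)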
For --input, always use the full syntax in the manpage. There exist
other suboption groups besides --input (like --tv, --rawvideo, etc.), but
since they might be handled differently in the future, don't touch these yet.
M_OPT_PREFIXED becomes the default, so remove it. As a minor unrelated
cleanup, get rid of M_OPT_MERGE too and use the OPT_SUBSTRUCT() macro in
some places.
Unrelated: remove the duplicated --tv:buffersize option, fix a typo in
changes.rst.
`--fs-screen` allows deciding which display to go fullscreen on. The
semantics of `--screen` changed: it is now only used to select the display
for the windowed mode when starting the application.
This is useful for people using mpv with an external TV. They will start
windowed on their laptop's screen and switch to fullscreen on the TV.
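For example (the screen numbers are system dependent and merely
illustrative):
  mpv --screen=0 --fs-screen=1 video.mkv
starts windowed on screen 0 and goes fullscreen on screen 1.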
@wm4 worked on the x11 and w32 parts of the code. Everything is squashed
into one commit for the sake of a clear history.
Being able to insert newline characters ("\n") is useful for
--osd-status-msg, and possibly also for anything that prints to the
terminal. Especially --term-osd-esc looks relatively useless without
being able to specify escapes.
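For example, something like the following (the property names are merely
illustrative) becomes possible:
  --osd-status-msg='${time-pos}\n${filename}'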
Maybe parsing escapes should happen during command line / config parsing
instead (for all options).
The previous name of this filter was misleading, because it doesn’t actually
normalize volume levels. What it does is closer to performing low-quality
dynamic range compression, hence it is now called af_drc.
Use codec names instead of FourCCs to identify codecs. Rewrite how
codecs are selected and initialized. Now each decoder module exports a list
of decoders (and the codecs they support) via add_decoders(). The order
matters: the first decoder found for a given codec is preferred over
the other decoders. E.g. all ad_mpg123 decoders are preferred over
ad_lavc, because ad_mpg123 comes first in the mpcodecs_ad_drivers array.
Likewise, decoders within ad_lavc that are enumerated first by
libavcodec (using av_codec_next()) are preferred. (This is actually
critical to select h264 software decoding by default instead of vdpau.
libavcodec and ffmpeg/avconv use the same method to select decoders by
default, so we hope this is sane.)
The codec names follow libavcodec's codec names as defined by
AVCodecDescriptor.name (see libavcodec/codec_desc.c). Some decoders
have names different from the canonical codec name. The AVCodecDescriptor
API is relatively new, so we need a compatibility layer for older
libavcodec versions for codec names that are referenced internally,
and which are different from the decoder name. (Add a configure check
for that, because checking versions is getting way too messy.)
demux/codec_tags.c is generated from the former codecs.conf (minus
"special" decoders like vdpau, and excluding the mappings that are the
same as the mappings in libavformat's exported RIFF tables). It contains
all the mappings from FourCCs to codec name. This is needed for
demux_mkv, demux_mpg, demux_avi and demux_asf. demux_lavf will set the
codec as determined by libavformat, while the other demuxers have to do
this on their own, using the mp_set_audio/video_codec_from_tag()
functions. Note that the sh_audio/video->format members don't uniquely
identify the codec anymore, and sh->codec takes over this role.
Replace the --ac/--vc/--afm/--vfm switches with new --vd/--ad options,
which cover the functionality of the removed switches.
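As a rough sketch of the new usage (the exact value syntax is an
assumption here, not spelled out above):
  mpv --vd=lavc:h264 file.mkv     (instead of the old --vc=...)
  mpv --ad=lavc:mp3 file.mkv      (instead of the old --ac=...)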
Note: there's no CODECS_FLAG_FLIP flag anymore. This means some obscure
container/video combinations (e.g. the sample Film_200_zygo_pro.mov)
are played flipped. ffplay/avplay doesn't handle this properly either,
so we don't care and blame ffmpeg/libav instead.
Simplify --no-config and make it a normal flag option that doesn't take
an argument anymore. You can get the same behavior by using --no-config
and then --include to explicitly load a certain config file.
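For example (the path is just a placeholder):
  mpv --no-config --include=/path/to/alternative.conf file.mkv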
Make --no-config work for input.conf as well. Make it so that
--input:conf=file still works in this case. As a technically unrelated
change, the file argument now works as one would expect, instead of
being interpreted relative to "~/.mpv/". This makes for simpler code and
easier to understand option semantics. We can also print better error
messages.
There were two option syntax variations:
"old": -opt value
"new": --opt=value
"-opt=value" was invalid, and "--opt value" meant "--opt=" followed by
a separate option "value" (i.e. interpreted as filename). There isn't
really any reason to do this. The "old" syntax used to be ambiguous
(you had to call the option parser to know whether the following
argument is an option value or a new option), but that has been removed.
Further, using "=" in the option string is always unambiguous.
Since the distinction between the two option variants is confusing,
just remove the difference and allow "--opt value" and "-opt=value".
To make this easier, do some other cleanups as well (e.g. avoid having
to do a manual lookup of the option just to check for M_OPT_PRE_PARSE,
which somehow ended up with finally getting rid of the m_config.mode
member).
Error reporting is still a mess, and we opt for reporting too many
rather than too few errors to the user.
There shouldn't be many user-visible changes. The --framedrop and
--term-osd options now always require parameters.
The --mute option is intentionally made ambiguous: it works like a flag
option, but a value can be passed to it explicitly ("--mute=auto"). If
the interpretation of the option is ambiguous (like "--mute auto"), the
second string is interpreted as separate option or filename. (Normal
flag options are actually ambiguous in this way too.)
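To spell out the accepted forms after this change, using --mute from
above as the example:
  --mute=auto  and  -mute=auto    (explicit value, unambiguous)
  --mute  and  -mute              (used as a plain flag)
  --mute auto                     ("auto" is parsed as a separate option
                                   or filename, not as the value)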
This could write .edl files in MPlayer's format. Support for playing
these files was removed from mplayer2 quite a while ago. (mplayer2
can play its own, "new" .edl format, but does not support writing it.)
Since this is a rather obscure functionality, and it's not really clear
how it should behave (e.g. what should it do if a new file is played),
and wasn't all that great to begin with (what if you made a mistake?
the "edl_mark" command sucks for editing), get rid of it.
Suggestions on how to reimplement this in a nicer way are welcome. If it's
just about retrieving timecodes, this in input.conf will do:
KEY print_text "position: ${=time-pos}"
Simply removed the assumption that the user is using `mpv-build`. Now provide 3
lines of shell that can be copy-pasted by the user for instant gratification
(and independent of $PWD).
Remove screenshot_force and associated logic. Always try to use the
screenshot video filter before trying to take screenshots with the VO,
which means that --vf=screenshot now takes the role of --vf=screenshot_force.
(To make this clear, not adding a video filter is still the recommended
way to take screenshots; we just change how VF screenshots are forced.)
Preferring VO over VF and having --vf=screenshot_force used to make
sense when not all VOs supported screenshots, and some VOs had somewhat
broken screenshots (like vo_xv taking screenshots with OSD in it). But
all these issues are fixed now, so just get rid of the cruft.
Dithering was disabled if the input bit depth was not larger than the
output bit depth of the screen framebuffer. But since scaling, RGB
conversion, and other filters change the number of significant bits
anyway, dithering could still benefit image quality even in these
cases. Always do dithering, unless dithering is completely disabled.
The original intention of this mechanism was not to change the image
needlessly when playing video that matches the native bit depth of the
screen.
The "http:" protocol has been switched to use ffmpeg's HTTP
implementation some time ago. One problem with this was that many HTTP
specific options stopped working, because they were obviously
implemented for the internal HTTP implementation only.
Add the missing things. Note that many options will work for ffmpeg
only, as Libav's HTTP implementation is missing these. They will
silently be ignored on Libav.
Some options we can't fix:
--ipv4-only-proxy, --prefer-ipv4, --prefer-ipv6
As far as I can see, not even libavformat internals distinguish
between ipv4 and ipv6.
--user, --passwd
ffmpeg probably supports specifying these in the URL directly.
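If that is the case, something like the following (untested, purely
illustrative) should replace --user/--passwd:
  mpv http://user:password@example.com/stream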
-x/-y were rather useless and obscure. The only use I can see is
forcing a specific aspect ratio without having to calculate the aspect
ratio float value (although --aspect takes values of the form w:h).
This can also be done with --geometry and --no-keepaspect. There was
also a comment that -x/-y is useful for -vm, although I don't see how
this is useful as it still messes up aspect ratio.
-xy is mostly obsolete. It does two things: a) set the window width to
a pixel value, b) scale the window size by a factor. a) is already done
by --autofit (--autofit=num does exactly the same thing as --xy=num, if
num >= 8). b) is not all that useful, so we just drop that
functionality.
--autofit=WxH sets the window size to a maximum width and/or height,
without changing the window's aspect ratio.
--autofit-larger=WxH does the same, but only if the video size is
actually larger than the window size that would result when using
the --autofit=WxH option with the same arguments.
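Some examples of the intended usage (the numbers are arbitrary):
  --autofit=1000              (limit the window width to 1000 pixels,
                               like the old --xy=1000)
  --autofit=1280x720          (fit the window into 1280x720, keeping aspect)
  --autofit-larger=1280x720   (same, but only if the video is larger)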
Now all numbers in the --geometry specification can take percentages.
Rewrite the parsing of --geometry, because adjusting the sscanf() mess
would require adding all the combinations of using and not using %. As
a side effect, using % and pixel values can be freely mixed.
Keep the aspect if only one of width or height is set. This is more
useful in general.
Note: there is one semantic change: --geometry=num used to mean setting
the window X position, but now it means setting the window width.
Apparently this was a mplayer-specific feature (not part of standard X
geometry specifications), and it doesn't look like an overly useful
feature, so we are fine with breaking it.
In general, the new parsing should still adhere to standard X geometry
specification (as used by XParseGeometry()).
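A few examples of what the new parser is intended to accept (the values
are arbitrary):
  --geometry=50%x50%           (half the screen size in both dimensions)
  --geometry=1280              (window width; this used to set the X position)
  --geometry=50%x480+10+20     (percentages and pixel values mixed, plus
                                a position offset)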
The video filter chain traditionally used FourCCs for pixel formats.
This was recently changed, but some parts of the manpage were not
updated properly. Now there are two types of options: some which take
a FourCC (as used with raw video formats), and some which take a
symbolic format identifier (as used in the video filter chain).
I realize that it's harder to specify FourCC for RGB formats now (TV
stuff may need RGB). They use non-printable characters as part of the
FourCC, and have to be specified as hexadecimal numbers (instead of
a symbolic identifier). Because I can't be bothered to find out what
these numbers are for the respective formats, just remove the old
pseudo-FourCCs from the documentation.
This printed per-frame statistics into a file, like bitrate or frame
type. Not very useful and accesses obscure AVCodecContext fields
(danger of deprecation/breakage), so get rid of it.
This was a "broken misfeature" according to Libav developers. It wasn't
implemented for modern codecs (like h264), and has been removed from
Libav a while ago (the AVCodecContext field has been marked as
deprecated and its value is ignored). FFmpeg still supports it, but it
isn't very useful for the aforementioned reasons.
Remove the code to enable it.
This was an awkward hack that attempted to avoid the use of 16 bit
textures, while still allowing rendering 10-16 bit YUV formats. The
idea was that even if the hardware doesn't support 16 bit textures,
an A8L8 texture could be used to convert 10 bit (etc.) to 8 bit in
the shader, instead of doing this on the CPU.
This was an experiment, disabled by default, and was (probably) rarely
used. I've never heard of this being used successfully. Remove it.
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)
Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.
Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.conf entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)
The TV code has not been tested.
Some corrective changes regarding endian and other image format flags
creep in.
Deprecate the hardware specific video codec entries (like ffh264vdpau).
Replace them with the --hwdec switch, which requests that a specific
hardware decoding API should be used. The codecs.conf entries will be
removed at a later time, but for now they are useful for testing and
compatibility.
Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.
Add a fallback if hardware decoding fails. Most hardware decoders
(including vdpau) support only a subset of h264, and having such a
fallback is supposed to enable a better user experience.
Slices allowed filtering or drawing video in horizontal bands or
blocks, making it possible to work on the video in smaller units. In theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.
In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
slice callback was disabled, because it can be called from another
thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
slices, so slices were rarely used.
- Most filters didn't actually support slices.
On the other hand, supporting slices led to code duplication and more
complex code in general. I made some experiments and didn't find any
actual measurable performance improvements when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.
The most broken thing about the slices code path is that slices can't
be queued, as is done for whole images in vo.c.
This is simpler and more useful. We could add a new switch for the old
functionality, but that would probably be more confusing than helpful.
When passing only a single file to the command line, this commit
shouldn't change behavior.
(Classic mplayer provided both features by duplicating the loop
functionality in the "playtree".)
Setting some subtitle options may lead to incorrect rendering of complex
ASS subtitle scripts, such as displaced signs or visual artifacts. The
user should be made aware that this can happen.
In theory, libass could make using some of these options relatively
safe, but it doesn't.
Note that there are potentially many more options that could in theory
break subtitle rendering, but add a warning only to the most fragile
ones.
Before this commit, the --osd-* options (like --osd-font-size etc.)
configured both the OSD and subtitle font. Make them separate, and add
--sub-text-* options (like --sub-text-size etc.). Now --osd-* affects
the OSD font only, and --sub-text-* unstyled text subtitles only.
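For example, to use different sizes for the OSD and for plain text
subtitles (the values are arbitrary):
  mpv --osd-font-size=40 --sub-text-size=55 file.mkv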
They were more or less grouped by usefulness, but since everything
else in the manpage is sorted alphabetically, it's better to be
consistent and sort these options as well.