Rename --stats to --load-stats-overlay and add an entry to options.rst
on top of the original commit.
Signed-off-by: wm4 <wm4@nowhere>
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of Java<->C++ bridging code to receive and react to the render
callback in Java. Performance also suffers with opengl-cb, due to the
overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
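For illustration, the libmpv side of such a setup might look roughly
like this (a sketch; "surface" stands for a JNI reference to the
on-screen android.view.Surface, obtained through whatever bridging code
the app already has, and error checking is omitted):

    #include <stdint.h>
    #include <mpv/client.h>

    mpv_handle *mpv = mpv_create();
    mpv_set_option_string(mpv, "vo", "gpu");
    mpv_set_option_string(mpv, "gpu-context", "android");
    // Pass the android.view.Surface as the window ID.
    int64_t wid = (int64_t)(intptr_t)surface;
    mpv_set_option(mpv, "wid", MPV_FORMAT_INT64, &wid);
    mpv_initialize(mpv);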
But --msg-level can only raise the log level used for --log-file,
because the original idea with --log-file was that it'd log verbose
messages to disk even if terminal logging is lower than -v or fully
disabled.
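Roughly, the behavior amounts to this (file names made up):

    # the log file gets verbose messages even with terminal output disabled
    mpv --really-quiet --log-file=/tmp/mpv.log video.mkv

    # --msg-level can raise (but not lower) what ends up in the log file
    mpv --msg-level=all=trace --log-file=/tmp/mpv.log video.mkv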
- Don't recommend libdvdnav, since DVD support isn't compiled by default
anymore.
- Take advantage of the new $MINGW_PACKAGE_PREFIX and $MSYSTEM_PREFIX
variables to make the build commands independent from the mingw-w64
build environment being used.
- Invoke /usr/bin/python3 directly, since I've heard some packages have
started to depend on mingw-w64 versions of Python, but our build
scripts only work with the MSYS2 version.
- Reword the MSYS2 install instructions to try to prevent common errors.
Changed the reference from --gpu-gamma to --gamma-factor,
and changed the reference from --post-shader to --glsl-shaders,
in order to reflect actual changes to the option names.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
   (that extends to user shaders as well)
3. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
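Very roughly, the flow now looks like this (a hypothetical sketch; the
exact signatures and fields in the source differ in detail):

    // Attributes are now a parameter of the draw call, not persistent
    // gl_sc state (which compute shaders had to carry around anyway).
    static const struct ra_renderpass_input attribs[] = {
        { .name = "position", .type = RA_VARTYPE_FLOAT, .dim_v = 2, .dim_m = 1 },
        { .name = "texcoord", .type = RA_VARTYPE_FLOAT, .dim_v = 2, .dim_m = 1 },
    };
    gl_sc_dispatch_draw(sc, target_fbo, attribs, 2,
                        sizeof(struct vertex), vertices, num_vertices);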
Apparently this filter is broken in a weird way, which even makes some
libavfilter functions segfault in certain conditions. Don't waste time
with it and just remove the examples.
Also adjust the "life" example description (certainly this filter is
100% worthless, but the example does demonstrate how to use source
filters without any available input).
auto-copy selects more modes than the ones listed. It will always be
outdated anyway.
The GLX vaapi backend is never selected anymore, because it sucks.
In addition to the built-in nvidia compiler, we now also support a
backend based on libshaderc. shaderc is sort of like glslang except it
has a C API and is available as a dynamic library.
The generated SPIR-V is now cached alongside the VkPipeline in the
cached_program. We use a special cache header to ensure validity of this
cache before passing it blindly to the vulkan implementation, since
passing invalid SPIR-V can cause all sorts of nasty things. It's also
designed to self-invalidate if the compiler gets better, by offering a
catch-all `int compiler_version` that implementations can use as a cache
invalidation marker.
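The header is conceptually something like this (a simplified sketch;
the actual field set in the source may differ):

    // All fields must match the current build before the cached SPIR-V
    // and pipeline data are trusted and handed to the driver.
    struct vk_cache_header {
        char magic[4];         // reject unrelated/garbage files
        int cache_version;     // bumped on incompatible layout changes
        char compiler[64];     // which SPIR-V compiler produced this
        int compiler_version;  // catch-all invalidation marker
        size_t spirv_len;      // length of the appended SPIR-V blob
        size_t pipecache_len;  // length of the VkPipelineCache blob
    };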
This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!
Current problems / limitations / improvement opportunities:
1. The swapchain/flipping code violates the vulkan spec, by assuming
that the presentation queue will be bounded (in cases where rendering
is significantly faster than vsync). But apparently, there's simply
no better way to do this right now, to the point where even the
stupid cube.c examples from LunarG etc. do it wrong.
(cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)
2. The memory allocator could be improved. (This is a universal
constant)
3. Could explore using push descriptors instead of descriptor sets,
especially since we expect to switch descriptors semi-often for some
passes (like interpolation). Probably won't make a difference, but
the synchronization overhead might be a factor. Who knows.
4. Parallelism across frames / async transfer is not well-defined, we
either need to use a better semaphore / command buffer strategy or a
resource pooling layer to safely handle cross-frame parallelism.
(That said, I gave resource pooling a try and was not happy with the
result at all - so I'm still exploring the semaphore strategy)
5. We aggressively use pipeline barriers where events would offer a much
more fine-grained synchronization mechanism. As a result of this, we
might be suffering from GPU bubbles due to too-short dependencies on
objects. (That said, I'm also exploring the use of semaphores as an
ordering tactic which would allow cross-frame time slicing in theory)
Some minor changes to the vo_gpu and infrastructure, but nothing
consequential.
NOTE: For safety, all use of asynchronous commands / multiple command
pools is currently disabled completely. There are some left-over relics
of this in the code (e.g. the distinction between dev_poll and
pool_poll), but that is kept in place mostly because this will be
re-extended in the future (vulkan rev 2).
The queue count is also currently capped to 1, because the lack of
cross-frame semaphores means we need the implicit synchronization from
the same-queue semantics to guarantee a correct result.
This never really made sense since the BT.1886 changes. It should get
*brighter* for bright rooms, not darker for dark rooms. Picked some new
values that seemed reasonable-ish.
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
This mechanism uses system() and shouldn't even exist. x11_common.c has
its own solution for the original problem (disabling Linux DE
screensavers without MPlayer/mpv having to link a dbus lib). If that is
not sufficient, you can create a simple Lua script.
Incidentally fixes #4888.
This clearly highlights all out-of-gamut/clipped pixels. (Either too
bright or too saturated)
Has some (documented) caveats. Also make TONE_MAPPING_CLIP stop actually
clamping the value range (it's unnecessary and breaks this feature).
Mouse wheel bindings have always been a cause of user confusion.
Previously, on Wayland and macOS, precise touchpads would generate AXIS
keycodes and notched mouse wheels would generate mouse button keycodes.
On Windows, both types of device would generate AXIS keycodes and on
X11, both types of device would generate mouse button keycodes. This
made it pretty difficult for users to modify their mouse-wheel bindings,
since it differed between platforms and in some cases, between devices.
To make it more confusing, the keycodes used on Windows were changed in
18a45a42d5 without a deprecation period or adequate communication to
users.
This change aims to make mouse wheel binds less confusing. Both the
mouse button and AXIS keycodes are now deprecated aliases of the new
WHEEL keycodes. This will technically break input configs on Wayland and
macOS that assign different commands to precise and non-precise scroll
events, but this is probably uncommon (if anyone does it at all) and I
think it's a fair tradeoff for finally fixing mouse wheel-related
confusion on other platforms.
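In input.conf, the new keycodes are used like any other key name
(the bindings here are just examples):

    WHEEL_UP      add volume 2
    WHEEL_DOWN    add volume -2
    WHEEL_LEFT    seek -5
    WHEEL_RIGHT   seek 5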
mpv's mouse button numbering is based on X11 button numbering, which
allows for an arbitrary number of buttons and includes mouse wheel input
as buttons 3-6. This button numbering was used throughout the codebase
and exposed in input.conf, and it was difficult to remember which
physical button each number actually referred to and which referred to
the scroll wheel.
In practice, PC mice only have between two and five buttons and one or
two scroll wheel axes, which are more or less in the same location and
have more or less the same function. This allows us to use names to
refer to the buttons instead of numbers, which makes input.conf syntax a
lot easier to remember. It also makes the syntax robust to changes in
mpv's underlying numbering. The old MOUSE_BTNx names are still
understood as deprecated aliases of the named buttons.
This changes both the input.conf syntax and the MP_MOUSE_BTNx symbols in
the codebase, since I think both would benefit from using names over
numbers, especially since some platforms don't use X11 button numbering
and handle different mouse buttons in different windowing system events.
This also makes the names shorter, since otherwise they would be pretty
long, and it removes the high-numbered MOUSE_BTNx_DBL names, since they
weren't used.
Names are the same as used in Qt:
https://doc.qt.io/qt-5/qt.html#MouseButton-enum
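For reference, bindings with the named buttons look like this (the
commands are just examples):

    MBTN_LEFT     cycle pause
    MBTN_MID      cycle mute
    MBTN_RIGHT    show-progress
    MBTN_BACK     playlist-prev
    MBTN_FORWARD  playlist-next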
This removes all GPL-only code from it, and that's the whole purpose.
Also happens to be much simpler.
The "deinterlace" option still sort of exists, but only as runtime
changeable option. The main change in behavior is that the property will
not report back the actual deint state. Or in other words, if inserting
or initializing the filter fails, the deinterlace property will still
return "yes". This is in line with most recent behavior changes to
properties and options.
Both the video equalizer command/option glue, which drives this filter,
as well as the filter itself are slightly GPL contaminated. So it goes.
After this commit, "--vf=eq" will actually use libavfilter's vf_eq (if
FFmpeg was compiled in GPL mode), but it has different options and will
not listen to the equalizer VOCTRLs.
Completely untested (rsound dev libs unavailable on my system). Trivial
enough that it's very likely that it'll just work. No port selection,
but could be added by parsing it as part of the device name.
Should fix #4714.
This adds handling of spherical video metadata: retrieving it from
demux_lavf and demux_mkv, passing it through filters, and adjusting it
with vf_format. This does not include support for rendering this type of
video.
We don't expect we need/want to support the other projection types like
cube maps, so we don't include that for now. They can be added later as
needed.
Also raise the maximum sizes of stringified image params, since they
can get really long.
Tends to be somewhat glitchy if subtitles are enabled, and you enable
and disable tracks.
On error, this will disable --lavfi-complex, which will result in
whatever behavior.
Move multiple GL-specific things from the renderer to other places like
vo_opengl.c, vo_opengl_cb.c, and ra_gl.c.
The vp_w/vp_h parameters to gl_video_resize() make no sense anymore, and
are implicitly part of struct fbodst.
Checking the main framebuffer depth is moved to vo_opengl.c. For
vo_opengl_cb.c it always assumes 8. The API user now has to override
this manually. The previous heuristic didn't make much sense anyway.
The only remaining dependency on GL is the hwdec stuff, which is harder
to change.
Further work removing GL dependencies from the actual video renderer,
and moving them into ra backends.
Use of glInvalidateFramebuffer() falls away. I'd like to keep this, but
it's better to readd it once shader runs are in ra.
This was attempted before in fc9695e63b, but it was reverted in
1b7ce759b1 because it caused conflicts with other software watching
the same keys (See #2041.) It seems like some PCs ship with OEM software
that watches the volume keys without consuming key events and this
causes them to be handled twice, once by mpv and once by the other
software.
In order to prevent conflicts like this, use the WM_APPCOMMAND message
to handle media keys. Returning TRUE from the WM_APPCOMMAND handler
should indicate to the operating system that we consumed the key event
and it should not be propagated to the shell. Also, we now only listen
for keys that are directly related to multimedia playback (e.g. the
APPCOMMAND_MEDIA_* keys.) Keys like APPCOMMAND_VOLUME_* are ignored, so
they can be handled by the shell, or by other mixer software.
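Condensed to its essence, the handling looks about like this (a sketch,
not the actual mpv window procedure):

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_APPCOMMAND) {
            switch (GET_APPCOMMAND_LPARAM(lParam)) {
            case APPCOMMAND_MEDIA_PLAY_PAUSE:
            case APPCOMMAND_MEDIA_STOP:
            case APPCOMMAND_MEDIA_NEXTTRACK:
            case APPCOMMAND_MEDIA_PREVIOUSTRACK:
                // translate to an internal key event here, then:
                return TRUE; // consumed; not propagated to the shell
            }
            // APPCOMMAND_VOLUME_* and friends fall through, so the
            // shell or mixer software can handle them.
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }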
This currently only works when using lcms-based color management
(--icc-profile-*).
In principle, we could also support using lcms even when the user has
not specified an ICC profile, by generating the profile against a fixed
reference (--target-prim/--target-trc) instead. I still might do that
some day, simply because 3dlut provides a higher quality conversion than
our simple gamut mapping does for stuff like BT.2020, and also because
it's now needed to enable embedded ICC profiles. But that would be a
separate change, so preserve the status quo for now.
(Besides, my opinion is still that you should be using an ICC profile if
you care about colors being accurate _at all_)
This broke float textures, which were actually used by some shaders.
There were probably some other bugs as well.
Lots of code can be avoided by using ra_tex_params directly, so do that.
The main change is that COMPONENT/FORMAT are replaced by a single FORMAT
directive, which takes different parameters now. Due to the mess with
16/32 bit float textures, and because we want to support other APIs than
just GL in the future, it's not really clear how this should be handled,
and the nice component/type separation makes things actually harder. So
just jump the gun and use the ra_format.name names, which were
originally meant mostly for debugging. (This is probably something that
will be regretted later.)
Still only superficially tested, but seems to work.
Fixes #4708.
Since this code was already written for HDR, and is now per-channel
(because it works better for HDR as well), we can actually reuse this to
get very high quality gamut mapping without clipping. The only required
change is to move the tone mapping from before the gamut map to after
the gamut map. Additionally, we need to also account for changes in the
signal range as a result of applying the CMS when we compute ref_peak,
which is fortunately pretty easy because we only need to consider the
case of primaries mapping to themselves.
Since `HDR` no longer really makes sense as a label, rename it to
`--tone-mapping` in general. Also fits better with
`--tone-mapping-desat` etc.
Arguably we could also rename `--hdr-compute-peak`, but that option is
basically only useful for HDR content anyway because we don't need
information about the signal range for gamut mapping.
This (finally!) gives us reasonably high quality gamut mapping even in
the absence of an ICC profile / 3DLUT.
Parsing the texture data as raw strings makes the textures the most
portable and self-contained. In order to facilitate different types of
shaders, the parse_user_shader interaction has been changed to instead
have it loop through blocks and call the passed functions for each valid
block parsed. This is more modular and also cleaner, with better code
separation.
Closes #4586.
Two changes, compounded into one since they affect the same logic:
1. Never use linearization for HDR downscaling
2. Always use linearization for interpolation
Instead of fixing p->use_linear at the beginning of pass_render_frame,
we flip it on "dynamically" as needed. I plan on killing this
p->use_linear field (along with other per-pass metadata) and moving them
into their own struct for tracking the "current" state of the video, but
that's a separate/upcoming refactor.
As a small bonus, reduce some code duplication in the interpolation
logic.
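The resulting decision boils down to roughly this (a pseudocode-level
sketch with made-up names, ignoring the pre-existing conditions that
can also enable linear light):

    bool use_linear = false;
    if (doing_interpolation) {
        use_linear = true;                 // (2) always linearize here
    } else if (downscaling) {
        use_linear = !is_hdr(img_params);  // (1) but never for HDR content
    }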
Fixes #4631
I've found more test cases where hwdec=cuda shits itself, even
hwdec=cuda-copy. So the whole “copyback is no worse than swdec” is
simply not true. Also, in the light of 10 bit media files and APIs
silently truncating to 8 bit, the warnings need to be generalized a bit.
It's no longer safe to say that “doesn't convert to RGB” means “perfect
playback”.
I've also added a very strong disclaimer to the whole hwdec scenario
clarifying why hwdec is usually a bad idea unless absolutely needed,
because I've seen issue after issue that is resolved by disabling hwdec.
This is done via compute shaders. As a consequence, the tone mapping
algorithms had to be rewritten to compute their known constants in GLSL
(ahead of time), instead of doing it once. Didn't affect performance.
Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
might be slow on other platforms. Needs testing.
Unfortunately, setting up the SSBO still requires OpenGL calls, which
means I can't have it in video_shaders.c, where it belongs. But I'll
defer worrying about that until the backend refactor, since then I'll be
breaking up the video/video_shaders structure anyway.
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This reminds of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
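Mechanically this is built on libavcodec's standard buffer callback; a
skeletal version (ignoring alignment requirements, pooling and all the
fallback logic) looks like this, with the two dr_* helpers being
hypothetical:

    #include <libavcodec/avcodec.h>

    static int dr_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
    {
        // Try to hand the decoder a caller-allocated (e.g. GPU-mapped)
        // buffer; otherwise fall back to libavcodec's own allocator.
        if (!dr_is_usable(avctx, frame))
            return avcodec_default_get_buffer2(avctx, frame, flags);
        return dr_alloc_into_gpu_buffer(avctx, frame, flags);
    }

    // installed once during decoder setup:
    //     avctx->get_buffer2 = dr_get_buffer2;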
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
Remove this code because it could be argued that it contains GPL-only
code (see commit 642e963c86 for details).
The remaining aspect methods appear to work just as well, are
potentially more compatible with other players, and the code becomes much
simpler.
Performance seems pretty much unchanged but I no longer get nasty spikes
on NUMA systems, probably because glBufferSubData runs in the driver or
something.
As a simplification of the code, we also just size the PBO to always
have the full size, even for cropped textures. This seems slower but not
by relevant amounts, and only affects e.g. --vf=crop. It also slightly
increases VRAM usage for textures with big strides.
This new code path is especially nice because it no longer depends on
GL_ARB_map_buffer_range, and no longer uses any functions that can
possibly fail, thus simplifying control flow and seemingly deprecating
the manpage's claim about possible image corruption.
In theory we could also reduce NUM_PBO_BUFFERS since it doesn't seem
like we're streaming uploads anyway, but leave it in there just in
case some drivers disagree...
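The upload path is now essentially just plain buffer updates (a
simplified sketch; the real code rotates through NUM_PBO_BUFFERS and
deals with strides):

    // Stream the image into the PBO, then source the texture upload
    // from it. None of these calls can fail in a way that would need
    // the old fallback/corruption handling.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[index]);
    glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, full_size, image_data);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    gl_format, gl_type, NULL); // NULL = read from bound PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);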
It used to use the "encoding" section. Change this to the default
section to remove another small special case. encoding-profiles.conf
didn't use this by default anyway. The previous revert could mitigate
potential impacts of this a little.
This is more of a niche use case than --ytdl-format and --ytdl-raw-options,
so a simple script option should be enough.
Either create lua-settings/ytdl_hook.conf with
'exclude=example.com,sub.example.com' option or
"--script-opts=ytdl_hook-exclude=example.com,sub.example.com"
This just indicates a fixed linear coefficient to multiply into the
signal, similar to the old option --target-brightness (but the inverse
thereof). Good for testing purposes, which is why I added it. (This also
corresponds somewhat to what zimg does)
It's now possible to request non-dumb mode as a user, even when not
using any non-dumb features. This change is mostly intended for testing,
so I can easily switch between dumb and non-dumb mode on default
settings. The default behavior is unaffected.
Was at least somewhat broken, and is misleading. I don't really have an
idea why FFmpeg has two AVOptions here anyway. We don't need to care,
and I'm only aware of 1 user trying this option ever.
See #4579.
This is exposed so that bjin/mpv-prescalers can use textureGatherOffset
for performance.
Since there are now quite a lot of parameters where it isn't quite clear
why they're all defined, add a paragraph to the man page that explains
them a bit.
This helps prevent unnaturally, weirdly colorized blown out highlights
for direct images of the sunlit sky and other way-too-bright HDR
content. I was debating whether to set the default at 1.0 or 2.0, but
went with the more conservative option that preserves more detail/color.
This is more efficient on my machine (nvidia), but only when applied to
groups of exactly 4 texels. So we switch to the more efficient
textureGather for groups of 4. Some notes:
- textureGatherOffset seems to be faster than textureGather by a
non-negligible amount, but for some reason, textureOffset is still
slower than a straight-up texture
- textureGather* requires GLSL 400; and at least on nvidia, this
requires actually allocating a GL 4.0 context.
- the code in opengl/common.c that clamped the GLSL version to 330 is
deprecated, because the old user shader style has been removed
completely in the meantime
- To combat the growing complexity of the polar sampling code, we drop
the antiringing functionality from EWA shaders completely, since it
never really worked well for EWA to begin with. (Horrific artifacting)
This allows filter functions to be prematurely cut off once their
contributions start becoming insignificant. This effectively prevents
wasted GPU time sampling from parts of the function that are essentially
reduced to zero by the window function, providing anywhere from a 10% to
20% speedup. (5700μs -> 4700μs for me)
This is more confusing than it helps, and forces escaping more stuff.
For example, for string lists we could remove all need for escaping with
-add and -pre.
The user can simply use multiple of those options.
This replaces `vo-performance` by `vo-passes`, bringing with it a number
of changes and improvements:
1. mpv users can now introspect the vo_opengl passes, which is something
that has been requested multiple times.
2. performance data is now measured per-pass, which helps both
development and debugging.
3. since adding more passes is cheap, we can now report information for
more passes (e.g. the blit pass, and the osd pass). Note: we also
switch to nanosecond scale, to be able to measure these passes
better.
4. `--user-shaders` authors can now describe their own passes, helping
users both identify which user shaders are active at any given time
as well as helping shader authors identify performance issues.
5. the timing data per pass is now exported as a full list of samples,
so projects like Argon-/mpv-stats can immediately read out all of the
samples and render a graph without having to manually poll this
option constantly.
Due to gl_timer's design being complicated (directly reading performance
data would block, so we delay the actual read-back until the next _start
command), it's vital not to conflate different passes that might be
doing different things from one frame to another. To accomplish this,
the actual timers are stored as part of the gl_shader_cache's sc_entry,
which makes them unique for that exact shader.
Starting and stopping the time measurement is easy to unify with the
gl_sc architecture, because the existing API already relies on a
"generate, render, reset" flow, so we can just put timer_start and
timer_stop in sc_generate and sc_reset, respectively.
The ugliest thing about this code is that due to the need to keep pass
information relatively stable in between frames, we need to distinguish
between "new" and "redrawn" frames, which bloats the code somewhat and
also feels hacky and vo_opengl-specific. (But then again, this entire
thing is vo_opengl-specific)
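For client API users, reading the new property amounts to something
like this (a sketch; traversal of the node tree is omitted):

    #include <mpv/client.h>

    mpv_node node;
    if (mpv_get_property(mpv, "vo-passes", MPV_FORMAT_NODE, &node) >= 0) {
        // node contains per-pass entries with their sample lists;
        // see the manpage for the exact layout.
        mpv_free_node_contents(&node);
    }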
The changes to path list options is basically getting rid of the need to
pass multiple paths to a single option. Instead, you can use the option
multiple times. The old behavior can be used by using the -set suffix
with the option.
Change some options to path lists. For example --script is now append by
default, and if you use --script-set, you need to use ":"/";" as
separator instead of ",".
--sub-paths/--audio-file-paths is a deprecated alias now, and will break
if the user tries to pass multiple paths to it. I'm assuming that if
these are used, most users will pass only 1 path anyway.
--opengl-shaders has more compatibility handling, since it's probably
rather common that users pass multiple options to it.
Also document all that in the manpage.
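For example (script names made up; the exact suffix spelling follows
the list option rules in the manpage):

    # append by default: both scripts are loaded
    mpv --script=a.lua --script=b.lua video.mkv

    # old single-option style, now spelled with the -set suffix
    # (use ";" instead of ":" on Windows)
    mpv --script-set=a.lua:b.lua video.mkv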
I'll probably regret this later, as it somewhat increases the complexity
of the option parser, rather than decreasing it.
I noticed that the previous default, bitstream, actually breaks with
some shitty anamorphic DVD rips that signal square pixel aspect in the
bitstream. So I think the "container" method is a better default.
You could do mpv_set_option(h, "no-fs", ""), which would behave like
"--no-fs" on the command line. At one point, this had to be emulated for
compatibility, and printed a deprecation warning. This was almost a year
ago, so remove it.
This was especially grating because it causes problems with the
option/property unification, is the only thing using OPT_FLAG_STORE, and
behaves weirdly with the client API or scripts.
It can be reimplemented in a much simpler way, although it needs
slightly more code. (Simpler because less special cases.)
cehoyos, who did not agree to the LGPL relicensing, added this in commit
240b743e. The actual implementation of it is already guarded with
HAVE_GPL. The field_dominance field in the option struct won't be
guarded.
We won't keep GPL-only core code forever, so deprecate it as well. To
apply forced deinterlacing, a libavfilter filter can probably be used
instead, or we merge this functionality into the --deinterlace option
(without using copyrighted stuff).
It was extended by "seru" in 8d190244. This person could not be reached
(or does not reply), and it's in the way of LGPL relicensing. Deprecate
it, and mark the (probably) affected parts of the code with HAVE_GPL. To
be fair, even though the osd.c parts were refactored from the original
code, there's probably no copyright by seru on it. But for now play it
safe. The mere existence of a 3rd OSD level is certainly not
copyrightable, so you still can set osd-level to 3 - just that it does
nothing.
As a relic of MPlayer, the manual listed 'I' as the keybind to show the
filename of the currently playing file on the OSD, but mpv does not do
this by default. This commit removes this incorrect information from
the mpv manpage.
This introduces (yet another...) mp_colorspace member, an enum `light`
(for lack of a better name) which basically tells us whether we're
dealing with scene-referred or display-referred light, but also a bit
more metadata (in which way is the scene-referred light expected to be
mapped to the display?).
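Sketched as code, the new member is an enum along these lines (names as
in csputils, give or take the exact value set):

    enum mp_csp_light {
        MP_CSP_LIGHT_AUTO,           // deduced from other parameters
        MP_CSP_LIGHT_DISPLAY,        // display-referred (the normal case)
        MP_CSP_LIGHT_SCENE_HLG,      // scene-referred, HLG OOTF
        MP_CSP_LIGHT_SCENE_709_1886, // scene-referred, 709+1886 OOTF
        MP_CSP_LIGHT_SCENE_1_2,      // scene-referred, fixed gamma 1.2
    };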
The addition of this parameter accomplishes two goals:
1. Allows us to actually support HLG more-or-less correctly[1]
2. Allows people playing back direct “camera” content (e.g. v-log or
s-log2) to treat it as scene-referred instead of display-referred
[1] Even better would be to use the display-referred OOTF instead of the
idealized OOTF, but this would require either native HLG support in
LittleCMS (unlikely) or more communication between lcms.c and
video_shaders.c than I'm remotely comfortable with
That being said, in principle we could switch our usage of the BT.1886
EOTF to the BT.709 OETF instead and treat BT.709 content as being
scene-referred under application of the 709+1886 OOTF; which moves that
particular conversion from the 3dlut to the shader code; but also allows
a) users like UliZappe to turn it off and b) supporting the full HLG
OOTF in the same framework. But I think I prefer things as they are
right now.
st2084 and std-b67 are really weird names for PQ and HLG, which is what
everybody else (including e.g. the ITU-R) calls them. Follow their
example.
I decided against naming them bt2020-pq and bt2020-hlg because it's not
necessary in this case. The standard name is only used for the other
colorspaces etc. because those literally have no other names.
List of changes:
1. Kill nom_peak, since it's a pointless non-field that stores nothing
of value and is _always_ derived from ref_white anyway.
2. Kill ref_white/--target-brightness, because the only case it really
existed for (PQ) actually doesn't need to be this general: According
to ITU-R BT.2100, PQ *always* assumes a reference monitor with a
white point of 100 cd/m².
3. Improve documentation and comments surrounding this stuff.
4. Clean up some of the code in general. Move stuff where it belongs.
"Almost" because this might contain copyright by michael, who agreed
with LGPL, but only once the core is LGPL. This is preparation for that
to happen.
Apart from that, the usual remarks apply. In particular, dec_video.c
started out quite chaotic with no modularization, but was later
basically gutted, and in general rewritten a bunch of times. Not going
to give a history lesson.
Special attention needs to be given to 3 patches by cehoyos, who did not
agree to the relicensing:
240b743ebd: --field-dominance
e32cbbf7dc: reinit VO if aspect ratio changes
306f6243fd: use container aspect if codec aspect unset (?)
The first patch is pretty clearly still in the current code, and needs
to be disabled for LGPL.
The functionality of the second patch is still active, but implemented
completely different, and as part of general frame parameter changes (at
the time of the patch, MPlayer already reinitialized the VO on frame
size and pixel format changes - all this was merged into a single check
for changing image parameters).
The third patch makes me a bit more uncomfortable. It appears the code
was moved to dec_video.c in de68b8f23c, and further changed in
82f0d373, 0a0bb905, and bf13bd0d. You could claim that cehoyos'
copyright still sticks. Fortunately, we implement alternative aspect
detection, which is simpler and probably preferable, and which arguably
contains none of the original code and logic, and thus should be fully
safe.
While I don't know if cehoyos' copyright actually still applies, I'm
more comfortable with making the code GPL-only for now. Also change the
default to use the (in future) plain LGPL code, and deprecate the one
associated with the GPL code, so we can eventually remove the GPL code.
But it's also possible we decide that the copyright doesn't apply, and
undo the deprecation and GPL guards.
I expect that users won't notice anything. If you ask me, the old aspect
method was probably an accidental bug instead of intentional behavior.
Although, the new aspect method was broken too, so I had to fix it.
image_writer.c has code originating from vf_screenshot.c, vo_jpeg.c, and
potentially others. vo_image.c is based on a bunch of those VOs as well,
and the intention was to replace them with a single codebase.
vo_tga.c was written by someone who was not or could not be contacted,
but it doesn't matter anyway, as no code from that initial patch was
used.
One rather old patch (57f77bb41a) reordered the libjpeg API calls,
and the author of the patch was not contacted. But at least with the
smoothing_factor override removed, this pretty much exactly corresponds
to the official libjpeg API example (and might even reflect a change to
those - didn't dig deeper). This removes the -jpeg-smooth option. While
we're at it, remove all the other dropped jpeg options from the manpage
(which was forgotten in past changes).
It was an attempt to move some MPlayer filters (which were removed from
mpv) to external, loadable filters. That worked well, but then the
MPlayer filters were ported to libavfilter (independently), so they're
available again. Also there is a more widely supported and more advanced
loadable filter system supported by mpv: vapoursynth.
In conclusion, vf_dlopen is not useful anymore, confusing, and requires
quite a bit of code (and probably wouldn't survive the rewrite of the
mpv video filter chain, which has to come at some point). It has some
implicit dependencies on internal conventions, like possibly the format
names dropped in the previous commit.
We also deprecated it last release. Drop it.
Before this, options with co->data==NULL (i.e. no storage) were not
added to the bridge (except alias options). There are a few options
which might make sense to allow via the bridge ("profile" and
"include"). So allow them.
In command_init(), we merely remove the co->data check, the rest of the
diff is due to switching the if/else branches for convenience.
We also must explicitly error on M_PROPERTY_GET if co->data==NULL. All
other cases check it in some way.
Explicitly exclude options from the property bridge that would be
added due to this but where the result would be pointless.
Implements JS with almost identical API to the Lua support.
Key differences from Lua:
- The global mp, mp.msg and mp.utils are always available.
- Instead of returning x, error, return x and expose mp.last_error().
- Timers are JS standard set/clear Timeout/Interval.
- Supports CommonJS modules/require.
- Added at mp.utils: getenv, read_file, write_file and a few more.
- Global print and dump (expand objects) functions.
- mp.options currently not supported.
See DOCS/man/javascript.rst for more details.
Not all tools apply line breaking. Which is probably good, because you
wouldn't be able to put lines in there which should _not_ be broken.
Don't ask me whether there is an "official" git convention for this.
Sometime earlier, "sub-ass-style-override" was renamed to
"sub-ass-override". The option's documentation was updated to support
this, but not the documentation for the hotkey that toggles this option.
This commit updates the keyboard shortcut documentation to fix that.
Signed-off-by: wm4 <wm4@nowhere>
I call it `mobius` because apparently the form f(x) = (cx+a)/(dx+b) is
called a Möbius transform, which is the algorithm this is based on. In
the extremes it becomes `reinhard` (param=0.0) and `clip` (param=1.0),
smoothly transitioning between the two depending on the parameter.
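(Roughly speaking: with d = 0 the curve degenerates into a linear
function, i.e. `clip`, while a = 0 with b = d = 1 gives f(x) = cx/(x+1),
the `reinhard` shape.)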
This is a useful tone mapping algorithm since the tunable mobius
transform allows the user to decide the trade-off between color accuracy
and detail preservation on a continuous scale. The default of 0.3 is
already far more accurate than `reinhard` while also being reasonably
good at preserving highlights, without suffering from the overall
brightness drop and color distortion of `hable`.
For these reasons, make this the new default. Also expand and improve
the documentation for these tone mapping functions.
List of changes:
1. Rename `signfs` to `scale`, to better match what it actually does
(force --sub-scale to apply to ASS subtitles), and fix the blatantly
wrong documentation (it actually specifically does *not* apply to
signs)
2. Rename `--sub-ass-style-override` to `--sub-ass-override` to help
reduce confusion between it and `--sub-ass-force-style`, as well as
pointing out that it doesn't necessarily actually override styles.
(The new `scale` option, for example, only sets
ASS_OVERRIDE_BIT_FONT_SIZE, but not ASS_OVERRIDE_BIT_STYLE)
3. Mention that `--sub-ass-override` is generally sort of smart about
only overriding dialog, not signs.
In a multi GPU scenario, it may be desirable to use different GPUs
for decode and display responsibilities. For example, if a secondary
GPU has better video decoding capabilities.
In such a scenario, we need to initialise a separate context for each
GPU, and use the display context in hwdec_cuda, while passing the
decode context to avcodec.
Once that's done, the actual hand-off between the two GPUs is
transparent to us (it happens during the cuMemcpy2D operation which
copies the decoded frame from a cuda buffer to the OpenGL texture).
In the end, the bulk of the work is around introducing a new
configuration option to specify the decode device.
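In practice, usage looks like this (device index illustrative):

    mpv --hwdec=cuda --cuda-decode-device=1 video.mkv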
Update man page for fonts.conf and subfont.ttf. These are two
undocumented features in mpv. They were only hardcoded into sub/ass_mp.c
and could not be found anywhere else in the entire codebase. Git log
reveals that fonts.conf was added in 2013 while subfont.ttf was brought
in by an ancient patch of MPlayer in 2002...
These are two quite useful undocumented features for when you do not
want to mess with the global fonts.conf to include more fonts.
Also document ~/.config/mpv/fonts/ directory and suggest using fonts.conf
to include additional fonts.
mpv reads all files in ~/.config/mpv/fonts/ directory into memory. If
there are a lot of fonts in that directory, mpv would use a lot of
memory. Using ~/.config/mpv/fonts.conf to include additional fonts is
more memory-efficient.
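A minimal ~/.config/mpv/fonts.conf pulling in one extra directory could
look like this (standard fontconfig syntax, path made up):

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
        <dir>/path/to/extra/fonts</dir>
    </fontconfig>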
af_volume is deprecated, and so are its replaygain sub-options. To make
it possible to use replaygain without deprecated options (and of course
to make it available at all after af_volume is dropped), reintroduce
them as top-level options.
This also means that they are easily changeable at runtime by using them
as properties. Change the "volume" property to use the new update
mechanism as well.
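For example (values illustrative):

    # as options:
    mpv --replaygain=album --replaygain-preamp=3 music.flac

    # or changed at runtime via input.conf:
    r cycle-values replaygain track album no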
We don't actually bother sharing the implementation between new and
deprecated mechanisms, as the deprecated one will simply be deleted.
For the from_dB() functions, we mention anders' copyright, although I'm
not sure if a mere formula is copyrightable. This will have to be
determined later.
This whole change is mostly untested. Our distributed human CI will take
care of it.
It's all explained in the DOCS changes. Although this option was always
kind of obscure and pointless. Until it is removed, the only reason for
setting it would be to raise the static default limit, so change its
default to INT_MAX so that it does nothing by default.
Instead of pausing when --keep-open is active, stop at the end but
continue playing if the user seeks backwards, and then stop again when
the end is reached.
Signed-off-by: wm4 <wm4@nowhere>
Over the PR, the option was renamed, and the manpage additions were
slightly changed/enhanced.
Also "announce" the plans to undeprecate it with changed semantics
later. The deprecation period is needed to warn script authors and
client API users (etc.) of the change.
This is done because everyone seems to expect --loop to loop the current
file, not the playlist. Even in cases when only 1 file is on the
playlist, the --loop-file semantics seem to be preferred.
Mostly because of ANGLE (sadly).
The implementation became unpleasantly big, but at least it's relatively
self-contained.
I'm not sure to what degree shaders from different drivers are
compatible as in whether a driver would randomly misbehave if it's fed
a binary created by another driver. The useless binayFormat parameter
won't help it, as they can probably easily clash. As usual, OpenGL is
pretty shit here.
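The GL side of such a cache is essentially this (a sketch, no error
handling; needs <stdlib.h> plus whatever GL loader is in use):

    // Save: serialize the linked program together with its
    // driver-specific binary format tag.
    GLint len = 0;
    glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
    void *data = malloc(len);
    GLenum bin_format = 0;
    glGetProgramBinary(prog, len, NULL, &bin_format, data);

    // Load: this can legitimately fail (driver update, format clash),
    // in which case the program must be recompiled from source.
    glProgramBinary(prog, bin_format, data, len);
    GLint ok = 0;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) {
        // fall back to compiling the shaders normally
    }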
Now you can for example do "--vf=hue=h=60" - there is no "hue" filter in
mpv, so libavfilter's will be used.
This has certain caveats (see manpage).
The point of this is providing a relatively smooth transition path to
removing our own filter stuff.
The plan is to nuke the custom filter chain completely. It's not clear
what will happen to the still needed builtin filters (mostly hardware
deinterlacing and vf_vapoursynth). Most likely we'll replace them with
different filter chain concept (whose main purpose will be providing
builtin things and bridging to libavfilter).
The undocumented "warn" options are there to disable deprecation
warnings when the player inserts filters automatically.
The same will be done to audio filters, at a later point.
And also change input.conf to make all screenshots async. (Except the
every-frame mode, which always uses synchronous mode and ignores the
flag.) By default, the "screenshot" command is still asynchronous,
because scripts etc. might depend on this behavior.
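In input.conf terms, the default bindings now read roughly like this:

    s  async screenshot
    S  async screenshot video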
This is only partially async. The code for determining the filename is
still always run synchronously. Only encoding the screenshot and writing
it to disk is asynchronous. We explicitly document the exact behavior as
undefined, so it can be changed any time.
Some of this is a bit messy, because I wanted to avoid duplicating the
message display code between sync and async mode. In async mode, this is
called from a worker thread, which is not safe because showing a message
accesses the thread-unsafe OSD code. So the core has to be locked during
this, which implies accessing the core and all that. So the code has
weird locking calls, and we need to do core destruction in a more
"controlled" manner (thus the outstanding_async field).
(What I'd really want would be the OSD simply showing log messages
instead.)
This is pretty untested, so expect bugs.
Fixes #4250.
Obviously, this has no effect on commands which do not support this
explicitly. A later commit will enable this for screenshots.
Also add some wording on mpv_command_async(), which has nothing to do
with this. Having a more elegant, unified behavior would be nice. But
the API function was not created for this - it's merely for running
commands _synchronously_ on the core, but without blocking the client
API caller (if the API user consistently uses only async functions).