mirror of https://github.com/mpv-player/mpv synced 2024-11-18 21:16:10 +01:00
Commit Graph

46183 Commits

Marco Migliori
3a2bc158bb vo_drm: make the osd as large as the screen
Before this commit, the drm vo drew the osd over the scaled image, and
then copied the result onto the framebuffer, shifted. This made the
frame centered, but forced the osd to be only as large as the image.
This was inconsistent with other vo's, covered the image with the
progress indicator even when a black band was at the top of the screen,
made the progress indicator wrap on narrow videos, etc.

The change is to always use an image as large as the screen. The frame
is copied to it, scaled and shifted, and the osd is drawn over it. The
result is finally copied to the framebuffer without any shift, since it
is already the same size as the framebuffer.

Technically, cur_frame is an image as large as the screen and
cur_frame_cropped is a dummy reference to it, cropped to the size of
the scaled video. This way, copying the scaled image to
cur_frame_cropped positions the image in the right place in cur_frame,
which can then have the osd added to it and copied to the framebuffer.
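
As an illustration only (not the literal vo_drm.c code, and the helper
usage and variable names here are assumptions), the shape of that setup
with mpv's mp_image/osd API is roughly:

    #include "common/common.h"
    #include "sub/osd.h"
    #include "video/mp_image.h"

    static void compose_frame(struct mp_image *scaled_frame, struct mp_rect dst,
                              struct osd_state *osd, struct mp_osd_res osd_res,
                              double pts, int imgfmt, int screen_w, int screen_h)
    {
        // cur_frame covers the whole screen; cur_frame_cropped is a dummy
        // reference into it, cropped to where the scaled video should land.
        struct mp_image *cur_frame = mp_image_alloc(imgfmt, screen_w, screen_h);
        struct mp_image *cur_frame_cropped = mp_image_new_dummy_ref(cur_frame);
        mp_image_crop(cur_frame_cropped, dst.x0, dst.y0, dst.x1, dst.y1);

        // Copying into the cropped reference places the video at the right
        // offset inside cur_frame, and the OSD is drawn over the full
        // screen-sized image.
        mp_image_copy(cur_frame_cropped, scaled_frame);
        osd_draw_on_image(osd, osd_res, pts, 0, cur_frame);

        // ... cur_frame is then copied to the framebuffer without any shift;
        // cleanup of the two references is omitted here ...
    }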
2018-02-11 17:51:15 -08:00
wm4
9f595f3a80 vo_gpu: make screenshots use the GL renderer
Using the GL renderer for color conversion makes sure screenshots use
the same conversion as normal video rendering. It can do this for all
types of screenshots.

The logic for when to write 16-bit PNGs changes. To approximate the old
behavior, we decide by looking at whether the source video format has
more than 8 bits per component. We apply this logic even for window
screenshots. Also, 16-bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 has 3 components (6 bytes per pixel) and is usually not
supported by GPUs for rendering, so we have to use RGBA64, which forces
an alpha channel.

Will break for users who use --target-trc and similar options.

I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.

This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
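
For reference, the generic GL/GLES 3.x pattern for reading a texture back
through a temporary FBO (plain GL calls, not mpv's ra wrappers; assumes a
renderable RGBA8 texture):

    #include <GLES3/gl3.h>

    static void read_tex_rgba8(GLuint tex, int w, int h, void *pixels)
    {
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        // glReadPixels exists on GLES, unlike glGetTexImage().
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
    }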

Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
2018-02-11 17:45:51 -08:00
wm4
7b1e73139f vo_gpu: add internal ability to skip osd/subs for rendering
Needed for the following commit.
2018-02-11 17:45:51 -08:00
wm4
bff8cfe3f0 vo_gpu: use blit() only if target ra_tex supports it
Even if RA_CAP_BLIT is set, this might just not be enabled for the
target ra_tex.
2018-02-11 17:45:51 -08:00
Niklas Haas
ff08df5bb1 vo_gpu: add memory barrier on the HDR peak detection
Otherwise, the peak detection state can become inconsistent in rare
cases, which might explain the issues when taking screenshots in #5499.
2018-02-11 16:45:20 -08:00
Niklas Haas
4e7f4f10ce vo_gpu: correctly infer HDR peak detection support
The re-ordering of commits e3d93fd and 0870859 ended up swallowing the
change which made the HDR tone mapping algorithm actually check for
RA_CAP_NUM_GROUPS support.
2018-02-11 16:45:20 -08:00
Niklas Haas
4c2edecd7d vo_gpu: refactor HDR peak detection algorithm
The major changes are as follows:

1. Use `uint32_t` instead of `unsigned int` for the SSBO size
   calculation. This doesn't really matter, since a too-big buffer will
   still work just fine, but since `uint` is a 32-bit integer by
   definition this is the correct way to do it.

2. Pre-divide the frame_sum by the num_wg immediately at the end of a
   frame. This change was made to prevent overflow. At 4K screen size,
   this code is currently already very much at risk of overflow, especially
   once I started playing with longer averaging sizes. Pre-dividing this
   out makes it just about fit into 32-bit even for worst-case PQ
   content. (It's technically also faster and easier this way, so I
   should have done it to begin with). Rename `frame_sum` to `frame_avg`
   to clearly signal the change in semantics.

3. Implement a scene transition detection algorithm. This basically
   compares the current frame's average brightness against the
   (averaged) value of the past frames. If it exceeds a threshold, which
   I experimentally configured, we reset the peak detection SSBO's state
   immediately - so that it just contains the current frame. This
   prevents annoying "eye adaptation"-like effects on scene transitions
   (a rough C model of this logic follows the list).

4. As a result of the previous change, we can now use a much larger
   buffer size by default, which gives a more stable and less flickery
   result. I experimented with values between 20 and 256 and settled on
   the new value of 64. (I also switched to a power-of-2 array size,
   because I like powers of two.)
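
As mentioned in point 3, a rough C model of the cross-frame averaging with
the scene-cut reset (illustrative only; the real implementation runs in a
compute shader, and the threshold value here is just an assumption):

    #include <math.h>
    #include <string.h>

    #define PEAK_BUF_SIZE   64      // past frames kept (power of two)
    #define SCENE_THRESHOLD 0.2f    // assumed; the real cutoff was tuned by hand

    struct peak_state {
        float frame_avg[PEAK_BUF_SIZE]; // per-frame averages, pre-divided by num_wg
        int count;
    };

    static void push_frame_avg(struct peak_state *st, float avg)
    {
        float total = 0;
        for (int i = 0; i < st->count; i++)
            total += st->frame_avg[i];
        float running_avg = st->count ? total / st->count : avg;

        // Scene transition: the new frame deviates too much from the running
        // average, so drop the history and start over with this frame only.
        if (fabsf(avg - running_avg) > SCENE_THRESHOLD)
            st->count = 0;

        if (st->count == PEAK_BUF_SIZE) {
            memmove(st->frame_avg, st->frame_avg + 1,
                    (PEAK_BUF_SIZE - 1) * sizeof(float));
            st->count--;
        }
        st->frame_avg[st->count++] = avg;
    }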
2018-02-11 16:45:20 -08:00
Ricardo Constantino
20df21746a
appveyor: use undocumented --ask to force yes for all questions
Also, remove the progress bar; it's just spammy for no reason.
2018-02-11 14:16:50 +00:00
Zehua Chen
000a0e2775
player: correctly set track information on adding external files
Before this commit, auto_loaded and lang were only set for the first
track in auto-loaded external files. The same applied to the title and
lang arguments of the sub-add and audio-add commands.

Fixes #5432
2018-02-10 06:50:32 -08:00
Rostislav Pehlivanov
6161cfd781 wayland_common: fix idle_inhibitor protocol segfault
The pointer is used as a state and wasn't zeroed after seeks.
2018-02-09 21:16:14 +02:00
LongChair
b01623e0d2 drmprime interop : Add frames triple buffering
Currently, using the drmprime interop with external mpv integration can lead
to rendering issues because the current frame is being released too early.

Typically using this with Qt results in one frame shift because Qt
will do waitforvsync and swap, rather than swap and waitforvsync.
This leads to tearing, as the framebuffer is released while being
displayed on screen.

In order to avoid releasing a framebuffer that is still displayed, we
keep the framebuffer alive for one more frame with triple buffering, to
make sure that whatever rendering process is used, the framebuffer will
not be released while it's still on screen.
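
Illustrative only (the frame type and helpers are made up, this is not the
interop code): the rotation that keeps the previously shown framebuffer
alive for one extra frame looks roughly like this:

    struct frame;
    void release_frame(struct frame *f);   // hypothetical
    void display_frame(struct frame *f);   // hypothetical

    static struct frame *slots[3];
    static int cur;

    static void present(struct frame *new_frame)
    {
        int next = (cur + 1) % 3;
        // The slot being reused was displayed two presents ago, so it is
        // guaranteed to be off-screen by now and safe to release.
        if (slots[next])
            release_frame(slots[next]);
        slots[next] = new_frame;
        display_frame(new_frame);
        cur = next;
    }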

This was tested on RockChip Rock64
2018-02-07 22:40:30 -08:00
wm4
9282a34fbf vd_lavc: fix stall with some uses of --hwdec=copy
Also a regression of the filter change. The new code is more picky about
EOF states, and it turns out the weird delay queue (used with some hwdec
copy back modes only) accidentally dropped an EOF event. It reset the
avctx before the delay queue was drained, which meant it never returned
the expected AVERROR_EOF status code.

Also don't signal EOF when copy back fails. It should just try to
continue until fallback is performed.
2018-02-05 23:34:42 -08:00
Niklas Haas
e3d93fde2f vo_gpu: port HDR tone mapping algorithm from libplacebo
The current peak detection algorithm was very buggy (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.

The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)

In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.
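
For reference, the Hable curve in question, written as plain C (vo_gpu
emits the equivalent GLSL; the constants are the standard filmic values):

    static float hable(float x)
    {
        const float A = 0.15f, B = 0.50f, C = 0.10f,
                    D = 0.20f, E = 0.02f, F = 0.30f;
        return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F;
    }

    // Tone-map a value normalized to the signal peak, rescaled so that the
    // chosen white point W maps to 1.0.
    static float tone_map_hable(float x, float W)
    {
        return hable(x) / hable(W);
    }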

I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.

This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
2018-02-05 23:11:18 -08:00
Niklas Haas
0870859e3d vo_gpu: add RA_CAP for gl_NumWorkGroups
SPIRV-Cross doesn't support this for the time being. It's possible this
could go away again at a later date.
2018-02-05 23:11:18 -08:00
Niklas Haas
5997248505 vo_gpu: vulkan: correctly enable textureGatherOffset
This also requires a Vulkan feature / SPIR-V capability to function.
2018-02-05 02:49:03 -08:00
Niklas Haas
f151ac57cb vo_gpu: vulkan: don't issue queries for unused timers
The vulkan validation layers warn you if you try requesting a query
result from a timer that hasn't even been started yet, so we have to do
a bit of extra work to keep track of which indices have actually been
used so far, and skip the queries for the unused ones.
2018-02-05 02:49:03 -08:00
Niklas Haas
f92e45bb8c vo_gpu: vulkan: try enabling required features
Instead of enabling every feature under the sun, make an effort to just
whitelist the ones we actually might use. Turns out the extended storage
format support is needed for some of the storage formats we use, in
particular rgba16.
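
For illustration, the Vulkan side of such a whitelist (plain Vulkan API;
the surrounding device-creation code is simplified and partly assumed):

    #include <vulkan/vulkan.h>

    static void fill_enabled_features(VkDeviceCreateInfo *dinfo,
                                      VkPhysicalDeviceFeatures *feats)
    {
        // Enable only what is actually needed, e.g. extended storage image
        // formats, which storage use of rgba16 requires.
        *feats = (VkPhysicalDeviceFeatures){0};
        feats->shaderStorageImageExtendedFormats = VK_TRUE;

        dinfo->sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        dinfo->pEnabledFeatures = feats;
        // queue and extension setup omitted
    }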
2018-02-05 02:49:03 -08:00
Niklas Haas
92778873ad vo_gpu: vulkan: add missing buffer barrier fields
These were accidentally omitted.
2018-02-05 02:49:03 -08:00
wm4
b79190561f f_decoder_wrapper: fix log message incorrect for audio
This code is used by both video and audio, so the text should not talk
about video.
2018-02-05 02:47:14 -08:00
wm4
09af5760bb f_demux_in: give it a slightly better filter name
Matters for logging.
2018-02-05 02:47:14 -08:00
wm4
2f6dc93276 filter: don't randomly lose async wakeup notifications
Another "what was I thinking" thing - destroying filters explicitly
skipped async wakeups for no reason. These were notifications for
filters that are not going to be destroyed too, and so their wakeup will
be lost, leading to stalled playback. This is completely unnecessary and
the special code can be removed.

Fixes #5488. (This case destroyed all audio filters due to AO init
failure, which could clear out the f_demux_in.c wakeup for video,
and "freeze" playback.)
2018-02-05 02:47:14 -08:00
wm4
beb8d27912 vd_lavc: fix recently broken hardware decode fallback
This is a dataflow issue caused by the filters change. When the fallback
happens, vd_lavc does not return a frame, but also does not accept a new
packet, which confuses lavc_process(). Fix this by immediately retrying
to feed the buffered packet and decode a frame on fallback.

Fixes #5489.
2018-02-04 16:24:17 -08:00
wm4
59f9547fb5 vf_vapoursynth: always keep input frame array filled
In theory (and practice), this is not needed, because the VS filter's
get-frame callback will cause the process function to be called again if
there's not enough data. But it's still a bit weird to just add one more
frame on each iteration, so make it cleaner and make it request frames
until the input array is full.
2018-02-03 14:51:33 -08:00
wm4
e34c5dc17c vf_vapoursynth: fix locking
This was obviously nonsense, and a previous "fix" to this code was
nonsense too. What is really needed here is temporarily dropping the
lock while calling destroy_vs()/reinit_vs().
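
Illustrative shape of the fix (the lock field name is an assumption;
destroy_vs()/reinit_vs() are the functions named above):

    // Temporarily drop the filter lock around the heavy teardown/reinit,
    // then re-acquire it before touching shared state again.
    pthread_mutex_unlock(&p->lock);
    destroy_vs(p);
    reinit_vs(p);
    pthread_mutex_lock(&p->lock);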

Fixes #5470.
2018-02-03 14:51:33 -08:00
Ilya Tumaykin
f4f24c105f
tests: stop comparing floats against DBL_EPSILON, use FLT_EPSILON
Fixes #5253.
2018-02-03 13:56:08 -08:00
wm4
d7db42d27f
swresample: minor simplification
Cosmetic and no change in behavior. At least I think this looks simpler.
2018-02-03 05:01:34 -08:00
wm4
3d4071e6e5
swresample: remove unnecessary request for new input
We called mp_pin_out_request_data() if there was input _and_ output.
This is not how it should be: we should request new input only if output
was requested, but we could not produce any output.

On the other hand, the upper half of the process() function will request
new input if output is required, but all input was consumed. But this
requires calling mp_filter_internal_mark_progress(), as otherwise the
general filter logic would not know that we can continue.
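
Sketched against mpv's filter API for illustration (the mp_pin/mp_filter
calls are from filters/filter.h and filters/filter_internal.h, while
produce_output() and input_exhausted() are made-up placeholders for the
swresample-specific parts):

    #include <stdbool.h>
    #include "filters/filter.h"
    #include "filters/filter_internal.h"

    static bool produce_output(struct mp_filter *f);   // placeholder
    static bool input_exhausted(struct mp_filter *f);  // placeholder

    static void process(struct mp_filter *f)
    {
        if (!mp_pin_in_needs_data(f->ppins[1]))
            return;                             // nobody asked for output

        if (!produce_output(f) && input_exhausted(f)) {
            // Output was requested but could not be produced: only now ask
            // for new input, and mark progress so the filter is re-run once
            // the data arrives.
            mp_pin_out_request_data(f->ppins[0]);
            mp_filter_internal_mark_progress(f);
        }
    }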
2018-02-03 05:01:34 -08:00
wm4
87d8f292f5
swresample: actually reinit resampler on large speed changes
If the speed is changed by a large amount, we need to effectively change
the output rate by a large amount, and swr_set_compensation() is
apparently not designed to handle such large changes well. So it's
better to reinitialize the resampler on all large changes.

Also, strictly reinitialize the resampler if the rate changes, otherwise
it could happen that libavresample (which does not automatically
initialize resampling if avresample_set_compensation() is used) would
never apply speed changes properly.
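
A minimal sketch of the combined rule (the "large change" cutoff is an
assumption; the real code also recomputes the compensation from the
resampler delay):

    #include <math.h>
    #include <stdbool.h>

    // Rebuild the resampler instead of nudging it with swr_set_compensation():
    // always when the rate changes, and on any speed change too large for
    // compensation to handle gracefully.
    static bool resampler_needs_reinit(int new_rate, int old_rate,
                                       double new_speed, double old_speed)
    {
        return new_rate != old_rate ||
               fabs(new_speed / old_speed - 1.0) > 0.01;  // cutoff assumed
    }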

Also, better document some conditions that handle corner cases (remove
the inline condition from the if gating the compensation code).

It also appears that we crashed with very large compensation ratios
(when raising audio speed quickly by keeping the "[" key down), and this
commit accidentally mitigates it by not allowing large compensation.
2018-02-03 05:01:33 -08:00
wm4
07c54d8c5c
loadfile: make --lavfi-complex runtime changes more flexible
Setting lavfi-complex at runtime will now forcefully reselect the tracks
as needed, even if it was a "proper" track selection via --aid or --vid.
Before this commit, it just failed and complained that the VO/AO was
already "used".

Requested.
2018-02-03 05:01:33 -08:00
wm4
42125844f9
loadfile: initialize decoders after outputs for --lavfi-complex
This makes it actually somewhat simpler, and doesn't have any
disadvantages. It should also make some new features easier.

Mostly just moves code around.
2018-02-03 05:01:33 -08:00
wm4
2e8bb48ae8
loadfile: fix crash in some cases of setting --lavfi-complex at runtime
The somewhat confusing thing is that many filters (including track->dec)
have a public struct, but to free them, you need to free the mp_filter
pointer itself (track->dec->f). The assignment wrote to a dangling
pointer, instead of removing the dangling pointer.

(Other than that, this idiom is actually nice.)
2018-02-03 05:01:32 -08:00
wm4
34fe10e159
loadfile: remove minor unneeded things from --lavfi-complex setup
2018-02-03 05:01:32 -08:00
wm4
880ea467ca
f_output_chain: remove unused got_input_eof field
Was used by the player code before decoders were moved to filters.
2018-02-03 05:01:32 -08:00
wm4
c1b15ae437
vf_vapoursynth: fix obscure/impossible leak
Unknown frames were not freed properly. Although this doesn't really
happen anyway, because we're never going to feed audio frames to a video
filter chain. Since it's theoretically possible, and all other filters
handle this consistently, fix it anyway.
2018-02-03 05:01:31 -08:00
wm4
9224ae4fff
vf_vapoursynth: fix output colorspace flags and other attributes
Properly initialize the output frame parameters other than image format
and size. This includes colorspace hints. (We're still not reading them
back from VapourSynth if it sets them, though. Usually it doesn't
anyway.)
2018-02-03 05:01:31 -08:00
wm4
7393f4d320
vf_vapoursynth: fix potential deadlock on init failure
When VS initialization failed, it could hang due to forgetting to
release the mutex.
2018-02-03 05:01:30 -08:00
wm4
60d3327b0b
vf_vapoursynth: initialize start timestamp properly
VapourSynth can't pass through timestamps, only frame durations. So we
need to remember the timestamp of the very first frame passed to it.
This was accidentally set to 0 instead of NOPTS on init, so inserting
the filter during playback could show strange behavior.

Might be part of #5470.
2018-02-03 05:01:30 -08:00
wm4
a4392168f9
f_utils: fix leak in frame duration filter
vf_vapoursynth used this. Could cause a crash at VO uninit, if the
leaked frame was allocated via VO DR.
2018-02-03 05:01:30 -08:00
wm4
4f7a56e0c5
video: fix passing down FPS to vf_vapoursynth
To make this less of a mess, remove one of the redundant container_fps
fields.

Part of #5470.
2018-02-03 05:01:29 -08:00
wm4
7019e0dcfe
swresample: limit output size of audio frames
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.

Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
2018-02-03 05:01:29 -08:00
wm4
171ec0a7e4
af_scaletempo: output minimally sized audio frame
This helps the filter to adapt much faster to speed changes. Before this
commit, the filter just converted and output the full input frame, which
could cause problems with large input frames. This was made worse by
certain filters like dynaudnorm or loudnorm outputting pretty large
frames.

This commit changes the filter from trying to convert all input at once
to only outputting a single internally filtered frame. Internally, this
filter already output data in units of 60ms by default (controlled by
the "stride" sub-option), and concatenated as many output frames as
necessary to consume all input.

Behavior is still kind of bad when inserting the filter. This is because
the large frames can be buffered up after the insertion point, so the
speed change will be performed with a larger latency. The scaletempo
filter can't do anything against this, although it can be fixed by
inserting scaletempo as a user filter as part of --af.
2018-02-03 05:01:29 -08:00
wm4
debc17663d
filter: add/use a convenience function
I guess this is generally useful for filters which buffer data
internally.
2018-02-03 05:01:28 -08:00
wm4
afb167cfd2
options: slightly improve filter help output for lavfi bridge
--vf=help will now list libavfilter filters, and e.g. --vf=yadif=help
will list libavfilter filter options.

The latter is rather bare, because the AVOption API is really awful
(holy shit how is it so bad), and would require us to handle _every_
option type manually.

Alternatively we could call av_opt_show2(), which ffmpeg uses for help
output in its CLI tools and which is much more detailed. But it's rather
foreign and forces output through av_log(), so I don't really want to
use it.
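
For comparison, the bare-bones way to enumerate a libavfilter filter's
AVOptions against the public FFmpeg API (standalone example, not the mpv
code):

    #include <stdio.h>
    #include <libavfilter/avfilter.h>
    #include <libavutil/opt.h>

    static void list_filter_options(const char *name)
    {
        const AVFilter *filter = avfilter_get_by_name(name);
        if (!filter || !filter->priv_class)
            return;
        const AVOption *o = NULL;
        // av_opt_next() walks the options declared by the filter's AVClass;
        // printing anything nicer means switching on every AVOptionType.
        while ((o = av_opt_next(&filter->priv_class, o)))
            printf("  %-24s %s\n", o->name, o->help ? o->help : "");
    }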
2018-02-03 05:00:52 -08:00
wm4
1742614505 options: pretty print default values with --list-options
2018-02-01 10:21:55 +01:00
wm4
8b3306924d codecs: remove unused family field
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. binary codec loader vs. builtin decoders). It lost
meaning in mpv as non-libavcodec things were dropped. Now it doesn't
serve any purpose anymore.

Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
2018-02-01 10:21:55 +01:00
wm4
4b567aeac8 manpage: clarify some --vf options
In particular, mention deprecated things.
2018-01-31 11:12:08 +01:00
wm4
a9f97b26d8 Revert "demux_mkv: remove remaining GPL code"
This reverts commit b7f90be567.

The author agreed to the relicensing now (if that code is affected by
the original copyright at all - that was the only line possibly left of
it).
2018-01-31 03:54:59 +01:00
wm4
e197ca3dd5 Copyright: fix missing word
2018-01-31 03:50:22 +01:00
wm4
7f3c7100d5 cue: strip quotes and leading whitespace from tags
If tags like TITLE have the whole parameter enclosed in " quotes, strip
them. Also remove the leading whitespace, which was always included,
even if it was just a single space.
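
Illustrative only (not the actual cue.c parser), the kind of cleanup this
applies to a tag value:

    #include <ctype.h>
    #include <string.h>

    // Strip leading whitespace and one pair of surrounding double quotes.
    // Modifies the string in place and returns the cleaned-up start.
    static char *clean_tag_value(char *s)
    {
        while (*s && isspace((unsigned char)*s))
            s++;
        size_t len = strlen(s);
        if (len >= 2 && s[0] == '"' && s[len - 1] == '"') {
            s[len - 1] = '\0';
            s++;
        }
        return s;
    }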

Fixes #5462.
2018-01-30 14:01:15 +01:00
Ricardo Constantino
eaa97daf65
ytdl_hook: pass http proxy to ffmpeg
FFmpeg only supports HTTP proxies and ignores the setting if
the resulting URL is HTTPS. SOCKS proxies are not supported.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so
the colons don't confuse mpv.

You need to pass it as an option because youtube-dl doesn't give
us the proxy.

Or just set the `http_proxy` environment variable, as recommended before.

Added example using -append, which doesn't need escaping.
2018-01-30 12:19:34 +00:00