mirror of https://github.com/mpv-player/mpv synced 2024-11-14 22:48:35 +01:00
Commit Graph

4562 Commits

Author SHA1 Message Date
Philip Langdale
901b3dddb0 wayland: implement minimize and maximize related VOCTRLs
We primarily care about pseudo-decorations for wayland, where
the compositor may not support server-side decorations. So let's
implement the minimize and maximize commands and return the
maximized window state.
2019-11-29 16:56:20 +08:00
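
A minimal sketch of what such a VOCTRL mapping onto the xdg-shell protocol
can look like (the VOCTRL names and struct fields here are illustrative, not
mpv's exact definitions; the xdg_toplevel_* calls are the real protocol API):

    static int control_minmax(struct vo *vo, uint32_t request, void *arg)
    {
        struct vo_wayland_state *wl = vo->wl;
        switch (request) {
        case VOCTRL_MINIMIZE:
            xdg_toplevel_set_minimized(wl->xdg_toplevel);
            return VO_TRUE;
        case VOCTRL_MAXIMIZE:
            if (*(int *)arg)
                xdg_toplevel_set_maximized(wl->xdg_toplevel);
            else
                xdg_toplevel_unset_maximized(wl->xdg_toplevel);
            return VO_TRUE;
        }
        return VO_NOTIMPL;
    }

The maximized state itself is reported back by the compositor through the
xdg_toplevel configure event, which is where the maximized window state
returned to the player comes from.
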
Philip Langdale
c2bd3b1ecc command: add window-maximized and make window-minimized settable
If we want to implement window pseudo-decorations via OSC, we need a
way to tell the vo to minimize and maximize the window. Today, we have
minimized as a read-only property, and no property for maximized.

Let's make minimized settable and add a maximized property to go with
it. In turn, that requires us to add VOCTRLs for minimizing or
maximizing a window, and an additional WIN_STATE to indicate a
maximized window.
2019-11-29 16:56:20 +08:00
Philip Langdale
f3c2f1f6aa wayland: restore window geometry after un-maximize
At least with gnome-shell (I know, I know), the compositor does
not provide the old window size when leaving the maximized state.
Instead, we get a toplevel_config event with a 0x0 size and no
additional states.

Today, we already save the window geometry to restore it when leaving
the fullscreen state, so we just need a small change for it to
kick in for leaving the maximized state. If I read this correctly,
we'll still respect the size passed by a compositor that actually
provides the old size.
2019-11-29 16:56:20 +08:00
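
A sketch of the idea inside the xdg_toplevel configure handler (field and
function names are illustrative; the real handler also deals with the
fullscreen, maximized and activated states):

    static void handle_toplevel_config(void *data, struct xdg_toplevel *tl,
                                       int32_t width, int32_t height,
                                       struct wl_array *states)
    {
        struct vo_wayland_state *wl = data;
        // gnome-shell reports a 0x0 size with no states when leaving the
        // maximized state, so fall back to the geometry saved before
        // maximizing/fullscreening instead of "resizing" to 0x0.
        if (width == 0 && height == 0) {
            width = wl->saved_geometry.width;
            height = wl->saved_geometry.height;
        }
        // ...apply width/height as the new window size...
    }
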
Philip Langdale
5e3eb03ecf wayland: make the edge grab zone width user configurable
Rather than hard-coding the edge grab zone width, we can make it
user configurable. It seems worthwhile to have separate configs
for pointer and touch usage as the defaults should be different,
and a user might have both input methods in use.
2019-11-29 16:56:20 +08:00
Philip Langdale
4c179a27c2 wayland: add grab zone for resizing window with mouse
Today, we support resizing wayland windows when we detect a touch
event in a defined grab zone. As part of implementing
pseudo-decorations, we should have equivalent functionality for
mouse input. If we detect support for actual server-side decorations, we
will not activate the grab zone, as the decorations already provide this.
2019-11-29 16:56:20 +08:00
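
A rough sketch of the pointer grab-zone test; the grab-zone width is the
user-configurable value from the commit above, and the option/struct names
are assumptions:

    // Map a pointer position near the window border to resize edges.
    // Returns 0 if the pointer is in the interior (normal click/drag).
    static uint32_t resize_edges(struct vo_wayland_state *wl, int x, int y)
    {
        int e = wl->opts->edge_pixels_pointer;  // assumed option field
        uint32_t edges = 0;
        if (x < e)
            edges |= XDG_TOPLEVEL_RESIZE_EDGE_LEFT;
        if (x > wl->width - e)
            edges |= XDG_TOPLEVEL_RESIZE_EDGE_RIGHT;
        if (y < e)
            edges |= XDG_TOPLEVEL_RESIZE_EDGE_TOP;
        if (y > wl->height - e)
            edges |= XDG_TOPLEVEL_RESIZE_EDGE_BOTTOM;
        return edges;
    }

On a button press with a nonzero edge mask (and no server-side decorations),
the VO then starts an interactive resize via
xdg_toplevel_resize(wl->xdg_toplevel, wl->seat, serial, edges).
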
wm4
db3b5c9309 x11_common: don't use vo->opts directly
Use x11->opts instead of vo->opts. This doesn't matter currently, and
x11->opts is actually set to vo->opts. However, there's a chance that
either option access changes, or that the way backends integrate with
struct vo changes. This is just a preemptive change to make this less of
a mess, and it's generally a good idea to reduce accesses to struct vo
anyway.
2019-11-27 20:30:13 +01:00
wm4
c26e80d0fd command: shuffle some crap around
This is preparation to get rid of the option-to-property bridge
(mp_on_set_option). This is a pretty insane thing that redirects
accesses to options to properties. It was needed in the ever-ongoing
transition from something to... something else.

A good example for the need of this bridge is applying profiles at
runtime. This obviously goes through the config parser, but should also
make all changes effective, for which traditionally the property layer
is used.

There isn't much left that needs this bridge. This commit changes a
bunch of options (which also have a property implementation) to use
option change notifications instead. Many of the properties are still
left, but perform unrelated functions like OSD formatting.

This should be mostly compatible. There may be some subtle behavior
changes. For example, "hwdec" and "record-file" do not check for changes
anymore before applying them, so writing the current value to them
suddenly does something, while it was ignored before.

DVB changes untested, but should work.
2019-11-25 00:26:36 +01:00
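
For illustration, the rough shape of the option-change-notification pattern
this moves towards (names and signatures here are assumptions about mpv
internals, not verbatim code):

    // The option entry is tagged with an update flag instead of being
    // bridged to a property setter (flag name is illustrative):
    //   {"hwdec", ..., .flags = UPDATE_HWDEC},

    // The player core reacts to changed flags in one central callback:
    void option_changed(struct MPContext *mpctx, int changed_flags)
    {
        if (changed_flags & UPDATE_HWDEC)
            reinit_video_decoder(mpctx);   // hypothetical helper
    }
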
Niklas Haas
b31f2f6cb9 vo_gpu: fix infinite scaler reinit spam
Handling the window with this function makes no sense, since windows
and kernels are not the same thing and don't share the same option list.

The only reason it's done is to make sure the char* points at the static
string rather than the dynamically allocated one, which we can do
manually in this function. Rewrite a bit for clarity/quality.
2019-11-23 11:46:52 +01:00
Michael Forney
78963d1e66 video/out/bitmap_packer: Avoid empty initializer list 2019-11-18 16:50:21 +01:00
Michael Forney
235fabcfae video/out/vo_tct: Use octal escape sequence instead of non-standard \e 2019-11-18 16:50:21 +01:00
Michael Forney
bea582f383 video/out/gpu: Remove stray top-level ';' 2019-11-18 16:50:21 +01:00
Philip Langdale
a714ab0601 vo_gpu: hwdec_cuda: Reduce message level of errors while probing
We should only be printing errors that occur when not probing, to
avoid creating the impression that something is wrong - errors
during probing aren't a problem.
2019-11-17 09:44:32 -08:00
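
The usual pattern for this (sketch; the exact call sites differ):

    // Errors hit while probing a hwdec are expected and only interesting
    // at verbose level; outside of probing they are real errors.
    int level = hw->probing ? MSGL_V : MSGL_ERR;
    MP_MSG(hw, level, "CUDA failed: %s\n", err_str);
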
Philip Langdale
ba370e9599 vo_gpu: context_glx: Add X11 native resource
Surprisingly, we've managed to get this far without context_glx ever
adding the X11 display as a native resource. But with the recent change
to attempt to enable vdpau when using EGL, the hwdec now requires the
display to be added. So let's add it.
2019-11-16 15:35:32 -08:00
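
The change boils down to one call during GLX context init (sketch; the
surrounding variable names are assumptions):

    // Expose the X11 Display to hwdec interops (vdpau, vaapi) as a native
    // resource, like the EGL/X11 context already does.
    ra_add_native_resource(ctx->ra, "x11", vo->x11->display);
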
Dudemanguy
0d8a6c6984 wayland: use eglGetPlatformDisplay()
See aacc194. The same logic applies to Wayland. In fact, we already
require EGL 1.5 for wayland anyway, so it's better to do it right.
2019-11-16 14:23:07 -06:00
wm4
aacc1942fb x11: require EGL 1.5 and use eglGetPlatformDisplay()
eglGetDisplay() is a broken API, since it takes a windowing-specific
argument, yet is supposed to work for multiple APIs at the same time. On
Linux, it can take both a X11 "Display" and a "wl_display". Obviously
there is no way to specify what kind of display the argument is (it's
just a void*).

Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.

The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).

Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display to query the EGL version and extensions, and
you have a chicken-and-egg problem. It's very stupid. Maybe you could
jump through some more hoops to get something compatible, but fuck that.
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).

I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().

Speculative fix for #7154.

(Rant about EGL follows. Actually I deleted it.)
2019-11-16 20:55:03 +01:00
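
For reference, the EGL 1.5 entry point being switched to (a minimal sketch,
not mpv's actual code):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>   // EGL_PLATFORM_X11_KHR

    // With eglGetPlatformDisplay() the platform is stated explicitly, so
    // EGL no longer has to guess what kind of pointer the display is.
    EGLDisplay egl_display =
        eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
    if (egl_display == EGL_NO_DISPLAY) {
        // EGL too old / platform extension missing: mpv falls back to GLX.
    }
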
wm4
73c3dc0a7b vo_gpu: sync duplicated condition on peak computation
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".

This can be reproduced by playing an HDR video with --target-peak=99.

It's not clear how to fix this. Should pass_tone_map() be only invoked
if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.

Even if not correct, at least it gets rid of the blue shit.

Fixes: #7149
2019-11-16 19:02:36 +01:00
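
A sketch of the chosen fix (parameter and helper names are illustrative):
the caller decides once whether peak computation is in use, and passes that
decision down instead of letting both files re-derive it:

    // video.c: decide once whether peak detection runs (compute shader)...
    bool detect_peak = can_use_peak_detection(p);   // hypothetical helper

    // ...and tell the shader generator, so video_shaders.c cannot emit a
    // compute shader that video.c then runs as a fragment shader.
    pass_color_map(p->sc, src_csp, dst_csp, detect_peak);
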
dudemanguy
dcc3c2eb38 wayland: use hidpi-window-scale option 2019-11-12 01:00:08 +00:00
Philip Sequeira
85aa9635e0 build: fix compilation conditions for vaapi interop inits
This makes the condition for including each init match the condition for
compiling the file that defines it.

It's possible to e.g. HAVE_GL and HAVE_VAAPI without HAVE_VAAPI_EGL,
which resulted in "undefined reference to `vaapi_gl_init'" with the old
code.
2019-11-10 20:59:17 -08:00
wm4
35de8ea0a8 vo_gpu: yuv alpha is always full range
Probably. It's not like these pixel formats are formally specified -
FFmpeg added them because _some_ file format or decoder supports it, and
while that format/codec may define it precisely, the pixel format is
sort of disconnected and just an FFmpeg thing.

In any case, the yuva sample I had at hand uses the full range the
component data type can provide. The old code used the same "shifted"
range as for Y/U/V components, which must have been wrong.

This will not work correctly for packed YUVA formats, but fortunately
they matter even less.
2019-11-09 23:56:44 +01:00
wm4
cd8fd4b788 vo_gpu: context_x11egl: check eglGetConfigAttrib() for errors
Not sure why it assumes that it always succeeds (although generally it
won't fail).
2019-11-08 21:22:49 +01:00
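
Sketch of the checked call (the queried attribute is just an example):

    EGLint visual_id = 0;
    if (!eglGetConfigAttrib(egl_display, config,
                            EGL_NATIVE_VISUAL_ID, &visual_id)) {
        MP_ERR(ctx, "eglGetConfigAttrib() failed\n");
        goto fail;
    }
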
wm4
bcfabf40a4 img_format: remove some unneeded alpha flag handling
Don't know what this was for, but the result doesn't change.
2019-11-08 21:22:49 +01:00
wm4
1edb3d061b test: add dumping of img_format metadata
This is fragile enough that it warrants getting "monitored".

This takes the commented test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.

Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.

This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.

The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
2019-11-08 21:22:49 +01:00
wm4
1c8d2246bf vo_gpu: vdpau actually works under EGL
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.

I didn't know this. Nvidia didn't list this as extension in the EGL
context when I still used their GPUs.

Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
2019-11-07 22:53:13 +01:00
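
The GLX-specific call is replaced by the generic native-resource lookup
(sketch; assumes the same helper hwdec_vaapi.c uses):

    // Before: only worked with a current GLX context.
    //   Display *display = glXGetCurrentDisplay();
    // After: ask for the X11 display the VO (or libmpv client) registered.
    Display *display = ra_get_native_resource(hw->ra, "x11");
    if (!display)
        return -1;   // e.g. render API client without
                     // MPV_RENDER_PARAM_X11_DISPLAY
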
wm4
afbddbdf8f DOCS/contribute.md, zimg: remove 2 instances of an extraneous "s" 2019-11-07 22:53:13 +01:00
wm4
8a0929973d vo_gpu: unconditionally clear framebuffer on start of frame
For some reason, the first frame displayed on X11 with amdgpu and OpenGL
will be garbled. This is especially visible if the player starts,
displays a frame, but then still takes a while to properly start
playback.

With --interpolation, the behavior somehow changes (usually gets worse).
I'm not sure what exactly is going on, and the code in video.c is way
too abstruse. Maybe there is some slight possibility that a frame with
uncleared contents gets displayed, which somehow also corrupts another
frame that is displayed immediately after that.

If clear is unconditionally run, this somehow doesn't happen, and you
see a video frame. By any logic this shouldn't happen: a video frame
should always overwrite the background. So I can't exclude that this
is some sort of driver bug, or at least a very obscure interaction.

Clearing should be practically free anyway, so always do it.

Fixes: #7105
2019-11-06 22:42:44 +01:00
wm4
98352362ea img_format: remove some unused format flags
They were used at some point, but then fell into disuse. In general,
these old flags are all a bit fuzzy, so it's a good idea to remove them
as much as possible.

The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was
deprecated at some point, and the flag was removed from "pseudo
paletted" formats. I think mpv at one point changed its own flag from
AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was
deprecated, and it became unnecessary to allocate a palette for
non-paletted formats. (The one who deprecated it in FFmpeg was me, if
you were wondering.)

MP_IMGFLAG_PLANAR was used in command.c; use a relatively similar flag
as a replacement.
2019-11-03 23:18:35 +01:00
wm4
cfd6595386 vo_x11: accept zimg formats
This is slightly helpful for testing, and otherwise useless and without
consequence.

I'm not using the correct output format, and use IMGFMT_RGB0 as a
placeholder. This doesn't matter currently, as both sws and zimg support
this as output (and support any input for it). I'm doing this because
it's surprisingly tricky to get the correct output format at this point,
without digging deeper into x11 shit or refactoring parts of the VO. I
don't care enough about this.
2019-11-03 22:52:12 +01:00
wm4
a19571679f sws_utils: remove some unnecessary sws bug work around
Seems like this was needed in 2012. The comment indicates the bug was
fixed in ffmpeg git, so it's long gone.
2019-11-03 22:48:49 +01:00
wm4
67e17f1104 vd_lavc: don't keep packets for fallbacks if errors are tolerated
The user can raise the number of tolerated hardware decoding errors. On
the other hand, we have a static limit on packets that are "saved" for
fallback handling (and that's a good idea to avoid unbounded memory
usage). In this case, it could happen that the start of a file was fine
after a fallback, but after that buffered amount of data, it would
suddenly skip.

It's more useful to skip buffering entirely if the number of tolerated
decoding errors exceeds the fixed buffer.

(And also, I'm sure nobody gives a shit about this feature.)
2019-11-02 23:00:49 +01:00
wm4
0e1cfe0c42 vd_lavc: fix prepare_decoding() failure modes
prepare_decoding() returned a bool that was supposed to tell whether
decoding could work, or if something was fucked. After recent changes to
the decoder loop, this did not work anymore, and caused an endless loop.
Redo it, so it makes more sense. avctx being NULL (software fallback
initialization failed) now signals EOF. hwdec_failed needs to be handled
on send_packet() only, where it probably never happens anyway.

(Who was the idiot who made libavcodec have two entrypoints for
decoding? Oh right, it was me. PEBKAC.)
2019-11-02 22:44:39 +01:00
wm4
7bf31c51f2 vd_lavc: mention hw decoding if decoding fails in hwdec mode
Just so the user knows. Provides some context.
2019-11-02 22:38:08 +01:00
wm4
1bb726dedc vd_lavc: simplify fallback handling for full stream hw decoder
Shovel the code around to make the data flow slightly simpler (?). At
least there's only one send_packet function now. The old code had the
problem that send_packet() could be called even if there were queued
packets; due to sending the queued packets in the receive_frame
function, this should not happen anymore (the code checking for this
case in send_packet should normally never be called).

Untested with actual full stream hw decoders (none available here); I
created a test case by making hwaccel decoding fail.
2019-11-02 22:37:14 +01:00
wm4
dab588a4a2 vd_lavc: signal packet consumed in drop-all case
This is just a very special code path. This probably got stuck, now that
the previous commit returned the EAGAIN properly. Untested.
2019-11-02 21:38:19 +01:00
wm4
203fc6fe44 vd_lavc: change incorrect bool return type to int
Forgotten in commit 5d5fdb7. This failed to return the error code
properly. In particular, if the decoder rejected the packet, this was
not properly detected. Normally, this mattered only in specific cases.

Fixes: #7115
2019-11-02 21:35:48 +01:00
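
For illustration, why a bool return type silently eats AVERROR codes
(standalone example, not mpv code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <libavutil/error.h>

    static bool send_as_bool(void) { return AVERROR(EAGAIN); } // becomes 1
    static int  send_as_int(void)  { return AVERROR(EAGAIN); }

    int main(void)
    {
        // The bool version cannot distinguish "decoder rejected the packet,
        // try again" from any other nonzero result.
        printf("bool: %d  int: %d\n", send_as_bool(), send_as_int());
        return 0;
    }
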
wm4
6eb0dc5447 zimg: support subsampled chroma with non-aligned image sizes 2019-11-02 18:49:17 +01:00
wm4
d26c1c6814 zimg: stop using GBRP for RGB
Instead, use a YUV planar format. It doesn't matter, since we use the
format only internally and for "management" purposes. We're only
interested in the physical layout, not what colorspace FFmpeg "forcibly"
associates with it.

Also get rid of using the old and slightly sketchy mp_imgfmt_find()
function. Yep, the IMGFMT_RGB30 now "constructs" the planar format,
instead of using a pixfmt constant. Slightly inconvenient, tricky, and
fragile, but I like it, so bugger off.

This whole thing gets rid of some of the strange plane permutations that
were needed earlier.
2019-11-02 18:11:02 +01:00
wm4
20c1bd8a5e zimg: correct RGB30 order (probably)
According to the definition of the GL format, and the definition in
img_format.h, and the actual output by vo_gpu, the order of components
was probably wrong. It's exceedingly likely that the vo_drm format (for
which this was originally written) has the same layout, so this was
probably a bug from when the zimg wrapper code was refactored.
2019-11-02 17:52:13 +01:00
wm4
e58e650a97 video: mess with the filter chain to enable zimg IMGFMT_RGB30 output
This was too hardcoded to libswscale. In particular, IMGFMT_RGB30 output
is only possible with the zimg wrapper, so the context needs to be taken
into account (since this depends on the --sws-allow-zimg option
dynamically). This is still slightly risky, because zimg currently will
still fall back to swscale in some cases, such as when it refuses to
initialize the particular color conversion that is requested.
f_autoconvert.c could actually handle this better, but I'm too fucking
lazy right now, and nobody cares anyway, so go away, OK?
2019-11-02 17:50:32 +01:00
wm4
3e660f6164 vo_gpu: opengl: add support for IMGFMT_RGB30
This integrates it as "special" format, with no alpha component, as the
equivalent IMGFMT_RGB30 isn't meant to contain any.

Nothing can produce this format in the video chain yet, so the next
commits are needed to make this actually work.
2019-11-02 17:46:46 +01:00
wm4
00838fe0c3 zimg: make --zimg-fast=yes default
This is mostly just because of the odd RGB default gamma issue, which
shouldn't have any real impact. This also sets allow_approximate_gamma,
which I hope is fine for normal use cases.
2019-11-02 02:22:16 +01:00
wm4
c9d685f5d6 zimg: pass through Y plane when repacking nv12
Normally, the Y plane can just be passed directly to zimg, and only the
chroma plane needs to be (de)interleaved. It still needs a copy if the Y
pointer is not aligned, though. (Whether this is actually a problem
depends on the CPU and probably zimg's compiler.)

This requires deciding per plane whether the plane should go through the
repack buffer or not. This logic is active in non-nv12 cases, because
not doing so would require extra code (maybe 2 lines or so).
repack_align is now always called, even if it's planar->planar with all
input aligned, but it won't actually do anything in that case. The
assumption is that zimg won't change behavior if you pass a callback
that does nothing versus passing NULL as callback.
2019-11-02 02:18:24 +01:00
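
A sketch of the per-plane decision (the alignment constant is an assumption;
zimg wants well-aligned buffer pointers and strides):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define ZIMG_ALIGN 64   // assumed alignment requirement

    // A plane is handed to zimg directly only if it needs no repacking
    // (like the Y plane of nv12) and is sufficiently aligned; otherwise it
    // goes through the repack buffer.
    static bool plane_needs_repack_buffer(const void *ptr, ptrdiff_t stride,
                                          bool needs_repack)
    {
        bool aligned = !((uintptr_t)ptr % ZIMG_ALIGN) &&
                       !(stride % ZIMG_ALIGN);
        return needs_repack || !aligned;
    }
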
wm4
3b56f82965 zimg: add semi-planar repacker
This is for formats like nv12 (including p010, nv24, etc.). Might be
important for hardware decoding. Previously, this would have forced a
libswscale fallback.

The genericism makes this only slightly more complicated. The main
complication is due to the fact that mixing planar and packed stuff is
insane (thanks, Nvidia).

P010 output will actually happily set any of the six "padding" LSBs,
which are normally supposed to be 0 (for unpadded data there is P016).
Scaling happens with 16 bit precision. Not going to bother adding an
extra packer which zeros them out, or with shifting them in
packing/unpacking. Let's just hope nobody notices.
2019-11-02 01:10:22 +01:00
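
The core of such a repacker is just (de)interleaving the chroma plane; a
simplified 8-bit sketch (mpv's version is generic over component size and
works line by line through zimg's callbacks):

    #include <stddef.h>
    #include <stdint.h>

    // Split one line of interleaved UV (as in nv12) into planar U and V.
    static void deinterleave_uv(uint8_t *u, uint8_t *v,
                                const uint8_t *uv, size_t chroma_w)
    {
        for (size_t x = 0; x < chroma_w; x++) {
            u[x] = uv[2 * x + 0];
            v[x] = uv[2 * x + 1];
        }
    }

    // Packing is the reverse: uv[2*x+0] = u[x]; uv[2*x+1] = v[x];
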
wm4
c3cee4b9ec img_format: add function to find image format by layout
This is similar to mp_imgfmt_find(), but probably a bit saner. Used by
the next commit. The previous commit is required to map this
unambiguously between all formats.
2019-11-02 01:02:54 +01:00
wm4
bdd1e1e7ec img_format: add mp_regular_imgfmt.forced_csp field
As the code comment says, this is needed to disambiguate FFmpeg formats.
This struct only describes the "physical" layout of a format, while
FFmpeg also attaches part of the colorspace information to the format.
2019-11-02 01:00:32 +01:00
wm4
337ccc50dc img_format: add more explanations to component_pad field
Weird shit. I thought this was a clever way to elegantly handle two
cases at once, but maybe it's just confusing.
2019-11-02 00:55:55 +01:00
wm4
4b3666329e zimg: fix out of bounds memory accesses due to broken zmask
We've set all planes to the same zmask. But for subsampled chroma, the
zmask obviously needs to be smaller. This could lead to out of bounds
memory read and write accesses.

Move the align repacker to a single function, since this is now more
convenient.
2019-11-02 00:48:04 +01:00
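
A sketch of the corrected per-plane setup using the zimg C API's buffer mask
(the vertical-subsampling shift variable is illustrative):

    // The buffer mask encodes how many lines the callback buffer holds.
    // Subsampled chroma planes hold fewer lines than luma, so the mask must
    // be derived per plane instead of copied from the first plane.
    for (int p = 0; p < num_planes; p++) {
        unsigned lines = buffer_lines >> ys[p];  // ys[p] = vertical shift
        buf.plane[p].mask = zimg_select_buffer_mask(lines);
    }
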
wm4
02cb44ac8b x11: reduce log level for relatively uninteresting things
Normally nobody cares about the WM detection stuff etc., so log this
only at debug log levels.
2019-11-01 01:40:22 +01:00
wm4
04b4155582 zimg: add more packers/unpackers
This probably covers all packed formats which have byte-aligned
components, no alpha, and no subsampling. Everything else needs more
imgfmt metadata, or something even more complicated. Alpha is primarily
not supported, because zimg requires a second scaler instance for it,
and handling packing/unpacking with it is an unacceptable mess.
2019-10-31 22:43:58 +01:00
wm4
a7230dfed0 sws_utils, zimg: destroy vo_x11 and vo_drm performance
Raise swscale and zimg default parameters. This restores screenshot
quality settings (maybe) unset in the commit before. Also expose some
more libswscale and zimg options.

Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.

The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.

Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
2019-10-31 16:51:12 +01:00
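
Usage note: users of the software-conversion VOs can get the old, fast
defaults back by activating the profile mentioned above, e.g. by running
mpv with --profile=sw-fast, or by putting profile=sw-fast into mpv.conf.
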
wm4
2c43d2b75a screenshot, vo_image: use global swscale/zimg parameters
Lots of dumb crap to do... something. Instead of adding yet another dumb
helper, just use the main" sws_utils API in both callers. (Which,
unfortunately, has been duplicated for glorious webp screenshots,
despite the fact that webp is crap.)

Good part: can enable zimg for screenshots (as far as needed).
Bad part: uses "default" swscale parameters instead of HQ now.
2019-10-31 15:44:09 +01:00