Useful for libavfilter. Somewhat risky, because we can't ensure the
consistency of the unknown side data (but this is a general problem with
side data, and libavfilter filters will usually get it wrong too _if_
there are conflict cases).
Fixes #5569.
We must not create new references here, because mp_image_new_ref() is
called later and actually creates the new references (including doing
actual error checking). Blame C, not me.
the NSWindowButton enum was moved to be a member of NSWindow and renamed
to ButtonType in SDK 10.13. apparently that wasn't documented anywhere,
not even in the SDK changes document, and the official documentation
makes it look like it was always like this. the old NSWindowButton enum
is still around on SDK 10.13 though, or at least got a typealias, so we
will just use that.
The purpose of the new API is to make it usable with APIs other than
OpenGL, especially D3D11 and Vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL-style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.
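For illustration, a minimal client-side sketch of the EGL-style
parameter passing, using the OpenGL backend (error handling mostly
omitted):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    static void *get_proc_address(void *ctx, const char *name);

    static mpv_render_context *create_render_ctx(mpv_handle *mpv)
    {
        mpv_opengl_init_params gl_init = {
            .get_proc_address = get_proc_address,
        };
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
            {0}
        };
        mpv_render_context *ctx = NULL;
        if (mpv_render_context_create(&ctx, mpv, params) < 0)
            return NULL;
        // explicitly destroyed later with mpv_render_context_free()
        return ctx;
    }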
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
when resizing async it's possible that the layer, and the underlying gl
surface, are stretched on an aspect ratio change. to prevent that we do
an atomic resize (resize and draw at the same time). usually at most one
unique frame should be dropped, but depending on performance it's
possible that more are dropped.
the title bar is now within the window bounds instead of outside. same
as QuickTime Player. it supports several standard styles, two dark and
two light ones. additionally we have properly rounded corners now and
the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
By blocking the VT switcher signal in the VO thread we get fewer races
and other oddities.
This gets rid of tearing (at least for me) when VT switching with
--gpu-context=drm.
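A minimal sketch of the blocking part, assuming the switcher uses
SIGUSR1/SIGUSR2 as its release/acquire signals:

    #include <pthread.h>
    #include <signal.h>

    static void block_vt_signals(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);   // assumed VT release signal
        sigaddset(&set, SIGUSR2);   // assumed VT acquire signal
        // block them in the calling (VO) thread only, so they get
        // delivered to the thread running the VT switcher instead
        pthread_sigmask(SIG_BLOCK, &set, NULL);
    }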
crtc_setup gets called on VT reacquire as well as during normal setup.
When called during VT reacquire, p->front_buf might not be 0, so the
maths was wrong and could cause array OOB errors. Use a mathematically
correct (for negative numbers) modulo to always pick the farthest away
buffer (should work even for larger values of BUF_COUNT).
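The fix boils down to a modulo that is also correct for negative
dividends, e.g.:

    // C's % may return a negative result for a negative dividend;
    // this version always lands in [0, n)
    static int mod(int a, int n)
    {
        int r = a % n;
        return r < 0 ? r + n : r;
    }

    // picking the farthest away buffer then works for any front_buf:
    // back_buf = mod(front_buf - 1, BUF_COUNT);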
The VT switcher was being set up, but it was being neither polled nor
interrupted.
Insert wait_events and wakeup functions based on those from vo_drm, and
make drm_egl_swap_buffers return early if p->active isn't set.
This should get basic VT switching working, however there will likely
still be some random glitches. Switching between mpv and X11/weston is
unlikely to work satisfactorily until we can solve the problems with
drmSetMaster and drmDropMaster.
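The early return amounts to something like this (a sketch with
approximate names, not the actual context code):

    #include <stdbool.h>
    #include <EGL/egl.h>

    struct priv { bool active; EGLDisplay display; EGLSurface surface; };

    static void drm_egl_swap_buffers(struct priv *p)
    {
        if (!p->active)     // VT switched away: don't touch the display
            return;
        eglSwapBuffers(p->display, p->surface);
    }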
This introduces the option --drm-format (currently used only by
context_drm_egl, vo_drm implementation is pending) which allows you to
pick between an xrgb8888 and an xrgb2101010 visual for --gpu-context=drm.
Requires a recent mesa (18.0.0_rc4 or later) to work.
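A sketch of the mapping (the option enum is hypothetical; the GBM
formats are the real ones from gbm.h):

    #include <stdint.h>
    #include <gbm.h>

    enum drm_format { DRM_FMT_XRGB8888, DRM_FMT_XRGB2101010 };

    static uint32_t gbm_format_for(enum drm_format fmt)
    {
        switch (fmt) {
        case DRM_FMT_XRGB2101010: return GBM_FORMAT_XRGB2101010;
        case DRM_FMT_XRGB8888:
        default:                  return GBM_FORMAT_XRGB8888;
        }
    }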
This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
even though the fullscreen animation has a shorter duration than the
system wide animation (space sliding effect) there are still cases where
it takes longer, eg performance issues (especially on init). furthermore
the final size of the animation is usually different from the actual
fullscreen size because of aspect ratio differences. the actual resize to
fullscreen is done automatically by cocoa itself when the actual
transition to fullscreen happens (system event). so it could happen that
the last animation resize happened after the actual resize to fullscreen
leading to a wrongly sized frame after entering fullscreen. to prevent
this we cancel the animation when entering fullscreen, we always set the
proper frame size when in fullscreen and discard any other frame sizes,
and to prevent some performance problems on init we push entering
fullscreen to the end of the main queue to execute it when most of the
init routines are done.
Fixes #5525
on live resize, eg async resize, the layer's bounds size is not in sync
with the actual surface size. this led to a wrongly sized frame and a
perceived flicker. get and use the actual surface size instead.
Mobius isn't well-defined for sig_peak <= 1.0. We can solve this by just
soft-clamping sig_peak to 1.0. Although, in this case, we can just skip
tone mapping altogether since the limit of mobius as sig_peak -> 1.0 is
just a linear function.
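A C sketch of the guard, following the shape of the mobius curve the
renderer uses (x and sig_peak in relative scale; j is the linear knee):

    static float tone_map_mobius(float x, float j, float sig_peak)
    {
        // mobius is ill-defined for sig_peak <= 1.0, and its limit as
        // sig_peak -> 1.0 is the identity, so skip tone mapping
        if (sig_peak <= 1.0f)
            return x;
        float a = -j * j * (sig_peak - 1.0f)
                  / (j * j - 2.0f * j + sig_peak);
        float b = (j * j - 2.0f * j * sig_peak + sig_peak)
                  / (sig_peak - 1.0f);
        if (x <= j)
            return x;   // below the knee the curve is linear anyway
        return (b * b + 2.0f * b * j + j * j) / (b - a)
               * (x + a) / (x + b);
    }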
Based on testing with real-world non-HDR BT.2020 clips, clipping the
color space looks better than attempting to gamut map using a tone
mapping shader that's (by now) optimized for HDR content.
If anything, we'd have to develop a separate gamut mapping shader that
works in LCh space.
When pixels are non-square, the appropriate value of vo->monitor_par is
necessary to determine the destination rectangle, which in turn tells
how to scale the video along the x and y axis. Before this commit, the
drm driver only used --monitorpixelaspect. For example, to play a video
with the right aspect on a 4:3 screen and 640:400 pixels,
--monitorpixelaspect=5:6 had to be given.
With this commit, vo->monitor_par is determined from the size of the
screen in pixels and the --monitoraspect parameter. The latter is
usually easier to determine than --monitorpixelaspect, since it is
simply the proportion between the width and the height of the screen,
in most cases 16:9 or 4:3. If --monitoraspect is not given,
--monitorpixelaspect is used if given; otherwise the pixel aspect is
assumed to be 1:1.
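As a worked example of the relation being used: display aspect =
(width_px * PAR) / height_px, so PAR = monitoraspect * height_px /
width_px. For the 4:3 screen at 640x400 above, (4/3) * 400 / 640 = 5/6,
matching the old --monitorpixelaspect=5:6. As a sketch:

    static double monitor_par(double monitoraspect, int w_px, int h_px)
    {
        if (monitoraspect <= 0)
            return 1.0;     // no --monitoraspect: assume square pixels
        return monitoraspect * h_px / (double)w_px;
    }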
This solves a number of problems simultaneously:
1. When outputting HLG, this allows tuning the OOTF based on the display
characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
controlling the output brightness directly.
Closes #5521
The HLG OOTF is defined as a one-parameter family of OOTFs depending on
the display's peak luminance. With the preceding change to OOTF scale
and handling, we no longer have any issues with outputting values in
whatever signal range we need.
So as a result, it's easy for us to support a tunable OOTF which may
(drastically) alter the display brightness. In fact, this is also the
only correct way to do it, because the HLG appearance depends strongly
on the OOTF configuration. For the OOTF, we consult the mastering
display's tagging (via src.sig_peak). For the inverse OOTF, we consult
the output display's target peak.
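For reference, a sketch of the one-parameter family as BT.2100 defines
it (peak in nits; the renderer works in its own normalized scale, but
the shape is the same):

    #include <math.h>

    // HLG system gamma as a function of nominal display peak luminance
    static float hlg_gamma(float peak_nits)
    {
        return fmaxf(1.0f, 1.2f + 0.42f * log10f(peak_nits / 1000.0f));
    }

    // OOTF: scale linear scene light by luma^(gamma - 1)
    static void hlg_ootf(float rgb[3], float luma, float peak_nits)
    {
        float mult = powf(luma, hlg_gamma(peak_nits) - 1.0f);
        for (int i = 0; i < 3; i++)
            rgb[i] *= mult;
    }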
The primary need for this change is that the OOTF was incorrectly
scaled, because the application of the OOTF can itself change the
required normalization peak. (Plus, an oversight in pass_inverse_ootf
meant we forgot to normalize at the end of it.)
The linearize/delinearize functions still normalize the scale since it's
used in a number of places throughout gpu/video.c, but the color
management function now converts to absolute scale right away, instead
of in an awkward way inside the tone mapping branch. The OOTF functions
now work in absolute scale only.
In addition, minor changes have been made to the way normalization is
handled for tone mapping - we now divide out the dst_peak *after* peak
detection, in order to make the scale of the peak detection buffer
consistent even if the dst_peak were to (hypothetically) change
mid-stream. In theory, we could also do this for desaturation, but doing
the desaturation before tone mapping has the advantage of preserving
much more brightness than the other way around - and even mid-stream
changes are not that drastic here.
Finally, some preparation work has been done for allowing the user to
customize the `dst.sig_peak` in the future.
drawing off-screen failed because we didn't have a valid context. the
problem is we force off-screen drawing because the CAOpenGLLayer refuses
to draw anything while being off-screen. set the current context before
starting to draw anything off-screen.
Fixes #5530
Coverity complained about the redundant init of hratio etc. - just
remove that and merge declaration/init of these variables. Also the
first double cast in each expression is unnecessary.
the CVDisplayLinkSetOutputHandler function introduced with 10.11 is
broken on that very same version of the OS, which caused our render loop
to never start. fall back to the old display link callback on 10.11.
for reference, the radar: http://www.openradar.me/26640780
Fixes #5527
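the fallback is the old C callback API (sketch; the real callback just
kicks our render loop):

    #include <CoreVideo/CoreVideo.h>

    static CVReturn link_cb(CVDisplayLinkRef link, const CVTimeStamp *now,
                            const CVTimeStamp *out, CVOptionFlags flags,
                            CVOptionFlags *flags_out, void *ctx)
    {
        // schedule a render for the upcoming display refresh (sketch)
        return kCVReturnSuccess;
    }

    static void start_link(CVDisplayLinkRef link, void *ctx)
    {
        CVDisplayLinkSetOutputCallback(link, link_cb, ctx);
        CVDisplayLinkStart(link);
    }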
There is now a better way. Reading the front buffer was always a
hack. The new code via VOCTRL_SCREENSHOT renders it into an FBO, which
does not come with the disadvantages of reading the front buffer (like
not being supported by GLES, possibly black regions due to overlapping
windows on some systems).
For now keep VOCTRL_SCREENSHOT_WIN on the VO level, because there are
still some lesser VOs and backends that use it.
This should be helpful for the new OSX Cocoa backend, which uses
opengl-cb internally. Since it comes with a behavior change that could
possibly interfere with libmpv/opengl_cb users, we mark it as an
explicit API change.
This includes codec/muxer/demuxer iteration (different iteration
function, registration functions deprecated), and the renaming of
AVFormatContext.filename to url (plus making it a malloced string).
Libav doesn't have the new APIs yet, so building against Libav will
break. I hope they will add the new APIs too.
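E.g. the new iteration API looks like this:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static void list_codecs(void)
    {
        void *iter = NULL;
        const AVCodec *codec;
        // av_codec_iterate() replaces the deprecated av_codec_next()
        while ((codec = av_codec_iterate(&iter)))
            printf("%s\n", codec->name);
    }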
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.
Untested.
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always
enabled deinterlacing and hoped it worked. But it's likely these people
were screwed by this setting anyway. The new behavior is less tricky
and easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
This allows the new GPU screenshot functionality introduced in
9f595f3a80 to work with the D3D11 backend. It replaces the old window
screenshot functionality, which was shared between D3D11 and ANGLE. The
old code can be removed, since it's not needed by ANGLE anymore either.
Similar spirit to edb4970ca8. check_gl_features() has a confusing
early-return. This also adds compute_hdr_peak to the list of options
that is copied to the dumb-mode options struct, since it seems to make a
difference. Otherwise it would be impossible to disable HDR peak
detection in dumb mode.
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.
- awful opengl performance with a non layer backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non layer backed context
- bad performance in fullscreen because of system optimisations
all these problems were caused by using a normal opengl context, which
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed outright could
at least be properly worked around.
this has all the features our old backend has, sans the wid embedding,
the possibility to disable the automatic GPU switching, and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
second is technically not possible atm because we have to pre-allocate
our opengl context at a time the config isn't read yet, so we can't get
the needed property. the third one is a bit tricky because of
deadlocking and the need to keep it in sync; hopefully i can work around
that in the future.
this also has at least one additional feature or eye-candy. a properly
working fullscreen animation with the native fs. also, since this is a
direct port of the parts of the old backend that could be used, though
with adaptations and improvements, the code looks a lot cleaner and is
easier to understand.
some credit goes to @pigoz for the initial swift build support which
i could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
early flushing only caused problems on macOS, including:
- performance problems and huge amount of dropped frames
- problems with playing back video files with fps close to the display
refresh rate
- rendering at twice the rate of the video fps
- not properly detected display refresh rate
we always deactivate any early flush for macOS to fix these problems.
This commit allows for video to be shown with the right aspect even when
pixels are not square in the selected drm mode. For example, if drm mode
5 is "640x400", the right aspect on a 4:3 monitor is obtained with:

    mpv --vo=drm --drm-mode=5 --monitorpixelaspect=5:6 ...
Other vo's seem to make this parameter change the size of the window,
but in the drm vo the window size is fixed, being as large as the
screen.
The last image is stored in vo->priv->last_input to be used when
redrawing a frame is necessary (control: VOCTRL_REDRAW_FRAME). At the
beginning it is NULL, so a redraw request has no effect since
draw_image ignores calls with image=NULL.
When using --force-window the size of the image may change without the
vo structure being re-created. Before this commit, the size of
vo->priv->last_input could become inconsistent with the cropping
rectangle vo->priv->src_rc, which could trigger an assert in
mp_image_crop_rc(). Even if it did not, the last image of a video
remained on the screen when the next file in the playlist had no video
(e.g., it was an mp3 without an embedded cover).
This commit deallocates and resets to NULL the image
vo->priv->last_input when reconfiguring video.
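The reset itself is simple; a sketch (mpv images are talloc-allocated,
and the priv layout is as described above):

    struct mp_image;    // mpv's image type

    struct priv { struct mp_image *last_input; /* ... */ };

    static void reset_last_input(struct priv *p)
    {
        talloc_free(p->last_input);     // mpv talloc; NULL is a no-op
        p->last_input = NULL;           // redraws now draw nothing
    }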
Before this commit, the drm vo drew the osd over the scaled image, and
then copied the result onto the framebuffer, shifted. This made the
frame centered, but forced the osd to be only as large as the image.
This was inconsistent with other vo's, covered the image with the
progress indicator even when a black band was at the top of the screen,
made the progress indicator wrap on narrow videos, etc.
The change is to always use an image as large as the screen. The frame
is copied scaled and shifted to it, and the osd drawn over it. The
result is finally copied to the framebuffer without any shift, since it
is already the same size as the framebuffer.
Technically, cur_frame is an image as large as the screen and
cur_frame_cropped is a dummy reference to it, cropped to the size of
the scaled video. This way, copying the scaled image to
cur_frame_cropped positions the image in the right place in cur_frame,
which can then have the osd added to it and copied to the framebuffer.
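In code terms, roughly (mp_image_new_dummy_ref() and mp_image_crop_rc()
are the real helpers; the priv fields are sketched):

    struct priv {
        struct mp_image *cur_frame;     // as large as the screen
        struct mp_rect dst;             // rect of the scaled video
    };

    static struct mp_image *make_cropped_view(struct priv *p)
    {
        // zero-copy view into cur_frame: writing the scaled video into
        // it lands at the right offset inside the screen-sized image
        struct mp_image *view = mp_image_new_dummy_ref(p->cur_frame);
        mp_image_crop_rc(view, p->dst);
        return view;
    }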
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.
The logic for when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking at whether the source video format has
more than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB64. RGB48 has 3 components (6 bytes per pixel) and is usually not
supported by GPUs for rendering, so we have to use RGBA64, which forces
an alpha channel.
This will break for users who use --target-trc and similar options.
I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.
This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
The re-ordering of commits e3d93fd and 0870859 ended up swallowing the
change which made the HDR tone mapping algorithm actually check for
RA_CAP_NUM_GROUPS support.
The major changes are as follows:
1. Use `uint32_t` instead of `unsigned int` for the SSBO size
calculation. This doesn't really matter, since a too-big buffer will
still work just fine, but since `uint` is a 32-bit integer by
definition this is the correct way to do it.
2. Pre-divide the frame_sum by the num_wg immediately at the end of a
frame. This change was made to prevent overflow. At 4K screen size,
this code is currently already at serious risk of overflow, especially
once I started playing with longer averaging sizes. Pre-dividing this
out makes it just about fit into 32-bit even for worst-case PQ
content. (It's technically also faster and easier this way, so I
should have done it to begin with). Rename `frame_sum` to `frame_avg`
to clearly signal the change in semantics.
3. Implement a scene transition detection algorithm. This basically
compares the current frame's average brightness against the
(averaged) value of the past frames. If it exceeds a threshold, which
I experimentally configured, we reset the peak detection SSBO's state
immediately - so that it just contains the current frame. This
prevents annoying "eye adaptation"-like effects on scene transitions
(a CPU-side sketch follows this list).
4. As a result of the previous change, we can now use a much larger
buffer size by default, which results in a more stable and less
flickery result. I experimented with values between 20 and 256 and
settled on the new value of 64. (I also switched to a power-of-2
array size, because I like powers of two)
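A CPU-side sketch of the heuristic in (3); the real state lives in the
compute shader's SSBO, and the threshold is the experimentally chosen
one:

    #include <math.h>

    #define PEAK_HIST_SIZE 64   // the new power-of-two buffer size

    static float hist[PEAK_HIST_SIZE];
    static int hist_len, hist_pos;

    static void push_frame_avg(float frame_avg, float threshold)
    {
        float sum = 0;
        for (int i = 0; i < hist_len; i++)
            sum += hist[i];
        float avg = hist_len ? sum / hist_len : frame_avg;
        // scene transition: the new frame deviates too much from the
        // averaged past frames, so reset the state to just this frame
        if (fabsf(frame_avg - avg) > threshold)
            hist_len = hist_pos = 0;
        hist[hist_pos] = frame_avg;
        hist_pos = (hist_pos + 1) % PEAK_HIST_SIZE;
        if (hist_len < PEAK_HIST_SIZE)
            hist_len++;
    }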