This was obviously nonsense, and a previous "fix" to this code was
nonsense too. What is really needed here is temporarily dropping the
lock while calling destroy_vs()/reinit_vs().
Fixes #5470.
Unknown frames were not freed properly. Although this doesn't really
happen anyway, because we're never going to feed audio frames to a video
filter chain. Since it's theoretically possible, and all other filters
handle this consistently, fix it anyway.
Properly initialize the output frame parameters other than image format
and size. This includes colorspace hints. (We're still not reading them
back from VapourSynth if it sets them, though. Usually it doesn't
anyway.)
VapourSynth can't pass through timestamps, only frame durations. So we
need to remember the timestamp of the very first frame passed to it.
This was accidentally set to 0 instead of NOPTS on init, so inserting
the filter during playback could show strange behavior.
Might be part of #5470.
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.
One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().
Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.
Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.
I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
This is preparation for a change in vd_lavc.c: it should not have to
access the demuxer (to pass along closed captions), so the idea is to
make them part of mp_image, and to let the layer above vd_lavc propagate
the buffer.
Don't bother with preserving them for mp_image->AVFrame, because we
don't need this.
Reduce the trivial but still annoying code duplication in
mp_image_new_ref(), which has to create new buffer references and deal
with possible failure of creating them. The tricky part is that if
creating a reference fails, we must set the target to NULL, so that
unreferencing the failed new mp_image reference does not release the
buffer references of the original mp_image. For the same reason, the
code can't jump to error handling when it can't create a new reference,
and has to set a flag instead.
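For illustration, the pattern looks roughly like this (the struct and helper names are made up for the sketch and are not the actual mp_image code):

    #include <stdbool.h>
    #include <stddef.h>

    #define NUM_BUFS 4

    struct buf;                                  /* opaque refcounted buffer */
    struct buf *buf_new_ref(struct buf *b);      /* returns NULL on failure */

    struct image {
        struct buf *bufs[NUM_BUFS];
        /* ...format, size, and other metadata... */
    };

    void image_unref(struct image *img);         /* unrefs all img->bufs[] */
    struct image *image_alloc_shallow_copy(const struct image *src);

    struct image *image_new_ref(const struct image *src)
    {
        /* The shallow copy still points at src's buffers. */
        struct image *dst = image_alloc_shallow_copy(src);
        if (!dst)
            return NULL;
        bool fail = false;
        for (int n = 0; n < NUM_BUFS; n++) {
            if (!dst->bufs[n])
                continue;
            /* On failure this leaves dst->bufs[n] as NULL, so a later
             * image_unref(dst) cannot release src's reference. We also must
             * not jump to cleanup here, or the remaining bufs[] would still
             * alias src; remember the failure and keep going instead. */
            dst->bufs[n] = buf_new_ref(dst->bufs[n]);
            if (!dst->bufs[n])
                fail = true;
        }
        if (fail) {
            image_unref(dst);   /* releases only the refs created above */
            return NULL;
        }
        return dst;
    }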
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that can not be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.
The RA_CAP_FRAGCOORD checks apply to dumb mode as well, but they were
after the check for dumb mode, which returns early, so they never ran.
Fixes #5436
Using vdpau will allocate additional textures for the reinterleaving
step, which uninit_rendering() will free. This is a problem because the
hwdec image remains mapped when reinitializing, so the reinterleaving
textures are turned into dangling pointers. Fix this by freeing the
reinterleave textures on full uninit instead.
Fixes #5447.
I found that at least for mjpeg streams, FFmpeg will set packet pts/dts
anyway. The mjpeg raw video demuxer (along with some other raw formats)
has a "framerate" demuxer option which defaults to 25, so all mjpeg
streams will be played at 25 FPS by default.
mpv doesn't like this much. If AVFMT_NOTIMESTAMPS is set, it prints a
warning that might show a bogus FPS value for the assumed framerate.
The code was originally written with the assumption that FFmpeg would
not set pts/dts for such formats, but since it does, the printed
estimated framerate will never be used. --fps will also not be used by
default in this situation.
To make this hopefully less confusing, explicitly state the situation
when the AVFMT_NOTIMESTAMPS flag is set, and give instructions on how to
work around it.
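For illustration, such a check amounts to roughly the following (the warning text and the plain fprintf logging are made up; the real message and code live in the demuxer):

    #include <libavformat/avformat.h>
    #include <stdio.h>

    static void warn_if_no_timestamps(const AVInputFormat *fmt)
    {
        /* AVFMT_NOTIMESTAMPS: the container itself carries no timestamps,
         * so FFmpeg makes them up from a default framerate. */
        if (fmt->flags & AVFMT_NOTIMESTAMPS) {
            fprintf(stderr, "This format is marked by FFmpeg as having no "
                            "timestamps; the reported framerate is made up. "
                            "Override it with a demuxer framerate option or "
                            "--fps if needed.\n");
        }
    }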
Also, remove the warning in dec_video.c. We don't know what FPS it's
going to assume anyway. If there are really no timestamps in the stream,
it will trigger our normal missing pts workaround. Add the assumed FPS
there.
In theory, we could just clear packet timestamps if AVFMT_NOTIMESTAMPS
is set, and make up our own timestamps. That is non-trivial for advanced
video codecs like h264, so I'm not going there. For seeking and
buffering estimation the situation thus remains half-broken.
This is a mitigation for #5419.
It was actually already implemented as ta_dup_ptrtype(), but that seems
like a clunky name. Also we still use the talloc_ names throughout the
source, and I'd rather use an old name instead of mixing inconsistent
naming conventions.
The only purpose of mp_sws_set_from_cmdline() is to respect the --sws-
command line options. Instead of forcing callers to get the option
struct containing these, let callers pass mpv_global, and get it from
the option core code directly. This avoids minor annoyances later on.
FFmpeg has its own rather "special" image pools (AVHWFramesContext)
specifically for hardware decoding. So it's not really practical to use
our own pool implementation. Add these helpers, which make it easier to
use FFmpeg's code in mpv.
This fixes that AVFrames passing through libavfilter (such as with
--lavfi-complex) implicitly stripped some fields. I'm not actually sure
what to do with the mp_image_params.color.light field here (what happens
if the colorspace changed?) - there is no equivalent in AVFrame or
FFmpeg at all.
It did not affect the old --vf code, because it doesn't allow
libavfilter to change the metadata.
Also log the .light field in verbose mode.
DR (direct rendering) works by having the decoder decode into the GPU
staging buffers, instead of copying the video data on texture upload. We
did this even for formats unsupported by the GPU or the renderer. This
"worked" because the staging memory is untyped, and the video frame was
converted by libswscale to a supported format, and then uploaded with a
copy using the normal non-DR texture upload path.
Even though it "works", we don't gain anything from using the staging
buffers for decoding, since we can't use them for upload anyway. Also,
staging memory might be potentially limited (what really happens is up
to the driver). It's easy to avoid, so just skip it in these cases.
The check_gl_features(p) call here checks whether dumb mode can be used.
It uses the field use_integer_conversion, which is set _after_ the call
in the same function. Move check_gl_features() to the end of the
function, when use_integer_conversion is finally set.
Fixes that it tried to use bilinear filtering with integer textures. The
bug disabled the code that is supposed to convert it to non-integer
textures.
This segfaults otherwise. The conditional is needed to break a circular
dependency (gl_init depends on mpgl_load_functions which depends on
recreate_dispmanx which calls gl_ctx_resize).
Fixes #5398
Remove the max_count creation parameter, because it's pointless and
rarely ever did anything. Add a talloc parent parameter instead (which
is something completely different, but convenient, and all callers need
to be changed anyway).
Instead of clearing the pool when the now removed maximum is reached,
clear it on image parameter changes instead.
If feed_packet() ended with DATA_WAIT, the player should have gone to
sleep, until the demuxer wakes it up again when there is new data. But
the call to read_frame() unconditionally overwrote this status code, so
it never waited. The consequence was that the core burned CPU by
effectively polling the demuxer status, which was noticeable especially
when seeking in network streams (since seeking is async, decoders will
start out with having to wait for network).
Regression since commit 33e5755c.
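The intended control flow, as a sketch with placeholder signatures (the real feed_packet()/read_frame() functions take player state and differ in detail):

    struct decoder;

    enum data_status { DATA_OK, DATA_WAIT, DATA_AGAIN, DATA_EOF };

    enum data_status feed_packet(struct decoder *dec);
    enum data_status read_frame(struct decoder *dec);

    static enum data_status decode_step(struct decoder *dec)
    {
        enum data_status st = feed_packet(dec);
        /* The bug: read_frame()'s status unconditionally replaced this one.
         * If feeding ended with DATA_WAIT, it must be reported as-is so the
         * core actually sleeps until the demuxer wakes it up. */
        if (st == DATA_WAIT)
            return st;
        return read_frame(dec);
    }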
This enables DXVA2 hardware decoding with ra_d3d11. It should be useful
for Windows 7, where D3D11VA is not available. Images are transferred
from D3D9 to D3D11 using D3D9Ex surface sharing[1].
Following Microsoft's recommendations, it uses a queue of shared
surfaces, similar to Microsoft's ISurfaceQueue. This will hopefully
prevent surface sharing from impacting parallelism and allow multiple
D3D11 frames to be in-flight at once.
[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554.aspx
In a lost device scenario, resize() will fail and p->backbuffer will be
NULL. We can't recover from lost devices yet, but we should still check
for a NULL backbuffer in start_frame() rather than crashing.
Also remove a NULL check for p->swapchain. This was a red herring, since
p->swapchain never becomes NULL in an error condition, but p->backbuffer
actually does.
This should fix the crash in #5320, but it doesn't fix the underlying
reason for the lost device (which is probably a driver bug.)
Previously, mpv would attempt to use a BGRA swapchain in the hope that
it would give better performance, since the Windows desktop is also
composited in BGRA. In practice, it seems like there is no noticeable
performance difference between RGBA and BGRA swapchains, and BGRA
swapchains cause trouble with a42b8b1142, which attempts to use the
swapchain format for intermediate FBOs, even though D3D11 does not
guarantee BGRA surfaces will work with UAV typed stores.
The old code always tried to read a new packet first. Only once that was
read, it tried to retrieve new video or audio
frames the decoder might already have decoded.
Change this to strictly read frames from the decoder until it signals
that it wants a new packet, and only then read and feed a new packet.
This is in theory nicer, follows the libavcodec recommended data flow,
and reduces the minimum latency by 1 frame.
This merely requires switching the order in which those calls are done.
Normally, the decoder will return only 1 frame until a new packet is
required. If we would just feed it 1 packet, return DATA_AGAIN, and wait
until the next frame is decoded, we would run the playloop 1 time too
often for no reason (which is fine but might have some overhead). To
avoid this, try to read a frame again after possibly feeding a packet.
For this reason, move the feed and read code each into its own function,
instead of merely moving the code.
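The new order, expressed with the libavcodec send/receive API (a simplified sketch; the actual code tracks more state and handles EOF/flush):

    #include <libavcodec/avcodec.h>

    /* Returns 1 if *frame now holds a decoded frame, 0 if more input is
     * needed, or a negative AVERROR on EOF/errors. */
    static int decode_step(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame)
    {
        /* First drain frames the decoder may already have. */
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret >= 0)
            return 1;
        if (ret != AVERROR(EAGAIN))
            return ret;
        /* Decoder wants input: feed the next packet... */
        ret = avcodec_send_packet(avctx, pkt);
        if (ret < 0 && ret != AVERROR(EAGAIN))
            return ret;
        /* ...and immediately try to read a frame again, so the playloop
         * doesn't run one extra, pointless iteration per frame. */
        ret = avcodec_receive_frame(avctx, frame);
        if (ret >= 0)
            return 1;
        return ret == AVERROR(EAGAIN) ? 0 : ret;
    }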
The audio and video code for this particular thing is basically
duplicated. The idea is to unify them one day, so make the change to
both. (Doing this for video is the real motivation for this change, see
below.)
The video code change is slightly more complicated, because we have to
care about the framedrop counting (which is just a heuristic, but for
now considered better than nothing, and possibly considered required to
warn the user of framedrops happening - maybe).
Apparently this change helps with stalling streams on Android with the
mediacodec wrapper and mpeg2 decoder implementations which deinterlace on
decoding (and return 2 frames per packet).
Based on an idea and observations by tmm1.
Uses the EGL width/height by default when the user fails to set
the android-surface-width/android-surface-height options.
This means the vo-resize command is optional, and does not need to
be implemented on android devices which do not support rotation.
Signed-off-by: Aman Gupta <aman@tmm1.net>
Apparently some Intel drivers have a bug where copying from staging
buffers to constant buffers does not work. We used to keep a copy of the
buffer data in a staging buffer to enable partial constant buffer
updates. To work around this bug, keep the copy in talloc-allocated
system memory instead.
There doesn't seem to be any noticeable performance difference from
keeping the copy in system memory. Our cbuffers are probably too small
for it to matter anyway.
See also: https://crbug.com/593024
Fixes #5293
This fixes resuming playback of certain broken h264 files encoded by x264. See
FFmpeg commit 840b41b2a643fc8f0617c0370125a19c02c6b586 about the x264
bug itself.
Normally, the unregistered user data SEI (that contains the x264 version
string) is informational only. But libavcodec uses it to work around an
x264 bug, which was recently fixed in both libavcodec and x264. The fact
that both encoder and decoder were buggy is the reason that it was not
found earlier, and there are apparently a lot of files around created by
the broken encoder. If libavcodec sees the SEI, this bug can be worked
around by using the old behavior.
If you resume a file with mpv (i.e. seeking when the file loads),
libavcodec never sees the first video packet. Consequently it has to
assume the file is not broken, and never applies the workaround,
resulting in garbage being played.
Fix this by always feeding the first video packet to the decoder on
init, and then flushing the codec (to avoid that an unwanted image is
output). Flushing the codec does not remove info such as the x264
version. We also abuse the fact that the first avcodec_send_packet()
always pushes the frame into the decoder (so we don't have to trigger
the decoder by requesting an output frame).
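Sketched with the plain libavcodec API (not the actual vd_lavc.c code), the workaround looks roughly like this:

    #include <libavcodec/avcodec.h>

    static void prime_with_first_packet(AVCodecContext *avctx, AVPacket *first_pkt)
    {
        /* The first avcodec_send_packet() always pushes the packet into the
         * decoder, so no avcodec_receive_frame() call is needed here. */
        if (avcodec_send_packet(avctx, first_pkt) < 0)
            return;
        /* Flush so no unwanted frame is output later; the parsed x264
         * version info survives the flush. */
        avcodec_flush_buffers(avctx);
    }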
Technically, the user could just use --vd-lavc-o with the same result.
But I find it better to make this an explicit option, so we can document
the ups and downs, and also avoid setting it for non-h264.
This means that we now explicitly set an interval of 1. Although that
should be the EGL default, some drivers could possibly ignore this
(unconfirmed). In any case, this commit also allows disabling vsync, for
users who want it.
Crashed when no vdpau device was loaded. Also there was a mistake of not
setting p->ctx, which broke software surface input mode. This was not
found before, because p->ctx is not needed for anything else.
Fixes #5294.
A release has been made, so drop options deprecated for that release.
Also drop some options which have been deprecated a much longer time
before.
Also fix a typo in client-api-changes.rst.
The queue family index and the queue info index are not necessarily the
same, so we're forced to do a check based on the queue family index
itself.
Fixes #5049
A vulkan validation layer update pointed out that this was wrong; we
still need to use the access type corresponding to the stage mask, even
if it means our code won't be able to skip the pipeline barrier (which
would be wrong anyway).
In addition to this, we're also not allowed to specify any source
access mask when transitioning from top_of_pipe, which doesn't make any
sense anyway.
Async compute in particular seems to cause problems on some drivers, and
even when supported, the benefits are not that massive from the tests I
have seen, so it's probably safe to keep off by default.
Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
This gets confused by e.g. SPARSE_BIT on the TRANSFER_BIT, leading to
situations where "more specialized" is ambiguous and the logic breaks
down. So to fix it, only compare the subset we care about.
blit() implies scaling, copy() is the equivalent command to use when the
formats are compatible (same pixel size) and the rects have the same
dimensions.
This allows RAs with support for non-opaque FBO formats to use a more
appropriate FBO format for the output tex, possibly enabling a more
efficient blit operation.
This requires distinguishing between real formats (which can be used to
create textures) and fake formats (e.g. ra_gl's FBO hack).
On AMD devices, we only get one graphics pipe but several compute pipes
which can (in theory) run independently. As such, we should prefer
compute shaders over fragment shaders in scenarios where we expect them
to be better for parallelism.
This is amusingly trivial to do, and actually improves performance even
in a single-queue scenario.
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:
1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs
Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.
Our deadlock-avoidance strategy is stupidly simple: Just flush the
command every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.
On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
This is especially interesting for vulkan since it allows completely
skipping the layout transition as part of the renderpass. Unfortunately,
that also means it needs to be put into renderpass_params, as opposed to
renderpass_run_params (unlike #4777).
Closes #4777.
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:
1. It allows real synchronization of image access across multiple frames
when using multiple queues for parallelism.
2. It allows using events instead of pipeline barriers, which is a
finer-grained synchronization primitive that allows for more
efficient layout transitions over longer durations.
This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)
vo_gpu: vulkan: remove no-longer-true optimization
The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
This combines VkSemaphores and VkEvents into a common umbrella
abstraction which can resolve to either.
We aggressively try to prefer VkEvents over VkSemaphores whenever the
conditions are met (1. we can unsignal the semaphore, i.e. it comes from
the same frame; and 2. it comes from the same queue).
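Conceptually, the umbrella object is an either/or wrapper along these lines (a sketch; the actual vk_signal carries more bookkeeping):

    #include <vulkan/vulkan.h>

    struct vk_signal {
        VkSemaphore semaphore;   /* used when the wait crosses frames or queues */
        VkEvent event;           /* preferred when same frame and same queue */
        VkQueue signal_queue;    /* queue it was signalled on, for the check */
    };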
Instead of being submitted immediately, commands are appended into an
internal submission queue, and the actual submission is done once per
frame (at the same time as queue cycling). Again, the benefits are not
immediately obvious because nothing benefits from this yet, but it will
make more sense for an upcoming vk_signal mechanism.
This also cleans up the way the ra_vk submission interacts with the
synchronization/callbacks from the ra_vk_ctx. Although currently, the
way the dependency is signalled is a bit hacky: normally it would be
associated with the ra_tex itself and waited on in the appropriate stage
implicitly. But that code is just temporary, so I'm keeping it in there
for a better commit order.
Instead of associating a single VkSemaphore with every command buffer
and allowing the user to ad-hoc wait on it during submission, make the
raw semaphores-to-signal array work like the raw semaphores-to-wait-on
array. Doesn't really provide a clear benefit yet, but it's required for
upcoming modifications.
1. No more static arrays (deps / callbacks / queues / cmds)
2. Allows safely recording multiple commands at the same time
3. Uses resources optimally by never over-allocating commands
Libav has been broken due to the hwdec changes. This was always a
temporary situation (depended on pending patches to be merged), although
it took a bit longer. This also restores the travis config.
One code change is needed in vd_lavc.c, because it checks the AV_PIX_FMT
for videotoolbox (as opposed to the mpv format identifier), which is not
available in Libav. Add an ifdef; the affected code is for a deprecated
option anyway.
This hack was part of a solution to VSync judder in desktop OpenGL on
Windows. Rather than using blocking-SwapBuffers(), mpv could use
DwmFlush() to wait for the image to be presented by the compositor.
Since this would only work while the compositor was running, and the
compositor was silently disabled when OpenGL entered exclusive
fullscreen mode, mpv needed a way to detect exclusive fullscreen mode.
The code that is being removed could detect exclusive fullscreen mode by
checking the state of an undocumented mutex using undocumented native
API functions, but because of how fragile it was, it was always meant to
be removed when a better solution for accurate VSync in OpenGL was
found. Since then, mpv got the dxinterop backend, which uses desktop
OpenGL but has accurate VSync. It also got a native Direct3D 11 backend,
which is a viable alternative to OpenGL on Windows.
For people who are still using desktop OpenGL with WGL, there shouldn't
be much of a difference, since mpv can use other API functions to detect
exclusive fullscreen.
Refactored and split the `reinit_window_state` code into four
separate functions:
- `update_window_style` used to update window styles without
modifying the window rect.
- `fit_window_on_screen` used to adjust the window size when it is
larger than the screen size. Added a helper function `fit_rect` to
fit one rect on another without using any data from w32 struct.
- `update_fullscreen_state` used to calculate the new fullscreen
state and adjust the window rect accordingly.
- `update_window_state` used to display the window on screen with
new size, position and ontop state.
This commit fixes three issues:
- fixed #4753 by skipping `fit_window_on_screen` for a maximized
window, since maximized window should already fit on the screen.
It should be noted that this bug was only reproducible with
`--fit-border` option which is enabled by default. The cause of the
bug is that after calling the `add_window_borders` for a maximized
window, the rect in result is slightly larger than the screen rect,
which is okay, `SetWindowPos` will interpret it as a maximized state
later, so no auto-fitting to screen size is needed here.
- fixed #5215 by skipping `fit_window_on_screen` when leaving fullscreen.
On a multi-monitor system if the mpv window was stretched to cover
multiple monitors, its size was reset after switching back from
fullscreen to fit the size of the active monitor. Also, when changing
`--ontop` and `--border` options, now only the
`update_window_style` and `update_window_state` functions are used,
so `fit_window_on_screen` is not used for them too.
- fixed #2451 by moving the `ITaskbarList2_MarkFullscreenWindow`
below the `SetWindowPos`. If the taskbar is notified about fullscreen
state before the window is shown on screen, the taskbar button could
be missing until Alt-TAB is pressed, usually it was reproducible on
Windows 8.
Other changes:
- In `update_fullscreen_state` the `reset window bounds` debug
message now reports client area size and position, instead of window area
size and position. This is done for consistency with debug messages
in handling fullscreen state above in this function, since they also print
window bounds of the client area.
- Refactored `gui_thread_reconfig`. Added a new window flag `fit_on_screen`
to fit the window on screen even when leaving fullscreen. This is needed
for the case when the new video opened while the window is still in the
fullscreen state.
- Moved parent and fullscreen state checks out from the WM_MOVING to
`snap_to_screen_edges` function for consistency with other functions.
There's no point in keeping these checks out of the function body.
When window and screen size and position are stored in RECT, it's
much easier to modify them using WinAPI functions.
Added two macros to get width and height of the rect.
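The two macros presumably boil down to something like the following (the names are guesses, not necessarily the ones used in the w32 code):

    #include <windows.h>

    /* Width/height of a Win32 RECT (right/bottom are exclusive). */
    #define RECT_W(r) ((r).right - (r).left)
    #define RECT_H(r) ((r).bottom - (r).top)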
I've decided that MP_TRACE means “noisy spam per frame”, whereas
MP_DBG just means “more verbose debugging messages than MSGL_V”.
Basically, MSGL_DBG shouldn't create spam per frame like it currently
does, and MSGL_V should make sense to the end-user and provide mostly
additional informational output.
MP_DBG is basically what I want to make the new default for --log-file,
so the cut-off point for MP_DBG is whether we probably want to know it for
debugging purposes, while the user most likely doesn't care about it on the
terminal.
Also, the debug callbacks for libass and ffmpeg got bumped in their
verbosity levels slightly, because being external components they're a
bit less relevant to mpv debugging, and a bit too over-eager in what
they consider to be relevant information.
I exclusively used the "try it on my machine and remove messages from
MSGL_* until it does what I want it to" approach of refactoring, so
YMMV.
Annoying exception that makes no sense to keep. Normally, users or
client applications will either use --hwdec=auto, or not set the option
at all, which both leads to the expected result.
Full range YUV causes problems everywhere. For example it's usually the
wrong choice when using encoding mode, and libswscale sometimes messes
up when converting to full range too. (In this particular case, we
found that converting rgba->yuv420p16 full range actually seems to
output limited range.)
This actually restores a similar heuristic from the late vf_scale.c.
When autoprobing the hwdec interops (which now happens to all compiled
interops if hardware decoding is used), failure to load an interop
should not print an error in the normal case. So hide it.
(We could make the log level conditional on whether autoprobing is used,
but directly loading it without autoprobing is obscure, and most other
interops don't do this either.)
For METHOD_INTERNAL hwdecs (non-copy cases), make sure the VO interops
are always loaded, because those decoders will output hardware pixel
formats, which will need special support in vo_gpu. Otherwise,
initialization will fail, complaining that it can't convert the output
format to something the VO supports.
* Distinguish between the window being moved or not.
* Skip trying to snap if currently in full screen or an embedded
window.
* Exit snapped state if the size changed when the window was being
moved.
Check the expected width and height against up-to-date
window placement. If they do not match, we will consider snapping
to have happened on Windows' side.
Partially fixes display-sync under Wayland (though if you change virtual
desktops, you'll need to seek to re-enable display-sync).
As an advantage, rendering is completely disabled if you change desktops or
alt+tab so you lose no performance if you leave mpv running elsewhere as long
as it isn't visible.
This could also be ported to other VOs which support it.
We need to support hardware/drivers which do not support ARGB8888 in
their primary plane.
We also use p->primary_plane_format when creating the gbm surface, to
make sure it always matches (in actuality there should be little
difference).
Passing in an invalid DRM overlay id with the --drm-overlay option would
cause drmplane to be freed twice: once in the for-loop and once at the
error-handler label fail.
Solve by setting drmplane to NULL after freeing it.
Also the 'return false' statement after the error handler label should
probably be 'return NULL', given that drm_atomic_create_context returns a
pointer.
vo_x11 and vo_xv need this. According to the Linux manpage, all involved
functions are POSIX-2001 anyway. (I just assumed they were not, because
they're mostly System V UNIX legacy garbage.)
If the codec uses AV_CODEC_HW_CONFIG_METHOD_INTERNAL, and we're using
the -copy method, then don't request the native pix_fmt. It might not
have an AVFrame.hw_frames_ctx set, and we couldn't read back at all. On
top of that, most of those decoders probably don't provide read-back
when using such opaque formats anyway, while providing separate decoding
modes to decode to RAM.
Finally get rid of all the HWDEC_* things, and instead rely on the
libavutil equivalents. vdpau still uses a shitty hack, but fuck the
vdpau code.
Remove all the now unneeded remains. The vdpau preemption thing was not
used anymore; if someone cares this could probably be restored.
This code is for trying to avoid using an emulation layer when using
auto probing, so that we end up using the actual API the drivers
provide. It was destroyed in the recent refactor.
With the recent changes, mpv's internal mechanisms got synced to
libavcodec's once more. Some things are still needed for filters (until
the mechanism gets replaced), but there's no need to require other hwdec
methods to use these fields. So remove them where they are unnecessary.
Also fix some minor leaks in the dxva2 backends, and set the driver_name
field in the Apple ones. Untested on Apple crap.
Otherwise, if e.g. "nvdec" didn't work, but "nvdec-copy" did, it would
never try "vdpau", which is actually the next non-copy mode on the
autoprobe list. It's really expected that it selects "vdpau". Fix this by
sorting the -copy modes to the end of the final hwdec list.
But we still don't want preferred -copy modes like "nvdec-copy" to be
sorted after fragile non-preferred modes like "cuda", and --hwdec=auto
should prefer "nvdec-copy" over it, so make sure the copying mode does
not get precedence over preferred vs. non-preferred mode.
Also simplify the existing auto_pos sorting condition, and fix the
fallback sort order (although that doesn't matter too much).
Change it from explicit metadata about every hwaccel method to trying to
get it from libavcodec. As shown by add_all_hwdec_methods(), this is
quite a bumpy road, and a bit worse than expected.
This will probably cause a bunch of regressions. In particular I didn't
check all the strange decoder wrappers, which all cause some sort of
special cases each. You're volunteering for beta testing by using this
commit.
One interesting thing is that we completely get rid of mp_hwdec_ctx in
vd_lavc.c, and that HWDEC_* mostly goes away (some filters still use it,
and the VO hwdec interops still have a lot of code to set it up, so it's
not going away completely for now).
The libavcodec mediacodec support does not conform to the new hwaccel
APIs yet. It has been agreed upon that this glue code can be deleted
for now, and support for it will be restored at a later point.
Re-adding it would require that it supports the AVCodecContext.hw_device_ctx
API. The hw_device_ctx would then contain the surface ID.
vo_mediacodec_embed would actually perform the task of creating
vo.hwdec_devs and adding a mp_hwdec_ctx, whose av_device_ref is a
AVHWDeviceContext containing the android surface.
It makes more sense to have it in the general video directory (along
with vdpau.c and vaapi.c), since the decoder source files don't even
access it anymore.
Like with all hwaccels, there's little that is actually specific to
decoding (which has been moved away anyway), and what is left are
declarations (which will also go away soon).
Lots of shit code for nothing. We probably could just use libavutil's
code for all of this. But for now go with this, since it tends to
prevent stupid terminal messages during probing (libavutil has no
mechanism to selectively suppress errors specifically during probing).
Ignores the "emulated" API flag (for avoiding vaapi/vdpau wrappers), but
it doesn't matter that much for -copy anyway.
This leaked 2 unreffed AVFrame structs (roughly 1KB) per decoded frame.
Can I blame the FFmpeg API and the weird difference between freeing and
unreffing an AVFrame?
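For reference, the FFmpeg distinction in question (the wrapper function is only for illustration):

    #include <libavutil/frame.h>

    static void discard_frame(AVFrame **frame)
    {
        /* av_frame_unref() only drops the frame's buffer references and
         * leaves the (roughly 1KB) AVFrame struct allocated, which is what
         * leaked here. av_frame_free() unrefs *and* frees the struct, then
         * sets *frame to NULL. */
        av_frame_free(frame);
    }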
The idea is to get rid of vd_lavc_hwdec, so special functionality like
this has to go somewhere else. At this point, hwframes_refine is only
needed for d3d11, and it doesn't do much, so for now the new callback
has no context. It can be made more fancy if really needed.
The testing_only field is not referenced anymore with vaglx removed and
the previous commit dropping all uses.
The ra_hwdec_driver.api field became unused with the previous commit,
but all hwdec interop drivers still initialized it.
Since this touches highly OS-specific code, build regressions are
possible (plus the previous commit might break hw decoding at runtime).
At least hwdec_cuda.c still used the .api field, other than initializing
it.
Make the VO<->decoder interface capable of supporting multiple hwdec
APIs at once. The main gain is that this simplifies autoprobing a lot.
Before this change, it could happen that the VO loaded the "wrong" hwdec
API, and the decoder was stuck with the choice (breaking hw decoding).
With the change applied, the VO simply loads all available APIs, so
autoprobing trickery is left entirely to the decoder.
In the past, we were quite careful about not accidentally loading the
wrong interop drivers. This was in part to make sure autoprobing works,
but also because libva had this obnoxious bug of dumping garbage to
stderr when using the API. libva was fixed, so this is not a problem
anymore.
The --opengl-hwdec-interop option is changed in various ways (again...),
and renamed to --gpu-hwdec-interop. It does not have much use anymore,
other than debugging. It's notable that the order in the hwdec interop
array ra_hwdec_drivers[] still matters if multiple drivers support the
same image formats, so the option can explicitly force one, if that
should ever be necessary, or more likely, for debugging. One example are
the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both
support d3d11 input.
vo_gpu now always loads the interop lazily by default, but when it does,
it loads them all. vo_opengl_cb now always loads them when the GL
context handle is initialized. I don't expect that this causes any
problems.
It's now possible to do things like changing between vdpau and nvdec
decoding at runtime.
This is also preparation for cleaning up vd_lavc.c hwdec autoprobing.
It's another reason why hwdec_devices_request_all() does not take a
hwdec type anymore.
nvdec aka cuvid aka cuda should work much better than vdpau, and support
newer codecs (such as vp9), and more advanced surface formats (like 10
bit).
This requires moving the d3d hwaccels in the autoprobe order, since on
Windows, d3d decoding should be preferred over nvidia proprietary stuff.
Users of older drivers will need to force --hwdec=vdpau, since it could
happen that the vo_gpu cuda hwdec interop loads (so the vdpau interop is
not loaded), but the hwdec itself doesn't work.
I expect this does not break AMD (which still needs vdpau for vo_gpu
interop, until libva is fixed so it can fully support AMD).
This has stopped being useful a long time ago, and it's the only GPL
source file in the vo_gpu source directories. Recently it wasn't even
loaded at all, unless you forced loading it.
They were added to the "to deleted" list and never relicensed, because I
thought I'd delete them early. But it's possible that they'll stay in
mpv for a longer time, so relicense them. Still leaving them as
deprecated and scheduled for removal, so they can still be dropped once
there is a better way to deal with them, if they get annoying, or if a
better mechanism is found that makes them unnecessary.
All contributors agreed. There are some minor changes by people who did
not agree, but these are all not relevant or have been removed.
Almost all of them had their guts removed and replaced by libavfilter
long ago, but remove them anyway. They're pointless and have been
scheduled for deprecation.
Still leave vf_format (because we need it in some form) and vf_sub (not
sure).
This will break some builtin functionality: lavfi yadif defaults are
different, auto rotation and stereo3d downconversion are broken. These
might be fixed later.
We want to drop vf_scale, but we still need a way to auto convert
between imgfmts. In particular, vf.c will auto insert the "scale" filter
if the VO doesn't support a pixfmt.
To avoid chaos, create a new vf_convert.c filter, based on vf_scale.c,
but without the unrelicensed code parts. In particular, this filter does
not do scaling and has no options. It merely converts from one imgfmt to
another, if needed.
The D3D11_CREATE_DEVICE_BGRA_SUPPORT flag doesn't enable support for
BGRA textures. BGRA textures will be supported whether or not the flag
is passed. The flag just fails device creation if they are not supported
as an API convenience for programs that need BGRA textures, such as
programs that use D2D or D3D9 interop. We can handle devices without
BGRA support fine, so don't bother with the flag.
For consistency with already implemented shcore.dll
function loading in w32->api:
Moved loading of imm32.dll to w32_api_load, and declared the
pImmDisableIME function pointer in the w32->api struct.
Removed unloading of imm32.dll.
Seems like the last refactor to this code broke playing flipped images,
at least with --opengl-pbo --gpu-api=opengl.
Flipping is rather a shitmess. The main problem is that OpenGL does not
support flipped uploading. The original vo_gl implementation considered
it important to handle the flipped case efficiently, so instead of
uploading the image line by line backwards, it uploaded it flipped, and
then flipped it in the renderer (basically the upload path ignored the
flipping). The ra code and backends probably have an insane and
inconsistent mix of semantics, so fix this by never passing it flipped
images in the first place.
In the future, the backends should probably support flipped images
directly.
Fixes #5097.
Like the manual says, this is technically undefined behaviour. See:
https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx
In particular, MSDN says texture arrays created with the BIND_DECODER
flag cannot be used with CreateShaderResourceView, which means they
can't be sampled through SRVs like normal Direct3D textures. However,
some programs (Google Chrome included) do this anyway for performance
and power-usage reasons, and it appears to work with most drivers.
Older AMD drivers had a "bug" with zero-copy decoding, but this appears
to have been fixed. See #3255, #3464 and http://crbug.com/623029.
The shader cache in ra_d3d11 caches the result of shaderc, crossc and
the D3DCompiler DLL, so it should be invalidated when any of those
components are updated. This should make the cache more reliable, which
makes it safer to enable gpu-shader-cache-dir. Shader compilation is
slow with D3D11, so gpu-shader-cache-dir is highly necessary.
Some shaders take a _long_ time to compile with the Direct3D compiler.
The ANGLE backend had this problem too, to a certain extent. Logging
should help identify which shaders cause long stalls and could also help
with benchmarking ways of reducing compile times.
ra_d3d11 uses the SPIR-V compiler to translate GLSL to SPIR-V, which is
then translated to HLSL. This means it always exposes the same GLSL
version that the SPIR-V compiler supports (4.50 for shaderc/glslang.)
Despite claiming to support GLSL 4.50, some features that are tied to
the GLSL version in OpenGL are not supported by ra_d3d11 when targeting
legacy Direct3D feature levels.
This includes two features that mpv relies on:
- Reading from gl_FragCoord in the fragment shader (requires FL 10_0)
- textureGather from any texture component (requires FL 11_0)
These features have been exposed as new RA caps.
This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL
generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.
What works:
- All of mpv's internal shaders should work, including compute shaders.
- Some external shaders have been tested and work, including RAVU and
adaptive-sharpen.
- Non-dumb mode works, even on very old hardware. Most features work at
feature level 9_3 and all features work at feature level 10_0. Some
features also work at feature level 9_1 and 9_2, but without high-bit-
depth FBOs, it's not very useful. (Hardware this old is probably not
fast enough for advanced features anyway.)
Note: This is more compatible than ANGLE, which requires 9_3 to work
at all (GLES 2.0), and 10_1 for non-dumb-mode (GLES 3.0).
- Hardware decoding with D3D11VA, including decoding of 10-bit formats
without truncation to 8-bit.
What doesn't work / can be improved:
- PBO upload and direct rendering does not work yet. Direct rendering
requires persistent-mapped PBOs because the decoder needs to be able
to read data from images that have already been decoded and uploaded.
Unfortunately, it seems like persistent-mapped PBOs are fundamentally
incompatible with D3D11, which requires all resources to use driver-
managed memory and requires memory to be unmapped (and hence pointers
to be invalidated) when a resource is used in a draw or copy
operation.
However it might be possible to use D3D11's limited multithreading
capabilities to emulate some features of PBOs, like asynchronous
texture uploading.
- The blit() and clear() operations don't have equivalents in the D3D11
API that handle all cases, so in most cases, they have to be emulated
with a shader. This is currently done inside ra_d3d11, but ideally it
would be done in generic code, so it can take advantage of mpv's
shader generation utilities.
- SPIRV-Cross is used through a NIH C-compatible wrapper library, since
it does not expose a C interface itself.
The library is available here: https://github.com/rossy/crossc
- The D3D11 context could be made to support more modern DXGI features
in future. For example, it should be possible to add support for
high-bit-depth and HDR output with DXGI 1.5/1.6.
Backported from @haasn's change to libplacebo, except in the current RA,
there's nothing to indicate an ra_format can be bound as a storage
image, so there's no way to force all of these formats to have a
glsl_format. Instead, the layout qualifier will be removed if
glsl_format is NULL.
This is needed for the upcoming ra_d3d11 backend. In Direct3D 11, while
loading float values from unorm images often works as expected, it's
technically undefined behaviour, and in Windows 10, it will cause the
debug layer to spam the log with error messages. Also, apparently in
GLSL, the format name must match the image's format exactly (but in
Direct3D, it just has to have the same component type.)
Backported from @haasn's change to libplacebo. More flexible than the
previous "shared || non-shared" distinction. The extra flexibility is
needed for Direct3D 11, but it also doesn't hurt code-wise.
For some reason vo_lavc's draw_image can buffer the frame and encode it
only later. Also, there is logic for rendering the OSD (i.e. subtitles)
only when needed.
In theory this can lead to subtitles being pruned before it tries to
render them (as the subtitle logic doesn't know that the VO still needs
them later), although this probably never happens in reality.
The worse issue, that actually happened, is that if the last frame gets
buffered, it attempts to render subtitles in the uninit callback. At
this point, the subtitle decoder is already torn down and all subtitles
removed, thus it will draw nothing. This didn't always happen. I'm not
sure why - potentially in the working cases, the frame wasn't buffered.
Since this logic doesn't have much worth, except a minor performance
advantage if frames with subtitles are dropped, just remove it.
Hopefully fixes #4689.
Repeating frames (for display-sync) is not supposed to render the entire
frame again. When using hardware decoding, it unfortunately did: the
renderer uses the frame ID to check whether the frame data changed, and
unmapping the hwdec frame clears it.
Essentially reverts commit 761eeacf54. Back then I probably
thought it would be a good idea to release the hwdec image quickly in
order to return it to the decoder, but they're referenced anyway.
This should increase the performance and reduce GPU work.
Normally such code is disabled by have_mglsl==false in
check_gl_features(), but apparently not this one.
Just fix it. Seems also more readable.
Fixes #5069.
Apparently this is required, but it doesn't check for it. To be fair,
this was tested by creating a compatibility context and pretending it's
GL 2.1. GL_ARB_shader_storage_buffer_object actually requires GL 4.0 or
up, but GL_ARB_uniform_buffer_object requires only GL 2.0.
vo_gpu.c will call gl_video_icc_auto_enabled() to check whether it
should retrieve the ICC profile. But the value returned by this function
will be outdated, because gl_video_update_options() is not called yet.
Change the order of function calls so that this is done after updating
the options.
(This is fairly chaotic, but I guess this code will be refactored a
dozen of times anyway in the future.)
All this code used to be required by the old variants of the libavcodec
hw decoding APIs. Almost all of that is gone, although the mediacodec
API unfortunately still pulls in some old stuff (but not all of it).
(mediacodec build/functionality is untested, but should work.)
All of this was dead code and completely unused.
get_buffer2_hwdec() is the biggest chunk. One unfortunate thing about it
is that, while it was active, it could perform a software fallback much
faster, because it didn't have to wait until a full frame is decoded (it
actually decoded a full frame, but the current code has to decode many
more frames due to the codec delay, because the current code waits until
the API returns a decoded frame.) We should probably restore the latter,
although since it's an optional optimization, and the current behavior
doesn't change with the removal of this code, don't actually do anything
about it.
This is where it should be. It only wasn't because of an old libavcodec
bug, that returned the side data only on every IDR. This required some
sort of caching, which is now dropped. (mp_image wouldn't have been able
to do this kind of caching, because this code is stateless.) We don't
support these old libavcodec versions anymore, which is why this is not
needed anymore.
Also move initialization of rotation/stereo stuff to dec_video.c.
This simply didn't work. Unlike cuda-copy, this is a true hwaccel, and
obviously we need to provide it a device.
Implement this in a relatively generic way, which can probably reused
directly by videotoolbox (not doing this yet because it would require
testing on OSX).
Like with cuda-copy, --cuda-decode-device is ignored. We might be able
to provide a more general way to select devices at some later point.
This is just a dumb consequence of HWDEC_ types somehow being part of
both decoder and VO. Obviously, the VO should only care about supporting
specific hardware surface types or providing specific device types, but
until they are separated, stupid unintuitive mismatches will occur.
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support yet, but this could get
fixed soon.)
This removes the need for codec- and API-specific knowledge in the
libavcodec hardware acceleration API user. For mpv, this removes the
need for vd_lavc_hwdec.pixfmt_map and a few other things. (For now, we
still keep the "old" parts for the sake of supporting older Libav, and
FFgarbage.)
params->rc was ignored in the calculation for the buffer size. I fucking
hate this stupid ra_tex_upload signature where *rc is randomly relevant
or not.
Coverity complains about this, but it's probably a false positive.
Anyway, rewrite it in a slightly more readable way. Now it's more
obvious that it is correct.
Should speed up seeks.
(Unfortunately it's useless for backstepping. Backstepping is like
precise seeking, except we're unable to drop frames, as we can't know
the previous frame if we drop it.)
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
The new_segment field was used to notify the decoder data flow handler of
timeline boundaries, which are used for ordered chapters etc. (anything
that sets demuxer_desc.load_timeline). This broke seeking with the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME format
in ffmpeg, which allows decoders to provide an AVDRMFrameDescriptor
struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on RockChip ROCK64 device.
Since we divide by it in a couple of places and compositors can be crazy,
it's better to be safe than sorry.
Also checks cursor spawn during init (pointless since it does it again on
cursor entry, but it's more correct).
It seems the cursor hadn't had its position properly adjusted when scaled.
Hence, bring back correct buffer scaling to make the cursor look fine.
Also the cursor surface now gets created sooner so that's better.
Regression since ec6e8a31e0. Removing the explicit else case meant that the
conversion to premultiplied alpha was always applied in the else branch.
We want to scale with multiplied alpha, but we don't want to multiply
with alpha again on top of it.
Fixes #4983, hopefully.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
With video paused, changing the brightness controls (or similar) would
sometimes not rerender the video frame. So the OSD would redraw, but the
video wouldn't change. This is caused by output caching, and a redraw
request is free to return the cached frame. Change it such to invalidate
the cached frame if any of the options or the equalizer change.
In theory, gl_video_reset_surfaces() could be called if the equalizer
changes - this would apparently force interpolation to redraw all
frames. But this looks kind of crappy when changing the equalizer during
playback. It'll "eventually" use the correct settings anyway, and when
paused interpolation is off.
This was phased out, and was used only by vdpau by now. Drop the
mechanism and the vdpau special code, which means screenshots won't
include the vf_vdpaupp processing anymore. (I don't care enough about
vdpau, it's on its way out.)
The mechanism introduced in b135af6842 assumed AVHWFramesContext would
be enough. Apparently it's not - the intended use with Rockchip (not
Rokchip btw.) requires accessing actual frame data in order to access
the AVDRMFrameDescriptor struct.
Just pass the entire mp_image to the new function. This is more
flexible, although it slightly worries me that it will be less reusable
for things which require setting up mp_image_params before any real
frames are processed (such as filters).
The same should happen with any other side data that matters to mpv,
otherwise filters will drop it.
(No, don't try to argue that mpv should use AVFrame. That won't work.)
ffmpeg_garbage() is copy&paste from frame_new_side_data() in FFmpeg
(roughly feed201849b8f91), because it's not public API. The name
reflects my opinion about FFmpeg's API.
In mp_image_to_av_frame(), change the too-fragile
*new_ref = (struct mp_image){0};
into explicitly zeroing out the fields that are "transferred" to the
created AVFrame.
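Schematically, the change amounts to the following (the field names are invented for illustration, not the actual mp_image fields):

    struct image {
        void *planes[4];        /* data buffers, ownership moves to the AVFrame */
        void *icc_profile;      /* likewise */
        int w, h;               /* metadata that must NOT be wiped */
    };

    static void mark_transferred(struct image *img)
    {
        /* Before: *img = (struct image){0};  -- silently clears everything,
         * including fields that were never transferred. */
        for (int n = 0; n < 4; n++)
            img->planes[n] = NULL;
        img->icc_profile = NULL;
        /* w, h and other metadata are intentionally left alone. */
    }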
Merge mp_image_copy_fields_to_av_frame() into mp_image_from_av_frame(),
same for the other direction.
There isn't any good reason to keep them separate, and the refcounting
handling makes it only more awkward.
It seems this will be useful for Rokchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to a
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to a RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable
doesn't make too much sense. If we wanted to save code/effort, we could
just remove the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
If the chroma location is missing, vo_gpu will use centered chroma.
Select a better chroma location by default: normally, it will always be
MPEG video chroma location. If full levels are used, use JPEG chroma
location, because that sort of sounds like it could make sense as it
might coincide with JPEG being decoded.
See e.g. #4804.
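Roughly, the default described above boils down to this (a sketch; the real
logic in mp_image_params_guess_csp() may be organized differently):

    // No chroma location signalled by the file: derive one from the levels.
    if (params->chroma_location == MP_CHROMA_AUTO) {
        params->chroma_location = params->color.levels == MP_CSP_LEVELS_PC
                                  ? MP_CHROMA_CENTER    // "JPEG" location
                                  : MP_CHROMA_LEFT;     // "MPEG" location
    }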
Every compositor (including toy compositors) has had support for wl_output v2
since forever, so there's little point in supporting degraded output for
5-year-old releases (especially considering we require zxdg6, which is far
more recent).
This adds symbol information to the generated SPIR-V, which shows up in
the SPIR-V assembly dump. It's also useful for potential RA backends
that use SPIRV-Cross, since the symbol information is used in the
generated shader source.
This should actually cover all of them, if you take into account that
some unchanged GPL source files include header files with such checks.
Also this was done already for the libaf derived code.
This is only for "safety" and to avoid misunderstandings.
It turns out that compositors which do scaling also scale the cursor,
so every single surface needs to get scaled too.
Also, 32 corresponds to the default size for both GTK+ and KDE.
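For illustration, that presumably means the cursor surface gets its buffer
scale set like the main surface does (the wl_* calls are real wayland-client
API; the field names here are just placeholders):

    // Tell the compositor the cursor buffer already matches the output
    // scale, so it isn't scaled (and blurred) a second time.
    wl_surface_set_buffer_scale(wl->cursor_surface, wl->scaling);
    wl_surface_commit(wl->cursor_surface);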
This new interface in libva2 offers a cleaner way to export surfaces
which can then be imported to EGL. In particular, this works with
the Mesa driver, so we can have proper playback without a pointless
download and upload on AMD cards.
This change does nothing with libva1, and will fall back to the
libva1 interface (vaDeriveImage() + vaAcquireBufferHandle()) if
vaExportSurfaceHandle() is not present.
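The export path looks roughly like this (a sketch of the libva2 API usage;
mpv's actual hwdec code also handles errors and performs the EGL import):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    VADRMPRIMESurfaceDescriptor desc;
    VAStatus status = vaExportSurfaceHandle(display, surface_id,
            VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
            VA_EXPORT_SURFACE_READ_ONLY | VA_EXPORT_SURFACE_SEPARATE_LAYERS,
            &desc);
    // On success, desc.objects[].fd and desc.layers[] describe dma-buf
    // planes that can be imported via EGL_EXT_image_dma_buf_import.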
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to
the render callback in java. Performance also suffers with opengl-cb,
due to the overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
MediaCodec uses a fixed number of output buffers to hold frames, and
expects that output buffers will be released as soon as possible. Once
rendered, the underlying frame is automatically released and cannot be
reused or rerendered.
The new VO_CAP_NOREDRAW forces mpv to release frames immediately after
they are rendered or dropped, to ensure that the MediaCodec decoder does not
run out of buffers and stall out.
This particular format is not marked as AV_PIX_FMT_FLAG_RGB in FFmpeg's
pixdesc table, so mpv assumed it's a YUV format.
This is a regression, since the old code in mp_imgfmt_get_desc() also
treated this format specially to avoid this problem. Another format
which was special-cased in the old code was AV_PIX_FMT_MONOBLACK, so
make an exception for it as well.
Maybe this problem could be avoided by mp_image_params_guess_csp() not
forcing certain colorimetric parameters by the implied colorspace, but
certainly that would cause other problems. At least there are mistagged
files out there that would break. (Do we actually care?)
Fixes#4965.
This commit:
- Implements output tracking (e.g. monitor plug/unplug)
- Creates the surface during registry (no other dependencies)
- Queues the callback immediately after surface creation
- Cleaner and better event handling (functions return directly)
- Better reconfigure handling (resizes reduced to 1 during init)
- Don't unnecessarily resize (if dimensions match)
Apart from that, this fixes 2 potential memory leaks (mime type and window
title) and 2 string ownership issues (output name and make need to be
dup'd), fixes some style issues (switches were indented), and finally
adds messages when disabling/enabling idle inhibition.
The callback setter function was unnecessary, so it was removed in
preparation for the commit which will use the frame event callback.
The VO code resets each flag individually, and it doesn't do it for this one.
Also make the prints use the struct names rather than the hardcoded ones;
this was forgotten in the last wayland_common commit.
iive agreed to relicense things that are still in mpv to LGPLv2.1. So
change the licenses of the affected files, and rename the configure
switch for LGPL mode to --enable-preliminary-lgpl2.
(The "preliminary" part will probably be removed from the configure
switch soon as well.)
Also player/main.c hasn't had GPL parts since a few commits ago.
The wayland code was written more than 4 years ago when wayland wasn't
even at version 1.0. This commit rewrites everything in a more modern way,
switches to using the new xdg v6 shell interface which solves a lot of bugs
and makes mpv tiling-friendly, adds support for drag and drop, adds support
for touchscreens, adds support for KDE's server decorations protocol,
and finally adds support for the new idle-inhibitor protocol.
It does not yet use the frame callback as a main rendering loop driver,
this will happen with a later commit.
The existing code in check_ext() avoided false positives due to
sub-strings, but allowed false negatives. Fix this with slightly better
search code, and make it available as a function to other source files.
(There are some cases of strstr() still around.)
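The idea is token-exact matching in the space-separated extension list,
roughly like this (a sketch, not the literal mpv helper):

    #include <stdbool.h>
    #include <string.h>

    // True if 'name' appears as a full, space-delimited token in 'exts',
    // so that e.g. "GL_EXT_foo" does not match "GL_EXT_foobar".
    static bool has_extension(const char *exts, const char *name)
    {
        size_t len = strlen(name);
        const char *s = exts;
        while ((s = strstr(s, name))) {
            bool start_ok = s == exts || s[-1] == ' ';
            bool end_ok = s[len] == '\0' || s[len] == ' ';
            if (start_ok && end_ok)
                return true;
            s += len;
        }
        return false;
    }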
Unless FBOs are unsupported, this works. In particular, it's required to
get ICC profiles working in voluntary dumb mode. So instead of
blanket-disabling it, only disable it in the !have_fbo case.
We allowed any input format that was generally supported by libva, but
this is probably nonsense, as the actual surface format was always fixed
to nv12. We would have to check whether libva can upload a given pixel
format to a nv12 surface. Or we would have to use a separate frame pool
for input surfaces with the exact sw_format - but then we'd also need to
check whether the vaapi VideoProc supports the surface type.
Hardcode nv12 and yuv420p as input formats, which we know can be
uploaded to nv12 surfaces. In theory we could get a list of supported
upload formats from libavutil, but that also requires allocating a dummy
hw frames context just for the query.
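A sketch of what the hardcoded list amounts to (mpv imgfmt constants; the
actual filter code may express this differently):

    // Input formats we know can be uploaded into nv12 surfaces.
    static const int upload_formats[] = {IMGFMT_NV12, IMGFMT_420P, 0};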
Add a comment to the upload code why we can allocate an output surface
for input.
In the long run, we'll probably want to use libavfilter's vaapi
deinterlacer, but for now this would break at least user options.
The current check_va_status() function could probably be argued to be
derived from the original VAAPI patches' check_status() function, thus
GPL-only. While I have my doubts that it applies to an idiom on this
level, it's better to replace it. Similar idea, different expression
equals no copyright association.
An earlier commit message promised this, but it was forgotten.
Originally mpv vaapi support was based on the MPlayer-vaapi patches.
These were never merged in upstream MPlayer. The license headers
indicated they were GPL-only. Although the actual author agreed to
relicensing, the company employing him to write this code did not, so
the original code is unusable to us.
Fortunately, vaapi support was refactored and rewritten several times,
meaning little code is actually left. The previous commits removed or
moved that to GPL-only code. Namely, vo_vaapi.c remains GPL-only. The
other code went away or became unnecessary mainly because libavcodec
itself gained the ability to manage the hw decoder, and libavutil
provides code to manage vaapi surfaces. We also changed to mainly using
EGL interop, making any of the old rendering code unnecessary.
hwdec_vaglx.c is still GPL. It's possibly relicensable, because much of
it was changed, but I'm not too sure and further investigation would be
required. Also, this has been disabled by default for a while now, so
bothering with this is a waste of time. This commit simply disables it
at compile time as well in LGPL mode.
Done for license reasons. vo_vaapi.c is turned into some kind of
dumpster fire, and we'll remove it as soon as I'm mentally ready for
unkind users to complain about removal of this old POS.
This is for relicensing. Some of this code is loosely based on
vo_vaapi.c from the original MPlayer-vaapi patches. Most of the code has
changed, and only the initialization code and check_status() look
remotely similar. The initialization code is changed to be like Libav's
(hwcontext_vaapi.c). check_va_status() is just a C idiom, but to play it
safe, we'll either drop it from LGPL code (or recreate it).
vaapi.c still contains plenty of code from the original patches, but the
next commits will move them out of the LGPL code paths.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better)
This was always set to the length of the VAO, but it should have been
set to the number of vertex attribs actually in use for this frame. No
idea how that managed to survive the test framework on nvidia/linux, but
ANGLE caught it.
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
FlagBits is just the name of the enum. The actual data type representing
a combination of these flags follows the *Flags convention. (The
relevant difference is that the latter is defined to be uint32_t instead
of left implicit)
For consistency, use *Flags everywhere instead of randomly switching
between *Flags and *FlagBits.
Also fix a wrong type name on `stageFlags`, pointed out by @atomnuker.
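To illustrate the convention with a standard Vulkan type (not mpv-specific
code):

    // VkShaderStageFlagBits names the individual bits; VkShaderStageFlags
    // is the uint32_t typedef meant to hold a combination of them.
    VkShaderStageFlags stages =
            VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT;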
Using renderpass layout transitions is more optimal and doesn't require
a redundant pipeline barrier.
Since our render passes are static and don't change throughout the
lifetime of a ra_renderpass, we unfortunately don't have much
flexibility here - so just hard-code SHADER_READ_ONLY_OPTIMAL as the
output layout, as this will be the most common case.
We also can't short-circuit the transition when we need to preserve the
framebuffer contents, since that depends on the current layout; so we
still use an explicit tex_barrier in this case. (Most optimal for this
scenario would be an input attachment anyway)
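As a sketch of the idea (simplified; not the exact ra_vk code), the
attachment description bakes the transition into the render pass:

    VkAttachmentDescription att = {
        .format        = VK_FORMAT_R16G16B16A16_UNORM,    // example format
        .samples       = VK_SAMPLE_COUNT_1_BIT,
        .loadOp        = VK_ATTACHMENT_LOAD_OP_DONT_CARE, // LOAD_OP_LOAD if preserving
        .storeOp       = VK_ATTACHMENT_STORE_OP_STORE,
        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        // Hard-coded output layout, as described above; sampling the result
        // in a later pass is the most common case.
        .finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    };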