Check the maximum size of video surfaces, and refuse initialization if
the video is too large for them.
Maybe we could do something more sophisticated, like inserting a
software scaler. On the other hand, the benefit of that would be very
questionable, as software scaling at such sizes is guaranteed to be too
slow.
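A minimal sketch of the intended check, assuming a hypothetical
query_max_surface_size() helper standing in for whatever the backend
actually reports (MP_ERR is mpv's existing logging macro):

    // Refuse initialization if the video exceeds the surface limits.
    uint32_t max_w = 0, max_h = 0;
    if (!query_max_surface_size(ctx, &max_w, &max_h)) // hypothetical helper
        return -1;
    if ((uint32_t)params->w > max_w || (uint32_t)params->h > max_h) {
        MP_ERR(ctx, "Video size %dx%d exceeds max. surface size %ux%u.\n",
               params->w, params->h, max_w, max_h);
        return -1;
    }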
When starting in paused mode, no audio is written to the device at all,
because writing audio implicitly unpauses the AO. If the file is very
small, and all audio fits within the AO buffer, this accidentally
triggered the EOF condition. (In unpaused mode, it would write all
audio, end playback, and then wait until the AO has played everything.)
This is better: we now call swr_get_delay() with the output samplerate,
instead of calling it with the input samplerate and then multiplying the
result by the samplerate ratio and rounding up.
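In libswresample terms, the difference is roughly this (sketch; ctx,
in_rate and out_rate stand in for the real variables):

    // before: delay in input samples, rescaled manually and rounded up
    int64_t delay = (swr_get_delay(ctx, in_rate) * out_rate + in_rate - 1)
                    / in_rate;

    // after: libswresample returns the delay in output samples directly
    int64_t delay = swr_get_delay(ctx, out_rate);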
Apparently, just running ./waf and hoping that it will be run with a
Python interpreter doesn't necessarily work. The workaround is pretty
simple and reliable.
The "osd" command cycles between 4 states (OSD level 0-3), which is
probably confusing and inconvenient. OSD levels 0 and 2 are rarely
needed. I would claim there is normally not much of a need to completely
disable OSD by setting level 0 during playback. Level 2 is just slightly
less information than level 3, and I'm not sure why it exists at all.
Change it so that it toggles between level 3 and 1. Note that this
ignores the default OSD level. If the default is 3, the first use of
this key will set it to 3 again. Just assume 1 is the default. If
someone complains, this could be improved.
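The toggle logic then amounts to something like this (sketch only;
set_osd_level() is a hypothetical stand-in, and the static state is
exactly what makes the command ignore the configured default):

    // Assume level 1 as the starting point: the first use switches to 3,
    // further uses alternate between 3 and 1.
    static int next_level = 3;
    set_osd_level(mpctx, next_level);
    next_level = (next_level == 3) ? 1 : 3;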
There's a short time during loading where external commands can add
external streams even before the main file is loaded (like during ytdl
hook execution). The track list is printed every time an external track
is added via commands. This was quite awkward when ytdl was adding
multiple streams, so don't print it at this stage. The track list is
printed at the end of the loading process anyway.
A listener on kAudioDevicePropertyDeviceHasChanged does not receive any
property change notifications when the device dies. This makes no
sense, but presumably, in CoreAudio logic, a dead/removed device can't
send any notifications.
This caused the player to essentially pause playback if the audio
device was removed during playback.
Fix by listening to the kAudioHardwarePropertyDevices property too,
which will actually be sent in this specific case. Then, if
querying the already dead device fails, we know we have to reload.
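A sketch of the listener side, using the real CoreAudio calls
(get_device() is a hypothetical accessor for the AudioObjectID in use;
ao_request_reload() is mpv's existing helper):

    static OSStatus hotplug_cb(AudioObjectID obj, UInt32 n,
                               const AudioObjectPropertyAddress addrs[],
                               void *ud)
    {
        struct ao *ao = ud;
        AudioObjectPropertyAddress p = {
            kAudioDevicePropertyDeviceIsAlive,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster,
        };
        UInt32 alive = 1, size = sizeof(alive);
        OSStatus err = AudioObjectGetPropertyData(get_device(ao), &p,
                                                  0, NULL, &size, &alive);
        // Querying a dead device fails (or reports dead): reload the AO.
        if (err != noErr || !alive)
            ao_request_reload(ao);
        return noErr;
    }

    // Installed against the system object, so it fires on device removal:
    //   AudioObjectPropertyAddress addr = { kAudioHardwarePropertyDevices,
    //       kAudioObjectPropertyScopeGlobal,
    //       kAudioObjectPropertyElementMaster };
    //   AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
    //                                  hotplug_cb, ao);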
In short, instead of letting the coreaudio property listener set atomic
flags (which are then polled), make the property listeners actually
active.
The format change listener used during audio output now simply calls
ao_request_reload() on its own. All code involved is thread-safe, so
there's no need to do it from within the audio callback (which relied on
the assumption that the callback never runs concurrently with itself).
The listener installed temporarily during ca_change_format() is changed
to post a semaphore. Get rid of the weird retry logic and replace it
with a flat loop + timeout. It appears the maximum wait time could be
2500ms; reduce the total timeout to 500ms instead.
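Sketched with libdispatch primitives (the actual semaphore type may
differ; format_is_applied() is a hypothetical re-check helper):

    // Temporary listener body: dispatch_semaphore_signal(sem);
    dispatch_semaphore_t sem = dispatch_semaphore_create(0);
    dispatch_time_t deadline =
        dispatch_time(DISPATCH_TIME_NOW, 500 * NSEC_PER_MSEC);
    bool ok = false;
    while (!ok) {
        // One flat wait with an absolute 500ms deadline; nonzero = timeout.
        if (dispatch_semaphore_wait(sem, deadline))
            break;
        ok = format_is_applied(ao);  // not applied yet: wait again
    }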
This manually retrieved the remaining audio from the resampler. It
subtly missed a conversion, which could lead to an unsubtle crash.
This could happen if reorder_planes() was supposed to insert NA
channels, and the resampler format differed from the actual output
format.
Simplify it by reusing the normal drain path. One oddness is that
the filter will add an output frame outside of normal filtering,
but that should be fine.
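For reference, in libswresample terms the drain that the normal path
performs is just a conversion call with no input, which already yields
the configured output format (sketch):

    // Flush whatever the resampler still buffers; the output comes out
    // in the output format, so no separate conversion step can be missed.
    int n = swr_convert(ctx, out_planes, max_out_samples, NULL, 0);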
This was missing for the extended deinterlacers.
Unfortunately, these deinterlacers still do not work. The provided future
frame (which is all the deinterlacers want) seems to be correct, though.
One minor behavioral change is that this always keeps the previous frame
for PTS computations. This could be avoided (in order to keep exactly
the same behavior as before), but it seems more elegant and should not
do any harm. (Also, if we really cared about reducing hw frame refs,
a more worthy goal is producing the field output incrementally.)
The mapped data (pointed to by the param variable) is not needed
earlier, so the call can be moved down. This also prevents the buffer
from remaining mapped forever if the other vaMapBuffer() call above
fails (the cleanup code forgets to unmap the buffer; this commit makes
that unnecessary).
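Schematically (sketch; the buffer names are made up):

    // Map 'param' only right before it is needed; if the earlier
    // vaMapBuffer() fails, 'param' was never mapped, so the error path
    // has nothing extra to unmap.
    if (vaMapBuffer(display, other_buf, &other_data) != VA_STATUS_SUCCESS)
        goto cleanup;
    /* ... work that only needs other_data ... */
    if (vaMapBuffer(display, param_buf, (void **)&param) != VA_STATUS_SUCCESS)
        goto cleanup;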
This used a do-while loop, which runs only once, as a replacement for a
cleanup goto. While this is OK, a direct goto is easier to follow and
closer to idiomatic C. But mainly, remove it so that the indentation
can be reduced.
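The pattern in question (setup()/run()/cleanup() are placeholders):

    // before: single-iteration do-while standing in for cleanup
    do {
        if (setup() < 0)
            break;
        run();              // note the extra indentation level
    } while (0);
    cleanup();

    // after: a plain goto, flatter and easier to follow
    if (setup() < 0)
        goto done;
    run();
    done:
    cleanup();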
Reconfiguring with the same video size should never cause the window to
resize back to the video size (if the user changed its size). This was
broken and it resized anyway.
We do not fill them, so we would pass random IDs to the driver. The code
was originally written to handle bob deinterlacing only, so I guess it
always passed 0 originally anyway, despite having code for allocating
the reference surface lists.
Also, move down the vaUnmapBuffer() call. This call actually "unmaps"
the param pointer, so accessing it after the unmap call would be
undefined behavior. The "example" in <va/vavpp.h> does this too, but
it's most likely an error.
(Additionally, not even bob deinterlacing worked correctly in my test,
sigh.)
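Put together, the safe ordering looks roughly like this (sketch using
the real VAProcPipelineParameterBuffer fields; error handling omitted):

    VAProcPipelineParameterBuffer *param = NULL;
    vaMapBuffer(display, buffer, (void **)&param);
    param->surface = in_surface;
    // We keep no reference surface lists, so don't pass stack garbage:
    param->forward_references = NULL;
    param->num_forward_references = 0;
    param->backward_references = NULL;
    param->num_backward_references = 0;
    /* ... set the remaining fields ... */
    vaUnmapBuffer(display, buffer);  // only after the last access to *param
    vaRenderPicture(display, context, &buffer, 1);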
Normally, the cache keeps 50% of the buffer for seeking backwards. Until
now, the cache just used the full buffer size at the beginning of a
file, because the 50% normally reserved for the backbuffer are unused.
This caused a problem: when streaming from http, the player would first
read about 150MB (default cache size), then stop until 75MB of the cache
has been played. (Until the 75MB position, the cache is fully used, so
nothing new can be read. After that, part of the backbuffer starts
getting unreserved, and can be used for readahead.) This long read pause
can cause the server to terminate the connection. Reconnecting may be
possible, but if youtube-dl is used, the media URL may have become
invalid.
Fix this by limiting readahead to 50% even if unnecessary. The only
exception is when the whole file would fit in the cache. In this case,
it won't matter if we can't reconnect, because the cache covers
everything anyway, and hopefully the cache will stay valid.
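The resulting limit, as a sketch (the field names are hypothetical):

    // Reserve half the buffer for seeking backwards, unless the whole
    // file fits, in which case readahead may use the entire buffer.
    int64_t max_readahead = s->buffer_size / 2;
    if (s->file_size >= 0 && s->file_size <= s->buffer_size)
        max_readahead = s->buffer_size;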
Likely fixes #2000.
This must have been some nonsense in the original vaapi mplayer patch.
While I still have no good idea what this "direct mapping" business is
about, it appears to be pretty much pointless. Nothing can hold
additional "real" surface references (due to how the API and mpv/lavc
refcounting work), so removing the additional surfaces won't break
anything. It could still be that this was meant to achieve additional
buffering (i.e., not reusing surfaces as soon), but we buffer some
additional
data anyway. Plus, the original intention of the vaapi mplayer code was
probably increasing surface count just by 1 or 2, not actually doubling
it, and/or it was a "trick" to get to the maximum count of 21 when h264
is in use.
gstreamer-vaapi uses "ref_frames + SCRATCH_SURFACES_COUNT" here, with
SCRATCH_SURFACES_COUNT defined to 4. It doesn't appear to check the
overlay attributes at all in the decoder.
In any case, remove this nonsense.
It polluted the global namespace, instead of exporting the function
properly.
For now, keep it compatible by explicitly keeping the bogus export.
Also fix a mistake in the manpage example.
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of the fallback. We must go to the trouble of
determining the first actual sw format.
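The lookup boils down to scanning the offered list for the first entry
without the hwaccel flag, using the real libavutil descriptor API
(sketch):

    #include <libavutil/pixdesc.h>

    static enum AVPixelFormat first_sw_format(const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[i];  // first actual software format
        }
        return AV_PIX_FMT_NONE;
    }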
This reduces log spam a bit while preempted.
The remaining message, "hardware accelerator failed to decode picture"
on every frame, cannot be prevented, because it's hardcoded in
libavcodec.
If gl_hwdec_driver.map_image fails, all textures will be set to 0. This
in turn makes pass_prepare_src_tex() skip generation of the texture
uniforms, which leads to a shader compilation error, as e.g. texture0 is
expected to exist and is accessed, but never defined.
Set the textures to an invalid non-0 ID instead. OpenGL can deal with
it.
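Roughly (sketch; the plane/field names are hypothetical):

    // On map failure, use a dummy non-0 name so pass_prepare_src_tex()
    // still emits the texture0 etc. uniforms; sampling a bogus texture
    // is harmless compared to a shader that fails to compile.
    for (int n = 0; n < 4; n++)
        planes[n].gl_texture = (GLuint)-1;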
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
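On the API user's side this mostly reduces to one call from
<libavcodec/vdpau.h> (sketch; device and get_proc_address come from the
existing VDPAU context):

    #include <libavcodec/vdpau.h>

    // Let libavcodec allocate and manage the VdpDecoder, including the
    // codec profile mapping that used to be done by hand.
    if (av_vdpau_bind_context(avctx, device, get_proc_address, 0) < 0)
        goto error;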
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
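I.e., something like this (sketch, assuming 16x16 macroblocks;
MP_ALIGN_UP is mpv's round-up macro):

    // Allocate surfaces for the aligned coded size, not the display size.
    int surface_w = MP_ALIGN_UP(params->w, 16);
    int surface_h = MP_ALIGN_UP(params->h, 16);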
Always configure the vdpau mixer based on the current surface sent to
it. Before this, we just hardcoded the chroma type, and the surface size
was essentially a guess.
Calling VdpVideoSurfaceGetParameters() on every surface is a bit
suspicious, but it appears it's a cheap function (just requiring some
locks and a table lookup). This way we avoid creating another
complicated mechanism to carry around the actual surface parameters
with a mp_image/AVFrame.
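As a sketch (the mixer fields and recreate_video_mixer() are
hypothetical; video_surface_get_parameters is the resolved
VdpVideoSurfaceGetParameters entry point):

    VdpChromaType chroma;
    uint32_t w, h;
    // Cheap per-surface query: some locks and a table lookup.
    if (vdp->video_surface_get_parameters(surface, &chroma, &w, &h)
            == VDP_STATUS_OK &&
        (chroma != mixer->chroma_type || w != mixer->w || h != mixer->h))
        recreate_video_mixer(mixer, chroma, w, h);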