The active texture and some pixelstore parameters are now always reset
to defaults when entering and leaving the renderer. Could be important
for libmpv.
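A minimal sketch of what such a reset amounts to (the helper name is made up,
the exact set of parameters mpv touches isn't listed here, and a desktop GL
1.3+ context is assumed):

    // Restore the GL defaults relevant to texture uploads/downloads.
    static void reset_gl_state(void)
    {
        glActiveTexture(GL_TEXTURE0);            // default texture unit
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);   // GL default values
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
        glPixelStorei(GL_PACK_ALIGNMENT, 4);
        glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    }

Presumably the libmpv relevance is that the renderer runs in a GL context
shared with the host application, which may depend on this state being intact.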
Commit 382bafcb changed the behavior for ab-loop-a. This commit changes
ab-loop-b so that the behavior is symmetric.
Adjust the OSD rendering according to these two changes.
Also update mentions of the "ab_loop" command to the now preferred
"ab-loop" name.
The check whether video is ready yet was done only in STATUS_FILLING.
But that code also switched the status to STATUS_READY, which means the
next time fill_audio_out_buffers() was called, audio would actually be
started before video.
In most situations, this bug didn't show up, because it was only
triggered if the demuxer didn't provide video packets quickly enough,
but did provide audio packets.
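A hedged sketch of the fixed control flow (STATUS_FILLING, STATUS_READY and
fill_audio_out_buffers() come from this text; everything else is an
illustrative name, not the actual code):

    enum status { STATUS_FILLING, STATUS_READY };   // simplified

    // The "is video ready yet?" check now applies in STATUS_READY too, so
    // a previous call that already advanced the status can no longer let
    // audio start before video on the next fill_audio_out_buffers() call.
    static bool may_start_audio(enum status audio_status, bool video_ready)
    {
        if (audio_status == STATUS_FILLING || audio_status == STATUS_READY)
            return video_ready;
        return false;
    }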
Also log when audio is started.
(I hate fill_audio_out_buffers(), why did I write it?)
Strictly schedule an update at regular intervals as long as either the
stream cache or the demuxer is prefetching. Don't update unconditionally
just because the stream cache is enabled ("idle != -1") or because
cache-related properties are observed (mp_client_event_is_registered()).
Also, the "idle" variable was awkward; get rid of it with equivalent
code.
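Roughly, the prefetch-driven scheduling described above amounts to this
(all names and the interval value are illustrative):

    #define UPDATE_INTERVAL_SECONDS 0.25   // illustrative value

    // Re-arm a timed update only while something is still prefetching,
    // instead of updating on every wakeup just because a cache exists or
    // a client observes cache-related properties.
    static double next_cache_update(double now, bool cache_prefetching,
                                    bool demuxer_prefetching)
    {
        if (cache_prefetching || demuxer_prefetching)
            return now + UPDATE_INTERVAL_SECONDS;
        return 0;   // no periodic update scheduled
    }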
Calculate the buffering percentage in the same code which determines
whether the player is or should be buffering. In particular, it can no
longer happen that the percentage and the buffering state are slightly
out of sync due to issuing a separate DEMUXER_CTRL_GET_READER_STATE
query and combining its result with the previously determined buffering
state.
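As a sketch, the point is that a single reader-state snapshot drives both
values (the call shape and struct/field names are assumptions; only
DEMUXER_CTRL_GET_READER_STATE appears in the text):

    // All names besides DEMUXER_CTRL_GET_READER_STATE are assumptions.
    static void update_cache_state(struct demuxer *demuxer,
                                   bool *out_buffering, int *out_percent)
    {
        struct reader_state st;   // hypothetical snapshot of the reader state
        demux_control(demuxer, DEMUXER_CTRL_GET_READER_STATE, &st);

        // Both the buffering decision and the reported percentage are
        // derived from the same snapshot, so they can't drift apart.
        *out_buffering = !st.eof && st.buffered_secs < st.min_secs;
        double frac = st.min_secs > 0 ? st.buffered_secs / st.min_secs : 1.0;
        *out_percent = frac >= 1.0 ? 100 : (int)(frac * 100);
    }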
Now it's also easier to guarantee that the buffering state is updated
properly.
Add some more verbose output as well.
(Damn I hate this code, why did I write it?)
demux_playlist.c recognizes if the source stream points to a directory,
and adds its directory entries. Until now, only 1 level was added.
During playback, further directory entries could be resolved as
directory paths were "played".
While this worked fine, it led to frequent user confusion, because
playlist resuming and other things didn't work as expected. So just
recursively scan everything.
I'm unsure whether it's a good fix, but at least it gets rid of the
complaints. (And probably will add others.)
Commit a9bd4535 generally changed how properties are set to string
values. This actually broke the fallback for non-string properties,
because the set-string action was redirected directly to the property,
instead of going through the generic handler and its fallback code.
As of ffmpeg git master, only the libavdevice decklink wrapper supports
this. Everything else has dropped support.
You're now supposed to use AV_CODEC_ID_WRAPPED_AVFRAME, which works a
bit differently. Normal AVFrames should still work for these encoders.
And remove the same thing from the client API code.
The command.c code has to deal with many specialized M_PROPERTY_SET_*
actions, and we bother with a subset only.
If a mpv_node wrapped a string, the behavior was different from calling
mpv_set_property() with MPV_FORMAT_STRING directly. Change this.
The original intention was to be strict about types if MPV_FORMAT_NODE
is used. But I think the result was less than ideal, and the same change
towards less strict behavior was made to mpv_set_option() ages ago.
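For illustration, with a client handle ctx, both of these calls now take the
same conversion path (the "speed" property is just an example):

    #include <mpv/client.h>

    void set_speed_both_ways(mpv_handle *ctx)
    {
        // Plain string format: parsed/converted like an option value.
        char *val = "1.5";
        mpv_set_property(ctx, "speed", MPV_FORMAT_STRING, &val);

        // An mpv_node wrapping a string used to be treated more strictly;
        // now it behaves the same as the call above.
        mpv_node node = {
            .format = MPV_FORMAT_STRING,
            .u = { .string = "1.5" },
        };
        mpv_set_property(ctx, "speed", MPV_FORMAT_NODE, &node);
    }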
Since what we're doing is a linear blend of the four colors, we can just
do it for free by using GPU sampling.
This requires significantly fewer texture fetches and calculations to
compute the final color, making it much more efficient. The code is also
much shorter and simpler.
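In other words (a plain C illustration, not shader or mpv code): with
fractional offsets fx, fy in [0, 1], the manual four-texel blend is exactly
what a single GL_LINEAR texture fetch at the corresponding coordinate
computes, so one fetch replaces four fetches plus the explicit weighting.

    // Bilinear blend of four neighboring texel values c00, c10, c01, c11.
    static float bilinear_blend(float c00, float c10, float c01, float c11,
                                float fx, float fy)
    {
        float top    = c00 * (1.0f - fx) + c10 * fx;
        float bottom = c01 * (1.0f - fx) + c11 * fx;
        return top * (1.0f - fy) + bottom * fy;
    }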
Now it will always be able to seek back to the start, even if the index
is sparse or misses the first entry.
This can be achieved by reusing the logic for incremental index
generation (for files with no index), and start time probing (for making
sure the first block is always indexed).
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of
sub-formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored, as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there was some additional dst_rect clamping code inside
src_dst_split_scaling that seemed neither necessary nor ever actually
triggered. (The code immediately above it already makes sure to crop
the video if it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was to make this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width/height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
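A sketch of what such a pre-transform can look like in normalized texture
coordinates (the struct and all names here are illustrative, not mpv's
actual GL code):

    struct transform2d {
        float m[2][2];   // linear part
        float t[2];      // offset
    };

    // Rotation by 90 degrees: (x, y) -> (1 - y, x)
    static const struct transform2d rotate_90 = {
        .m = {{0, -1}, {1, 0}},
        .t = {1, 0},
    };

    // Horizontal mirroring: (x, y) -> (1 - x, y)
    static const struct transform2d mirror_h = {
        .m = {{-1, 0}, {0, 1}},
        .t = {1, 0},
    };

    // Applied strictly before the normal img_tex.transform, so code that
    // manipulates that transform never has to know about it.
    static void apply(const struct transform2d *tr, float v[2])
    {
        float x = v[0], y = v[1];
        v[0] = tr->m[0][0] * x + tr->m[0][1] * y + tr->t[0];
        v[1] = tr->m[1][0] * x + tr->m[1][1] * y + tr->t[1];
    }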
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).