It seems like many GL implementations (including Mesa) choke on this,
while others are fine. We still think that this use of the GL API is
allowed by the standard (at least in the Mesa case), so to reduce
confusion, explicitly check the "controversial" calls, and use an
appropriate error message.
Like it's done for audio and video. Just to be uniform.
I'm sorry for deleting the anti-ffmpeg vitriol. It's still all true, but
since we decided to always set the timebase, the crappiness is isolated
to FFmpeg internals.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
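For illustration, a minimal sketch of such a guard (FFmpeg's micro
version is 100 or higher, which Libav's never is; the helper name is
made up):

    #include <libavcodec/avcodec.h>

    static void set_decoder_timebase(AVCodecContext *avctx, AVRational tb)
    {
    #if LIBAVCODEC_VERSION_MICRO >= 100
        // FFmpeg only: tells the decoder the unit of packet timestamps.
        avctx->pkt_timebase = tb;
    #else
        // Libav has no pkt_timebase field.
        (void)avctx;
        (void)tb;
    #endif
    }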
Normally, OSD can be disabled with --osd-level=0. But this also disables
terminal OSD, and some users want _only_ the terminal OSD. Add
--video-osd=no, which essentially disables the video OSD.
Ideally, it should probably be possible to control terminal and video
OSD levels independently, but that would require separate OSD timers
(and other state) for both components, so don't do it. But because the
current situation isn't too ideal, add a threat to the manpage that
this might be changed in the future.
Fixes #3387.
This is for text subtitles. libavformat currently always reads text
subtitles completely on init. This means the underlying stream is
useless and will consume resources for various reasons (network
connection, file handles, cache memory).
Take care of this by closing the underlying stream if we think the
demuxer has read everything. Since libavformat doesn't export whether it
did (or whether it may access the stream again in the future), we rely
on a whitelist. Also, instead of setting the stream to NULL or so, set
it to an empty dummy stream. This way we don't have to litter the code
with NULL checks.
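The dummy stream idea, as a rough sketch (struct layout and names are
illustrative, not mpv's actual stream API):

    #include <stdlib.h>

    struct stream {
        int (*fill_buffer)(struct stream *s, void *buf, int len);
    };

    // Read callback that always reports EOF.
    static int dummy_fill_buffer(struct stream *s, void *buf, int len)
    {
        return 0;
    }

    // Substitute this for the real stream instead of NULL, so callers
    // can keep calling stream functions unconditionally.
    static struct stream *open_empty_stream(void)
    {
        struct stream *s = calloc(1, sizeof(*s));
        s->fill_buffer = dummy_fill_buffer;
        return s;
    }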
demux_lavf.c needs extra changes, because it tries to do clever things
for the sake of subtitle charset conversion.
The main reason we keep the demuxer etc. open is because we fell for
libavformat being so generic, and we tried to remove corresponding
special-cases in the higher-level player code. Some of this is forced
by ass/srt demuxing in mkv/mp4 being very similar to demuxing external
text files. In the future it might be better to do this in a more
straight-forward way, such as reading text subtitles into libass and
then discarding the demuxer entirely, but for aforementioned reasons
this could be more of a mess than the solution introduced by this
commit.
Probably fixes #3456.
Cleaner and makes it easier to change the underlying stream.
mp_property_stream_capture() still accesses it directly via
demux_run_on_thread(). This is evil, but still somewhat sane and does
not get in the way here.
Not sure if I got all field accesses.
It's just wasted memory.
One corner case is when a file grows during playback, but this is rare
and usually happens only with on-disk files. The cache size was already
limited before this change, so there is no reason to care.
As an unrelated change, move the cache size info to the resize_cache()
function. There's really no reason not to do this, and it's slightly
more informative if the user changes the cache size at runtime.
Because VOCTRL_CHECK_EVENTS is processed asynchronously (as of 088a007),
the GUI thread no longer gets regular wakeups, so the old check that
made sure the video window matched the parent window's size in --wid
embedding mode did not run very often. This made --wid embedding not
very usable.
Instead of polling for window size changes, use Windows hooks to react
to them when they happen. When the parent window is owned by the same
process as the video window, use a WH_CALLWNDPROC hook. When the parent
window is not owned by the same process, WinEvents must be used, which
are not as smooth, but still work for this purpose.
Since neither SetWindowsHookEx nor SetWinEventHook take a context
parameter to send data to the hook function, the hook functions must
find the child window by its class instead, so there are a few changes
to ensure this is fast and the class is unique.
This also fixes up the logic to handle window destruction. When a parent
window is destroyed, its children are also destroyed, so this gives us a
way to react to parent window destruction without polling.
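In rough outline, the hook setup looks like this (a simplified sketch;
the "mpv-video" class name and the resize handling are illustrative):

    #include <windows.h>

    // Out-of-process parent: WinEvents work across process boundaries,
    // but arrive asynchronously.
    static void CALLBACK win_event_proc(HWINEVENTHOOK hook, DWORD event,
                                        HWND hwnd, LONG obj, LONG child,
                                        DWORD thread, DWORD time)
    {
        // Find the video window by its unique class and resize it to the
        // parent's client area (same logic as in the hook below).
    }

    // In-process parent: a WH_CALLWNDPROC hook sees the parent's
    // messages synchronously.
    static LRESULT CALLBACK call_wnd_proc(int code, WPARAM wp, LPARAM lp)
    {
        if (code == HC_ACTION) {
            CWPSTRUCT *cwp = (CWPSTRUCT *)lp;
            if (cwp->message == WM_SIZE) {
                // No context parameter: find the child by class name.
                HWND video = FindWindowExW(cwp->hwnd, NULL, L"mpv-video", NULL);
                if (video) {
                    RECT r;
                    GetClientRect(cwp->hwnd, &r);
                    SetWindowPos(video, NULL, 0, 0, r.right, r.bottom,
                                 SWP_NOZORDER | SWP_NOACTIVATE);
                }
            }
        }
        return CallNextHookEx(NULL, code, wp, lp);
    }

    static void install_parent_hook(HWND parent)
    {
        DWORD pid = 0;
        DWORD tid = GetWindowThreadProcessId(parent, &pid);
        if (pid == GetCurrentProcessId()) {
            SetWindowsHookExW(WH_CALLWNDPROC, call_wnd_proc, NULL, tid);
        } else {
            SetWinEventHook(EVENT_OBJECT_LOCATIONCHANGE,
                            EVENT_OBJECT_LOCATIONCHANGE, NULL,
                            win_event_proc, pid, tid, WINEVENT_OUTOFCONTEXT);
        }
    }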
If the video has the same size as the screen, starting with --fs and
then leaving fullscreen doesn't actually leave fullscreen.
The reason is that mpv tries to restore the previous window size if
necessary (otherwise, you'd end up with a window of nearly the same size
as the screen with some WMs). It will typically restore with the
rectangle set exactly to the screen if no other position or size is
forced. This triggers pre-EWMH fullscreen mode, which WMs detect using
various heuristics.
Apparently we triggered this with mutter (but strangely no other WMs).
It's possible that pre-EWMH fullscreen mode actually requires removing
decorations, and that mutter simply ignores this. But this is
speculation and I haven't checked.
Work around this by reducing the requested size by 1 pixel if it
happens.
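In effect, the workaround is just (a sketch with made-up names):

    // If the requested non-fullscreen window size exactly matches the
    // screen, shrink it by 1 pixel so the WM's pre-EWMH fullscreen
    // heuristic does not trigger.
    static int adjust_window_width(int w, int h, int screen_w, int screen_h,
                                   int fullscreen)
    {
        if (!fullscreen && w == screen_w && h == screen_h)
            return w - 1;
        return w;
    }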
This was observed with mutter 3.18.2.
Fixes #2072.
If spdif is enabled, the channel layout has no meaning other than
setting the number of channels. The number of channels must be fixed to
achieve the exact bitrate required.
Fixes #3445.
It doesn't necessarily have to mean anything bad.
We're still too lazy to provide any more detailed information (e.g.
whether this happened due to likely bad interleaving, an excessive
number of packets like with some ASS subs, or the readahead user option
being limited by the packet queue size).
This should actually be rather safe - we already check whether the
estimated value jitters less than the (possibly untrustworthy) nominal
one. Remove a "safety" check that disabled this code for small
deviations, and make it trigger earlier during playback. Also lower the log
level of messages about using the estimated display FPS down to verbose.
Normally there's another mechanism for smoothing out minor estimation
differences, but that is not good enough here.
This possibly improves behavior as reported in #3433, which can be
reproduced with --vo=null:fps=48.426 --display-fps=48 (though it doesn't
consider the jitter introduced by a real VO).
Doing this required synchronizing with the VO thread, which could lead
to audio dropouts if the VO was frozen (which can happen in practice if
e.g. an opengl_cb user is not doing what the API demands).
Add a way to send asynchronous VOCTRLs, and use that for the playback
state. In theory, it would be better to make this status update a
separate function and to "merge" several queued updates, but that would be
slightly more effort/code, and the update is so infrequent that the
merging would never happen anyway.
The change to vo_destroy() is to make sure all queued asynchronous
requests are finished before making the VO thread exit.
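As a generic sketch of the pattern (not mpv's actual implementation;
the names are made up):

    #include <pthread.h>
    #include <stdlib.h>

    struct async_req {
        int request;
        void *data;
        struct async_req *next;
    };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;
    static struct async_req *head, *tail;
    static int num_pending;

    // Playloop side: queue a request without blocking on the VO thread.
    void vo_request_async(int request, void *data)
    {
        struct async_req *r = calloc(1, sizeof(*r));
        r->request = request;
        r->data = data;
        pthread_mutex_lock(&lock);
        if (tail)
            tail->next = r;
        else
            head = r;
        tail = r;
        num_pending++;
        pthread_cond_broadcast(&wakeup);
        pthread_mutex_unlock(&lock);
    }

    // VO thread side: run queued requests, then signal completion.
    void vo_process_async(void)
    {
        pthread_mutex_lock(&lock);
        while (head) {
            struct async_req *r = head;
            head = r->next;
            if (!head)
                tail = NULL;
            pthread_mutex_unlock(&lock);
            // ... handle r->request / r->data here ...
            free(r);
            pthread_mutex_lock(&lock);
            num_pending--;
            pthread_cond_broadcast(&wakeup);
        }
        pthread_mutex_unlock(&lock);
    }

    // vo_destroy() side: wait until all queued requests are done before
    // letting the VO thread exit.
    void vo_wait_async_done(void)
    {
        pthread_mutex_lock(&lock);
        while (num_pending > 0)
            pthread_cond_wait(&wakeup, &lock);
        pthread_mutex_unlock(&lock);
    }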
Even though it's only used on MS Windows, it's run on any platform with
any VO, which makes this worse.
run_control() dereferences a uint32_t as int. Whether this is allowed
depends on what uint32_t is typedefed to (dereferencing an unsigned int
as int should be fine). Fix it by always using int. The uint32_t type
never really made sense.
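To illustrate (a contrived example, not the actual run_control() code):

    #include <stdint.h>

    // Dereferencing a uint32_t object through an int * is only safe if
    // uint32_t is a typedef for unsigned int (signed/unsigned variants
    // of the same type may alias). If uint32_t were, say, a typedef for
    // unsigned long, this would be undefined behavior.
    static int get_request(void *data)
    {
        return *(int *)data;
    }

    // Fix: store and pass an actual int, so no aliasing question arises.
    static int get_request_fixed(int *data)
    {
        return *data;
    }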
There are situations where the resampler is destroyed and recreated
during playback. If recreating the resampler unexpectedly fails, the
filter function is supposed to return an error. This wasn't done
correctly, because get_out_samples() accessed the resampler before the
check. Move the check up to fix this.
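Schematically (hypothetical names modeled on the description above):

    struct priv { void *avrctx; };  // NULL if recreation failed

    static int get_out_samples(struct priv *p, int in_samples)
    {
        // (stub) in reality this queries the resampler, so p->avrctx
        // must be valid here
        return in_samples;
    }

    static int filter_frame(struct priv *p, int in_samples)
    {
        // The check must come first: if recreating the resampler
        // failed, get_out_samples() must not touch it.
        if (!p->avrctx)
            return -1;
        return get_out_samples(p, in_samples);
    }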
When fetching the playlist property, playlist_entry_from_index would be
called for each playlist entry, which traversed a linked list to get the
entry corresponding to the specified index. This was very slow for large
playlists. Since get_playlist_entry is called for each index in order,
it can avoid a full traversal of the linked list by using the next
pointer on the previously requested entry.
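The optimization, sketched (not the actual mpv code):

    struct playlist_entry {
        struct playlist_entry *next;
        // ... entry data ...
    };

    struct playlist {
        struct playlist_entry *first;
        // Cache for sequential access: last entry returned + its index.
        struct playlist_entry *cached;
        int cached_index;
    };

    static struct playlist_entry *entry_from_index(struct playlist *pl,
                                                   int index)
    {
        struct playlist_entry *e = pl->first;
        int n = 0;
        if (pl->cached && index >= pl->cached_index) {
            // Resume from the cached position: O(1) for in-order access.
            e = pl->cached;
            n = pl->cached_index;
        }
        for (; e && n < index; n++)
            e = e->next;
        pl->cached = e;
        pl->cached_index = index;
        return e;
    }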
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
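A sketch of the adjustment (the 1 ms threshold is made up for
illustration):

    #include <limits.h>
    #include <libavutil/rational.h>

    // If one tick of the timebase is coarser than the threshold, refine
    // it so small timestamp adjustments survive the conversion from
    // seconds to integer ticks.
    static AVRational refine_timebase(AVRational tb)
    {
        while (tb.num > 0 && tb.den < INT_MAX / 10 && av_q2d(tb) > 0.001)
            tb.den *= 10;
        return tb;
    }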
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
This affects audio during A-B loops and --loop-file. Instead of dropping
audio by resetting the AO, try to make it seamless by not sending data
after the loop point, and after the seek send new data without a reset.
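Conceptually, the first half amounts to clipping decoded audio at the
loop end point (a sketch with made-up names):

    // Return how many of the buffer's samples lie before end_pts; the
    // rest must not be sent to the AO.
    static int samples_before_end(double pts, int samples, int samplerate,
                                  double end_pts)
    {
        double n = (end_pts - pts) * samplerate;
        if (n <= 0)
            return 0;            // entirely past the loop point
        if (n < samples)
            return (int)n;       // keep only the part before the loop
        return samples;          // entirely before the loop point
    }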
The code actually kept going out of EOF mode into resync mode back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.