This exposes whether a video track is detected as an image. This is
useful for profile conditions, property expansion and lavfi-complex, and
is more accurate than any detection a Lua script could perform, since
scripts can't differentiate between images and videos that lack
container-fps and audio and have a duration of 1 (the duration set by
the mf demuxer with the default --mf-fps=1).
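As a small usage illustration (not part of the change itself): reading the
new flag from a client, assuming it is exposed as the "image" sub-entry of
track-list entries (and thus also reachable via current-tracks):

    #include <mpv/client.h>

    // Returns 1 if the currently selected video track was detected as an
    // image, 0 otherwise (or if there is no video track / no such property).
    static int current_video_is_image(mpv_handle *mpv)
    {
        int flag = 0;
        if (mpv_get_property(mpv, "current-tracks/video/image",
                             MPV_FORMAT_FLAG, &flag) < 0)
            return 0;
        return flag;
    }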
The lavf demuxer image check is moved to where the number of frames is
available for comparison, and is modified to check the number of frames
and duration instead of the video codec. This avoids misdetecting
videos in a codec commonly used for images (e.g. mjpeg) as images, and
can detect images in a codec commonly used for videos (e.g. 1-frame
gifs). Alias PIX files are also now detected as images; before they
weren't, since the condition checked whether the AVInputFormat name
ends with _pipe, and alias_pix doesn't.
Both nb_frames and codec_info_nb_frames are checked because nb_frames is
0 for some video codecs (hevc, av1, vc1, mpeg1video, vp9 if forcing
--demuxer=lavf), and codec_info_nb_frames is 1 for others (mpeg, mpeg4,
wmv3).
The duration is checked as well because for some uncommon codecs and
containers found in FFmpeg's FATE suite, libavformat returns nb_frames =
0 and codec_info_nb_frames = 1. For some of them it even returns
duration = 0, so they are blacklisted in order to never be considered
images.
The extra codecs that would have to be blacklisted without checking the
duration are AV_CODEC_ID_4XM, AV_CODEC_ID_BINKVIDEO,
AV_CODEC_ID_DSICINVIDEO, AV_CODEC_ID_ESCAPE130, AV_CODEC_ID_MMVIDEO,
AV_CODEC_ID_NUV, AV_CODEC_ID_RL2, AV_CODEC_ID_SMACKVIDEO and
AV_CODEC_ID_XAN_WC3, while the containers are film-cpk, ivf and ogg.
The lower limit for duration is 10 because that's the duration of
1-frame gifs.
Streams with codec_info_nb_frames 0 are not considered images because
vp9 and av1 have nb_frames = 0 and codec_info_nb_frames = 0, and we
can't rely on the duration alone to detect them, because they could be
livestreams without an initial duration; and even if we could, for
these codecs libavformat returns huge negative durations like
-9223372036854775808.
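Putting the above together, the resulting check is roughly the following
sketch (names and the exact field access are illustrative, not copied from
the code; "blacklisted" stands for the codec/container blacklist mentioned
above):

    #include <stdbool.h>
    #include <stdint.h>

    static bool looks_like_image(int64_t nb_frames, int codec_info_nb_frames,
                                 int64_t duration, bool attached_picture,
                                 bool blacklisted)
    {
        if (attached_picture)
            return true;                     // always an image (see below)
        if (blacklisted)
            return false;                    // bogus nb_frames/duration values
        return nb_frames <= 1 &&             // 0 for hevc/av1/vc1/..., 1 otherwise
               codec_info_nb_frames == 1 &&  // 0 would match vp9/av1 live streams
               duration <= 10;               // 1-frame gifs have duration 10
    }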
Some more images in the FATE suite that are really frames cut from a
video in an uncommon codec and container, like cine/bayer_gbrg8.cine,
could be detected by allowing codec_info_nb_frames = 0, but then any
present and future video codec with nb_frames = 0 and
codec_info_nb_frames = 0 would need to be added to the blacklist. Some
even have duration > 10, so to detect these images the duration check
would have to be removed, and all the previously mentioned extra codecs
and containers would have to be added to the blacklists, which means
that images using them (if they exist anywhere) would never be
detected. These FATE images aren't detected as such by mediainfo
either, nor can a Lua script reliably detect them as images, since they
have container-fps and a duration > 0 and != 1, and you will probably
never see files like them anywhere else.
For attached pictures the lavf demuxer always sets image to true, which
is necessary because they have duration > 10. There is a minor change
in behavior: audio with attached pictures now reports mf-fps as
container-fps instead of it being unavailable, but this makes it consistent with
external cover art, which was already being assigned mf-fps.
When the lavf demuxer fails, the mf one guesses if the file is an image
by its extension, so sh->image is set to true when the mf demuxer
succeeds and there's only one file.
Even if you add a video's file type to --mf-type and open it with the mf
protocol, only the first frame is used, so setting image to true is
still accurate.
When converting an image to the extensions listed in demux/demux_mf.c,
tga and pam files are currently the only ones detected by the mf demuxer
rather than lavf. Actually they are detected with the image2 format, but
it is blacklisted; see d0fee0ac33.
The mkv demuxer just sets image to true for any attached picture.
The timeline demuxer just copies the value of image from source to
destination. This sets image to true for attached pictures, standalone
images and images added with !new_stream in EDL playlists, but it is
imperfect since you could concatenate multiple images in an EDL playlist
(which should be done with the mf demuxer anyway). This is good enough,
since the comment of the modified function already says it is
"Imperfect and arbitrary".
This allows configuring which options are saved by quit-watch-later.
Fixes #4126, #4641 and #5567.
Toggling a video or audio filter twice would treat the option as changed
because the backup value is NULL, and the current value of vf/af is a
list with one empty item, so obj_settings_list_equal had to be changed.
Now loading cover art through mp_add_external_file requires an
additional argument to be set to true. This way not all video-add
commands end up being marked as cover art when they move through
mp_add_external_file, as originally changed in 55d7f9ded1.
Additionally, this lets us clean up some logic that would otherwise be
duplicated between open_external_files and autoload_external_files, if
the logic had been kept split from mp_add_external_file.
Fixes#8358
This introduces the delete-watch-later-config command, to complement
write-watch-later-config. This is an alternative to #8141.
The general problem that this change is attempting to help solve has
been described in #336, #3169 and #6574. Though persistent playback
position of a single file is generally a solved problem, this is not
the case for playlists, as described in #8138.
The motivation is facilitating intermittent playback of very large
playlists, consisting of hundreds of entries each many hours
long. Though the current "watch later" mechanism works well - provided
that the files each occur only once in that playlist, and are played
only via that playlist - the biggest issue is that the position is
lost completely should mpv exit uncleanly (e.g. due to a power
failure). Existing workarounds (in the form of Lua scripts which call
write-watch-later-config periodically) fail in the playlist case, due
to the mechanism used by mpv to determine where within a playlist to
resume playback from.
The missing puzzle piece needed to allow scripts to implement a
complete solution to this problem is simply a way to clean up the
watch-later configuration that the script asked mpv to write using
write-watch-later-config. With that in place, scripts can then
register an end-file event listener, check the stop playback reason,
and in the "eof" and "stop" case, invoke delete-watch-later-config to
delete any saved positions written by write-watch-later-config. The
script can then proceed to immediately write a new one when the next
file is loaded, which altogether allows mpv to resume from the correct
playlist and file position upon next startup.
Because events are delivered and executed asynchronously,
delete-watch-later-config takes an optional filename argument, to
allow scripts to clear watch-later configuration for files after mpv
had already moved on from playing them and proceeded to another file.
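For illustration, a minimal sketch of that flow using the libmpv client API
(the Lua script linked below does the equivalent via the scripting bindings;
error handling is omitted):

    #include <mpv/client.h>

    static void watch_later_loop(mpv_handle *mpv)
    {
        while (1) {
            mpv_event *ev = mpv_wait_event(mpv, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            if (ev->event_id == MPV_EVENT_END_FILE) {
                mpv_event_end_file *ef = ev->data;
                if (ef->reason == MPV_END_FILE_REASON_EOF ||
                    ef->reason == MPV_END_FILE_REASON_STOP) {
                    // Remove the state saved earlier by write-watch-later-config.
                    // A filename could be passed as an extra argument instead.
                    const char *cmd[] = {"delete-watch-later-config", NULL};
                    mpv_command(mpv, cmd);
                }
            } else if (ev->event_id == MPV_EVENT_FILE_LOADED) {
                // Immediately record the new playlist/file position.
                const char *cmd[] = {"write-watch-later-config", NULL};
                mpv_command(mpv, cmd);
            }
        }
    }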
A Lua script which makes use of this change can be found here:
https://gist.github.com/CyberShadow/2f71a97fb85ed42146f6d9f522bc34ef
(A modification of the one written by @Hakkin, in that this one takes
advantage of the new command, and also saves the state immediately
when a new file is loaded.)
Since this is a messy and fragile mechanism, I want it logged (even if
it's somewhat in conflict with the verbose logging policy). On the other
hand, it's unconditionally logged on every playloop iteration. So add
some nonsense to log it only on progress.
This replaces the two buffers (ao_chain.ao_buffer in the core, and
buffer_state.buffers in the AO) with a single queue. Instead of having a
byte based buffer, the queue is simply a list of audio frames, as output
by the decoder. This should make dataflow simpler and reduce copying.
It also attempts to simplify fill_audio_out_buffers(), the function I
always hated most, because it's full of subtle and buggy logic.
Unfortunately, I got assaulted by corner cases, dumb features (attempt
at seamless looping, really?), and other crap, so it got pretty
complicated again. fill_audio_out_buffers() is still full of subtle and
buggy logic. Maybe it got worse. On the other hand, maybe there really
is some progress. Who knows.
Originally, the data flow parts were meant to be in f_output_chain, but
due to tricky interactions with the playloop code, it's now in the dummy
filter in audio.c.
At least this improves the way the audio PTS is passed to the encoder in
encoding mode. Now it attempts to pass frames directly, along with the
pts, which should minimize timestamp problems. But to be honest, encoder
mode is one big kludge that shouldn't exist in this way.
This commit should be considered pre-alpha code. There are lots of bugs
still hiding.
This is taken from a somewhat older proof-of-concept script. The basic
idea, and most of the implementation, is still the same. The way the
profiles are actually defined changed.
I still feel bad about this being a Lua script, and running user
expressions as Lua code in a vaguely defined environment, but I guess as
far as balance of effort/maintenance/results goes, this is fine.
It's a bit bloated (the Lua scripting state is at least 150KB or so in
total), so in order to enable this by default, I decided it should
unload itself by default if no auto-profiles are used. (And currently,
it does not actually rescan the profile list if a new config file is
loaded some time later, so the script would do nothing anyway if no auto
profiles were defined.)
This still requires defining inverse profiles for "unapplying" a
profile. Also this is still somewhat racy. Both will probably be
alleviated to some degree in the future.
This simply printf()s a concatenation of the provided string and the
relevant escape sequences. No idea what exactly defines this escape
sequence (is it just an xterm thing that is now supported relatively
widely?); this just uses the information provided in the linked github
issue.
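For reference, this appears to be the xterm OSC "set window title" sequence;
whether exactly this variant is emitted is an assumption here:

    #include <stdio.h>

    // ESC ] 0 ; <text> BEL sets the icon name and window title in
    // xterm-compatible terminals (OSC 2 would set the title only).
    static void set_term_title(const char *text)
    {
        printf("\033]0;%s\007", text);
        fflush(stdout);
    }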
Not much of an advantage over --term-status-msg, though at least this
can have a lower update frequency. Also I may consider setting a default
value, and then it shouldn't conflict with the status message.
Fixes: #1725
Apparently, this was a bit of a mess, which caused the bug fixed by
commit ec7f2388af. Try to improve this, and only use track selection
entries that exist.
Scripts such as the OSC can be loaded and unloaded at runtime by
toggling the option that enables them. (It even works, although normally
it's only used to control initial loading.)
Unloading was racy because it used the client name; fix this.
The load-script change is an accidental feature. And probably useless.
If the user manages to run a "loadfile x append" command before the loop
in mp_play_files() is entered, then the player could start playing
these. This isn't expected, because appending files to the playlist in
idle mode does not normally start playback. It could happen because
there is a short time window where commands are processed before the
loop is entered (such as running the command when a script is loaded).
The idle mode semantics are pretty weird: if files were provided in
advance (on the command line), then these should be played immediately.
But if idle mode was already entered, and something is appended to the
playlist using "append", i.e. without explicitly triggering playback,
then it should remain in idle mode.
Try to follow this by redefining PT_STOP to strictly mean idle mode.
Remove the playlist->current check from idle_loop(), since only the
stop_play field counts now (cf. what mp_set_playlist_entry() does).
This actually introduces the possibility that playlist->current, and
with it playlist-pos, are set to something, even though playback is not
active or being started. Previously, this was only possible during state
transitions, such as when changing playlist entries.
Very annoyingly, this means the current way MPV_EVENT_IDLE was sent
doesn't work anymore. Logically, idle mode can be "active" even if
idle_loop() was not entered yet (between the time after mp_initialize()
and before the loop in mp_play_files()). Instead of worrying about this,
redo the "idle-active" property, and deprecate the event.
See: #7543
When the demuxer cache had read until the end of the stream, and was
finished and completely inactive, the cache properties were no longer
updated via MP_EVENT_CACHE_UPDATE.
Unfortunately, many cache properties depend on the current playback
position, such as cache-duration or fw-bytes. This is especially visible
on the OSC. If everything was cached, seeking around didn't update the
displayed forward cache duration.
That means checking demuxer_reader_state.idle is not enough. You also
need to check whether the current playback position changed.
Fix this by explicitly using the current playback position, and update
the properties if it changed "enough". "Enough" is 1 second of media
time in this example, which may or may not be appropriate.
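The added check boils down to something like this sketch (names are
illustrative; the real code lives in the playloop):

    #include <math.h>
    #include <stdbool.h>

    // Refresh cache properties only if the playback position moved by at
    // least 1 second of media time since the last refresh.
    static bool cache_update_due(double playback_pts, double *last_update_pts)
    {
        if (fabs(playback_pts - *last_update_pts) < 1.0)
            return false;
        *last_update_pts = playback_pts;
        return true;   // caller re-sends MP_EVENT_CACHE_UPDATE
    }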
In general, this could probably be done better. There are many other
triggers that change the cache state and that are not covered. For now
I'm content with getting rid of the obvious problems.
I think the OSC problem in particular was caused by changing it from
polling to using property change notifications.
This was a hack that attempted to line up external audio tracks with
video. The problem is that if you do a keyframe seek backwards, video
will usually seek much farther back than audio (due to much higher
keyframe aka seek point distances). The hack somehow made seeking a 2
step process.
This existed in 4 different forms in the history of this code base, and
it was always very cumbersome. We mostly needed this for ytdl_hook (I
think?), which uses the 4th form, which is nicely confined to
demux_timeline and is unrelated to the "external" audio tracks in the
high level player.
Since this is (probably) not really widely needed anymore, get rid of
it. Better do this now than when somehow rewriting all the seeking code
(which might happen in this decade or the next or so), when it
wouldn't be easily revertible anymore in case we find we "really" need
it after all.
There is no issue if hr-seeks are used. Also, you can still use edl
files to "bundle" multiple streams as if it was a single stream (this is
what ytdl_hook does now).
Try to deal with various corner cases. But when I fix one thing, another
thing breaks. (And it's 50/50 whether I find the breakage immediately or
a few months later.) So results may vary.
The default for --hr-seek is changed to "default" (not creative enough to
find a better name). In this mode, audio seeking is exact if there is no
video, or if the video has only a single frame. This change is actually
pretty dumb, since audio frames are usually small enough that exact
seeking does not really add much. But it gets rid of some weird special
cases.
Internally, the most important change is that is_coverart and is_sparse
handling is merged. is_sparse was originally just a special case for
weird .ts streams that have the corresponding low-level flag set. The
idea is that they're pretty similar anyway, so this would reduce the
number of corner cases. But I'm not sure if this doesn't break the
original intended use case for it (I don't have a sample anyway).
This changes last-frame handling, and respects the duration of the last
frame only if audio is disabled. This is mostly "coincidental" due to
the need to make seeking past EOF trigger player exit, and is caused by
setting STATUS_EOF early. On the other hand, this might have been this
way before (see removed chunk close to it).
Hr-seek past the last frame instantly enters EOF, which means
handle_playback_time() will not set playback_pts to the video PTS (as
all video frames are skipped), which leads to the playback time being
taken from the last seek target. This results in confusing behavior,
especially since the seek time will be clipped to the file duration for
display, but not for further relative seeks.
Obviously, the time should be set to the last video frame, so use the
last video frame as fallback if both audio and video have ended. Also,
since the same problem exists with audio-only playback, add a fallback
for audio PTS too. We don't know which was the "last" fragment of media
played (to decide whether to use the audio or video PTS as the
fallback), but it doesn't matter, since taking the maximum works.
This could lead to some undesired effects. In particular the audio PTS
is basically a bad guess, and is for example not clipped against --end.
(But given the ridiculous way audio syncing and clamping currently work,
I'm not going to touch that shit unless I rewrite it completely.) The cover
art case is slightly broken: using --keep-open with keyframe seeks will
result in 0 as playback PTS (the video PTS). OK, who cares, it got late.
Also casually get rid of last_vo_pts, since that barely made any sense
at all.
Fixes: #7487
This is just a more convenient way to start IPC client scripts per mpv
instance.
Does not work on Windows, although it could if the subprocess and IPC
parts are implemented (and I guess .exe/.bat suffixes are required).
Also untested whether it builds on Windows. A lot of other things are
untested too, so don't complain.
The intention is to provide a slightly nicer way to distribute scripts.
For example, you could put multiple source files into the directory, and
then import them from the actual script file (this is still
unimplemented).
At first I wanted to require a config file (because you need to know at
least which scripting backend it should use). This wouldn't have been
too hard (could have reused/abused the mpv config file parsing
mechanism, and I already had working code that was just 2 function
calls). But probably better to do this without new config files, because
it might become a pain in the distant future.
So this just probes for "main.lua", "main.js", etc., until an existing
file is found.
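Roughly like this sketch (helper names and the plain POSIX stat() call are
simplifications; the real code uses mpv's own path helpers):

    #include <stdio.h>
    #include <sys/stat.h>

    // Return the first existing entry point in the script directory, which
    // also decides the scripting backend, or NULL if none is found.
    static const char *find_script_entrypoint(const char *dir)
    {
        static const char *const candidates[] = {"main.lua", "main.js", NULL};
        for (int n = 0; candidates[n]; n++) {
            char path[4096];
            snprintf(path, sizeof(path), "%s/%s", dir, candidates[n]);
            struct stat st;
            if (stat(path, &st) == 0)
                return candidates[n];
        }
        return NULL;
    }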
Another important change is that this skips all directory entries whose
name starts with ".". This automatically excludes the "." and ".."
special directories, and is probably useful to exclude random crap that
might be lying around in the directory (such as editor temporary files,
or OSX, in its usual harmful, annoying, and idiotic modus operandi,
sharting all over any directories opened by "Finder").
Although the changelog mentions the docs, they're added only in a later
commit.
It's ridiculous that --script=something.dumb does not cause an error.
Make it error, and extend this behavior to the scripts/ sub-dir in the
mpv config dir.
The VO underrun detection (just a weak heuristic) added in commit f26dfb
flagged the underrun state every time it was checked, and since the
check happened in every playloop iteration, this caused the playloop to
wake up itself on every iteration. It burned an entire core while in
this state.
Fix this by flagging this condition only once (as it should be), and
requiring that a frame is displayed to trigger it again. This makes it
work similarly to the audio underrun check.
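The fixed logic amounts to this pattern (names are illustrative):

    #include <stdbool.h>

    struct underrun_state {
        bool signaled;
    };

    // Report (and wake up the playloop) only once per underrun period.
    static bool report_underrun(struct underrun_state *st, bool underrun_now)
    {
        if (underrun_now && !st->signaled) {
            st->signaled = true;
            return true;
        }
        return false;
    }

    // Called when a frame was actually displayed: re-arm the check.
    static void frame_displayed(struct underrun_state *st)
    {
        st->signaled = false;
    }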
The bug report referenced below says --demuxer-thread=no avoided this.
This is because the demuxer layer doesn't do proper underrun reporting
if the reader thread is disabled.
Fixes: #7259
This is preparation to get rid of the option-to-property bridge
(mp_on_set_option). This is a pretty insane thing that redirects
accesses to options to properties. It was needed in the ever ongoing
transition from something to... something else.
A good example for the need of this bridge is applying profiles at
runtime. This obviously goes through the config parser, but should also
make all changes effective, for which traditionally the property layer
is used.
There isn't much left that needs this bridge. This commit changes a
bunch of options (which also have a property implementation) to use
option change notifications instead. Many of the properties are still
left, but perform unrelated functions like OSD formatting.
This should be mostly compatible. There may be some subtle behavior
changes. For example, "hwdec" and "record-file" do not check for changes
anymore before applying them, so writing the current value to them
suddenly does something, while it was ignored before.
DVB changes untested, but should work.
On an audio/video desync by more than 0.5 seconds, display-sync mode was
disabled, and not enabled again (until playback restart, e.g. a seek).
The idea was that this only happens when this playback mode is broken
and can't perform well anyway (A/V desync is a clear indication that
something is very wrong). Instead of behaving like a god damn POS, it
should revert to the more robust audio-sync mode.
Unfortunately, this could happen sporadically due to temporary system
performance problems, such as toggling fullscreen. Users didn't like
this, and asked for a function to disable it, or to recover in some
other way.
This mechanism is questionable anyway. If an ignorant user enables
display-sync, and encounters problems with it (without being able to
determine that display-sync is messing up), the player will still behave
like a POS on every playback, and even after every seek. It might
actually be helpful to fail more consistently. Also, I've found that
it's still relatively reliable anyway even without this mechanism.
So just remove the fallback.
Fixes: #7048
The --cache-pause feature (enabled by default) will pause playback for a
while if network runs out of data. If this is not done, then playback
will go on frame-wise (as packets are slowly read from the network and
then instantly decoded and displayed). This feature is actually useless,
as you won't get nice playback no matter what if network is too slow,
but I guess I still prefer this behavior for some reason.
This commit changes this behavior from using the demuxer cache state
only, to trying to use underrun information from the AO/VO. This means
if you have a very large audio buffer, then cache-pausing will trigger
once that buffer is depleted, which will be some time _after_ the
demuxer cache has run out.
This requires explicit support from the AO. Otherwise, the behavior
should be mostly the same as before this commit.
This does not care about the AO buffer. In theory, the AO may underrun,
then the player will write some data to the AO buffer, then the AO will
recover and play this bit of data, then the player will probably trigger
the cache-pause behavior. The probability of this happening should be
pretty low, so I will hold off fixing this until the next refactor of
the AO chain (if ever).
The VO underflow detection was devised and tested in 5 minutes, and may
not be correct. At least I'm fairly sure that the combination of all the
factors should make incorrect behavior relatively unlikely, but problems
are possible.
Also, the demux_reader_state.underrun field may be inaccurate. It's only
the present state at the time demux_get_reader_state() was called, and
may exclude past underruns. In theory, this could cause "close" cases to
be missed. Then you might get an audio underrun without cache-pausing
acting on it. If the stars align, this could happen multiple times in
a row, effectively making this feature not work.
The most user-visible consequence of this change is that the user
will now see an AO underrun warning every time the cache runs out.
Maybe this cache-pause feature should just be removed...
demux_start_prefetch() was called unconditionally in two cases. This is
completely wrong. I'm not sure what part of my brain died off that
something this obviously wrong went in.
The prefetch case is a bit more complicated. It's a different thread, so
you can't just access mpctx->opts there. So add an explicit field
for this, which is the simplest way to get this done. (Even if it's bad
factoring.)
Fixes: c1f1a0845e
Fixes: 556e204a11
That's right, and it's probably not the end of it. I'll just claim that
I have no idea how to create a proper user interface for this, so I'm
creating multiple partially orthogonal mechanisms, some of which may
work better in their particular use cases.
Until now, there was --record-file. You get relatively good control
about what is muxed, and it can use the cache. But it sucks that it's
bound to playback. If you pause while it's set, muxing stops. If you
seek while it's set, the output will be sort-of trashed, and that's by
design.
Then --stream-record was added. This is a bit better (especially for
live streams), but you can't really control well when muxing stops or
ends. In particular, it can't use the cache (it just dumps whatever the
underlying demuxer returns).
Today, the idea is that the user should just be able to select a time
range to dump to a file, and it should not be affected by the user seeking
around in the cache. In addition, the stream may still be running, so
there's some need to continue dumping, even if it's redundant to
--stream-record.
One notable thing is that it uses the async command shit. Not sure
whether this is a good idea. Maybe not, but whatever. Also, a user can
always use the "async" prefix to pretend it doesn't.
Much of this was barely tested (especially the reinterleaving crap),
let's just hope it mostly works. I'm sure you can tolerate the one or
other crash?
Obviously should seek back to the end of the file when it loops.
Also remove some minor code duplication around start times. This isn't
the correct solution by the way. Rather than hoping we know a reasonable
start/end time, this stuff should instruct the demuxer to seek to the
exact location. It'll work with 99% of all normal files, but add an
appropriate comment (that basically says the function is bullshit) to
get_start_time() anyway.
This changes the behavior of the --ab-loop-a/b options. In addition, it
makes it work with backward playback mode.
The most obvious change is that both the A and B points need to be
set now before any looping happens. Unlike before, unset points don't
implicitly use the start or end of the file. I think the old behavior
was a feature that was explicitly added/wanted. Well, it's gone now.
This is because of 2 reasons:
1. I never liked this feature, and it always got in my way (as user).
2. It's inherently annoying with backward playback mode.
In backward playback mode, the user wants to set A/B in the wrong order.
The ab-loop command will first set A, then B, so if you use this command
during backward playback, A will be set to a higher timestamp than B.
If you switch back to forward playback mode, the loop would stop
working. I want the loop to just continue to work, and the chosen
solution conflicts with the removed feature.
The order issue above _could_ be fixed by also switching the AB-loop
user option values around on direction switch. But there are no other
instances of option changes magically affecting other options, and doing
this would probably lead to unexpected misery (dying from corner cases
and such).
Another solution is sorting the A/B points by timestamps after copying
them from the user options. Then A/B options set in backward mode will
work in forward mode. This is the chosen solution. If you sort the
points, you don't know anymore whether the unset point is supposed to
signify the end or the start of the file.
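As a sketch of the chosen solution (names are illustrative, and this
assumes both points are set, since unset points now simply disable the
loop as described above):

    // Copy the A/B points from the user options, then order them by
    // timestamp so a pair set during backward playback keeps working
    // when switching back to forward playback.
    static void get_ab_loop_points(const double opt_ab[2], double out_ab[2])
    {
        out_ab[0] = opt_ab[0];
        out_ab[1] = opt_ab[1];
        if (out_ab[0] > out_ab[1]) {
            double tmp = out_ab[0];
            out_ab[0] = out_ab[1];
            out_ab[1] = tmp;
        }
    }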
The AB-loop code is slightly better abstracted now, so it should be easy
to restore the removed feature. It would still require coming up with a
solution for backwards playback, though.
A minor change is that if one point is set and the other is unset, I'm
rendering both the chapter markers and the marker for the set point.
Why? I don't know. My test file had chapters, and I guess I decided this
looked better.
This commit also fixes some subtle and obvious issues that I already
forgot about when I wrote this commit message. It cleans up some minor
code duplication and nonsense too.
Regarding backward playback, the code uses an unsanitary mix of internal
("transformed") and user timestamps. So the play_dir variable appears
more than usual.
To mention one unfixed issue: if you set an AB-loop that is completely
past the end of the file, it will get stuck in an infinite seeking loop
once playback reaches the end of the file. Fixing this reliably seemed
annoying, so the fix is "just don't do this". It's not a hard freeze
anyway.
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a buggy mess
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single one of them to "reverse" them.
E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or if you, as it's implemented now, just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then in the normal timing/renderer code:

    bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
The player fully restarts playback when the edition or disk title is
changed. Before this, the player tried to reinitialize playback
partially. For example, it did not print a new "Playing: <file>"
message, and did not send playback end to libmpv users (scripts or
applications).
This playback restart code was a bit messy and could have unforeseen
interactions with various state. There have been bugs before. Since it's
a mostly cosmetic thing for an obscure feature, just change it to a full
restart. This works well, though since it may have consequences for
scripts or client API users, mention it in interface-changes.rst.
Before this, mpctx->playing was often used to determine whether certain
new state could be added to the playback state. In particular this
affected external files (which added tracks and demuxers). The variable
was checked to prevent that they were added before the corresponding
uninit code. We want to make a small part of uninit asynchronous, but
mpctx->playing needs to stay in the place where it is. It can't be used
for this purpose anymore.
Use mpctx->stop_play instead. Make it never have the value 0 outside of
loading/playback. On unloading, it obviously has to be non-0.
Change some other code in playloop.c to use this, because it seems
slightly more correct. But mostly this is preparation for the following
commit.
Until now, they could be aborted only by ending playback, and calling
mpv_abort_async_command didn't do anything.
This requires furthering the mess of how playback abort is done. The main
reason why mp_cancel exists at all is to prevent a "frozen" demuxer
(blocked on network I/O or whatever) from freezing the core. The core
should always get its way. Previously, there was a single mp_cancel
handle, that could be signaled, and all demuxers would unfreeze. With
external files, we might want to abort loading of a certain external
file, which automatically means they need a separate mp_cancel. So give
every demuxer its own mp_cancel, and "slave" it to whatever parent
mp_cancel handles aborting.
Since the mpv demuxer API conflates creating the demuxer and reading the
file headers, mp_cancel strictly needs to be created before the demuxer
is created (or we couldn't abort loading). Although we give every
demuxer its own mp_cancel (as "enforced" by cancel_and_free_demuxer),
it's still rather messy to create/destroy it along with the demuxer.
This affects async commands started by client API, commands with async
capability run in a sync way by client API (think mpv_command_node()
with "subprocess"), and detached async work.
Since scripts might want to do some cleanup work (that might involve
launching processes, don't ask), we don't unconditionally kill
everything on exit, but apply an arbitrary timeout of 2 seconds until
async commands are aborted.
Many asynchronous commands are potentially long running operations, such
as loading something from network or running a foreign process.
Obviously it shouldn't just be possible for them to freeze the player if
they don't terminate as expected. Also, there will be situations where
you want to explicitly stop some of those operations. So add
an infrastructure for this.
Commands have to support this explicitly. The next commit uses this to
actually add support to a command.
If a struct as large as MPContext contains a field named "lock", it
creates the impression that it is the primary lock for MPContext. This
is wrong, the lock just protects a single field.
Pretty trivial, since commands can be async now, and the common code
even provides convenience like running commands on a worker thread.
The only ugly thing is that mp_add_external_file() needs an extra flag
for locking. This is because there's still some code which calls this
synchronously from the main thread, and unlocking the core makes no
sense there.