Apparently, this is always _really_ monotonic, despite what the Linux
manpages say. So this should be much better than gettimeofday(). (At
times there were kernel bugs which broke the monotonic property.)
From the perspective of the player, time can still be discontinuous
(you could just stop the process with ^Z), but at least it's guaranteed
to be monotonic without further hacks required.
Also note that clock_gettime() returns the time in nanoseconds. We want
microseconds only, because that's the unit we chose internally. Another
problem is that nanoseconds can wrap pretty quickly (less than 300 years
in 63 bits), so it's just better to use microseconds. The division won't
make the code that much slower (compilers can avoid a real division).
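As an illustration, the conversion boils down to something like this (a
minimal sketch, not the actual code):

    #include <stdint.h>
    #include <time.h>

    // Read CLOCK_MONOTONIC and return the time in microseconds.
    static uint64_t mono_time_us(void)
    {
        struct timespec ts;
        if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
            return 0; // real code would fall back to gettimeofday()
        return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
    }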
Note: this expects that the system provides clock_gettime() as well as
CLOCK_MONOTONIC. Both are optional according to POSIX. The only system
I know of which doesn't have these, OSX, has separate timer code anyway,
but I still don't know whether more obscure (yet supported) platforms
have a problem with this, so I'm playing it safe. But this still expects
that CLOCK_MONOTONIC always works at runtime if it's defined.
This is probably "safer". Without it, we will play 1 sample, because the
logic was written in a way to decode 1 sample if audio is paused. 1
sample usually will initialize the audio PTS, but not play any real
audio. Also see previous commit.
In ancient times, this actually used 1 byte (instead of 1 sample), so
clearly no sample was written, unless the audio was 8-bit mono.
Remove the ao_buffer_playable_samples field. This contained the number
of samples that fill_audio_out_buffers() wanted to write to the AO (i.e.
this data was supposed to be played at some point), but ao_play()
rejected it due to partial fill.
This could happen with many AOs, notably those which align all written
data to an internal period size (often called "outburst" in the AO
code), and the accepted number of samples is rounded down to period
boundaries. The left-over samples at the end were still kept in
mpctx->ao_buffer, and had to be played later.
The reason ao_buffer_playable_samples had to exist was to make sure that
at EOF, the correct number of left-over samples was played (and not
possibly other data in the buffer that had to be sliced off due to
endpts in fill_audio_out_buffers()). (You'd think you could just slice
the entire buffer, but I suspect this wasn't done because the end time
could actually change due to A/V sync changes. Maybe that was the reason
it's so complicated.)
Some commits ago, ao.c gained internal buffering, and ao_play() will
never return partial writes - as long as you don't try to write more
samples than ao_get_space() reports. This is always the case. The only
exception is filling the audio buffers while paused. In this case, we
decode and play only 1 sample in order to initialize decoding (e.g. on
seeking). Actually playing this 1 sample is in fact a bug, but even if
the AO doesn't have period size alignment, you won't notice it. In
summary, this means we can safely remove the code.
Until now, this was always conflated with uninit. This was ugly, and
also many AOs emulated this manually (or just ignored it). Make draining
an explicit operation, so AOs which support it can provide it, and for
all others generic code will emulate it.
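For AOs without a native drain, the generic emulation can boil down to
waiting out whatever the driver still reports as buffered (a rough sketch;
the helper names are assumed, not necessarily the actual API):

    // Generic drain fallback: sleep until the reported delay has elapsed.
    static void drain_emulated(struct ao *ao)
    {
        double delay_s = ao_get_delay(ao);         // time still buffered
        if (delay_s > 0)
            mp_sleep_us((int64_t)(delay_s * 1e6)); // assumed sleep helper
    }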
For ao_wasapi, we keep it simple and basically disable the internal
draining implementation (maybe it should be restored later).
Tested on Linux only.
Same deal as with the previous commit. We don't lose any functionality,
except for waiting "properly" on audio end, instead of waiting using the
delay estimate.
This removes the ringbuffer management from the code, and uses the
generic code added with the previous commit. The result should be
pretty much the same.
The "estimate" sub-option goes away. This estimation is now always
active. The new code for delay estimation is slightly different, and
follows the claim of the jack framework that callbacks are timed
exactly.
This has 2 goals:
- Ensure that AOs have always enough data, even if the device buffers
are very small.
- Reduce complexity in some AOs, which do their own buffering.
One disadvantage is that performance is slightly reduced due to more
copying.
Implementation-wise, we don't change ao.c much, and instead "redirect"
the driver's callback to an API wrapper in push.c.
Additionally, we add code for dealing with AOs that have a pull API.
These AOs usually do their own buffering (jack, coreaudio, portaudio),
and adding a thread is basically a waste. The code in pull.c manages
a ringbuffer, and allows callback-based AOs to read data directly.
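The callback side can then look roughly like this (a sketch; the ring
buffer helpers and struct fields are assumed):

    // Realtime callback of a pull-based AO: read directly from the ring
    // buffer filled by the playback core, pad with silence on underrun.
    static void device_callback(struct ao *ao, void *buffer, int bytes)
    {
        int got = mp_ring_read(ao->ring, buffer, bytes);
        if (got < bytes)
            memset((char *)buffer + got, 0, bytes - got);
    }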
Since the AO will run in a thread, and there's lots of shared state with
encoding, we have to add locking.
One case this doesn't handle correctly are the encode_lavc_available()
calls in ao_lavc.c and vo_lavc.c. They don't do much (they mostly just
protect against using --ao=lavc with normal playback), and changing
it would be a bit messy. So just leave them.
We want to move the AO to its own thread. There's no technical reason
for making the ao struct opaque to do this. But it helps us sleep at
night, because we can control access to shared state better.
This field will be moved out of the ao struct. The encoding code was
basically using an invalid way of accessing this field.
Since the AO will be moved into its own thread too and will do its own
buffering, the AO and the playback core might not even agree which
sample a PTS timestamp belongs to. Add some extrapolation code to handle
this case.
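The basic idea can be stated in one line (a sketch, with hypothetical
names): the PTS of the sample currently audible is the PTS of the last
sample passed to the AO, minus the amount of audio still buffered there.

    // Hypothetical sketch: extrapolate the PTS of the sample currently
    // being played from the PTS of the last sample written to the AO.
    static double playing_audio_pts(double written_pts, double ao_delay_s)
    {
        return written_pts - ao_delay_s;
    }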
Use QueryPerformanceCounter to improve the accuracy of
IAudioClock::GetPosition.
While this is mainly for "realtime correctness" (usually the delay is a
single sample or less), there are cases where IAudioClock::GetPosition
takes a long time to return from its call (though the documentation doesn't
define what a "long time" is), so correcting its value might be important in
case the documented possible delay happens.
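The correction itself is small (rough sketch; `clock` and the device
frequency `freq` from IAudioClock::GetFrequency are assumed to be set up
already):

    UINT64 pos, qpc_pos; // qpc_pos: QPC time of the sample, in 100ns units
    IAudioClock_GetPosition(clock, &pos, &qpc_pos);

    LARGE_INTEGER qpc_count, qpc_freq;
    QueryPerformanceCounter(&qpc_count);
    QueryPerformanceFrequency(&qpc_freq);
    UINT64 qpc_now = (UINT64)((double)qpc_count.QuadPart * 1e7
                              / qpc_freq.QuadPart);

    // Advance the reported position by the time elapsed since GetPosition
    // actually sampled it.
    double position_s = (double)pos / freq + (double)(qpc_now - qpc_pos) / 1e7;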
The lack of device latency made get_delay report latencies shorter than
they should be; on systems with fast enough drivers the delay is not
perceptible, but sufficiently large hidden delays would cause desyncs.
I'm not yet completely sure whether this is 100% accurate, there are
some issues involved when repeatedly pausing+unpausing (the delay might
jump around by several dozen milliseconds), but seeking seems to be
working correctly now.
The player didn't quit when the end of a file was reached. The reason
for this is that jack reported a constant audio delay even when all
audio was done playing. Whether that was recognized as EOF by the player
depended on whether the exact value was higher or lower than the player's
threshold for what it considers no more audio.
get_delay() should return the amount of time it takes until the last sample
written to the audio buffer reaches the speaker. Therefore, we have to
track the estimated time when the last sample is done, and subtract it
from the calculated latency. Basically, the latency is the only amount
of time left in the delay, and it should go towards 0 as audio reaches
the speakers.
I'm not sure if this is correct, but at least it solves the problem. One
suspicious thing is that we use system time to estimate the end of the
audio time. Maybe using jack_frame_time() would be more correct. But
apart from this, there doesn't seem to be a better way to handle this.
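What the fix amounts to, roughly (field and helper names assumed): the
process callback records the estimated time at which the queued audio will
have finished playing, and get_delay() reports only the time left until
then, so the value naturally goes towards 0.

    static float get_delay(struct ao *ao)
    {
        struct priv *p = ao->priv;
        double time_left = p->estimated_end_time - mp_time_sec();
        return time_left > 0 ? time_left : 0;
    }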
For example, consider the case when audio initialization fails. Then the
audio track is deselected. Before this commit, this would have been
equivalent to the user disabling audio. This is bad when multiple files
are played at once (the next file would have audio disabled, even if it
works), or if playback resume is used (if e.g. audio output failed to
initialize, then audio would be disabled when resuming, even if the
system's audio driver was fixed).
Setting string options to strings over the m_option fallback (i.e.
M_PROPERTY_SET_STRING is called if the option type is CONF_TYPE_STRING)
failed. This was because m_option_parse() returns 0. 0 still means
success, but the property code tried to be clever, and considered 0 not
a success in order to disallow setting flags to an empty string (which
in turn is allowed, because the command line allows flag options without
parameters).
Fix this by removing the overly clever code.
This could happen when e.g. using the "set" command on options/title (a
string option), and also was a problem for the client API.
Closes #610.
The OSC used significant CPU time while the player was paused. It turned
out that the "tick" event sent during pause is the problem. The OSC
accesses the player core when receiving a tick event, which in turn will
cause the core to send another tick event, leading to infinite feedback.
Fix this by sending an idle tick only every 500ms. This is not very
proper, but the idea behind the tick event isn't very clean to begin
with (and the OSC should use timers instead).
The playloop usually waits in select(), using a timeout needed for
refilling audio and video buffers. This means the client API needs
a separate mechanism to interrupt the select() call. This mechanism
exists, but I forgot to use it. This commit fixes it.
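The mechanism in question is essentially the classic self-pipe trick (a
sketch with assumed names): the playloop's select() also watches the read
end of a wakeup pipe, and a client API call writes a byte to it to
interrupt the wait early.

    static void wakeup_playloop(struct MPContext *mpctx)
    {
        (void)write(mpctx->wakeup_pipe[1], &(char){0}, 1);
    }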
If it works, this will make the client API react faster, especially
in audio-only mode. If video is enabled, the reaction time is capped
to 50ms (or somewhat faster if the framerate is >20 fps), because
the playloop stops reacting to anything in order to render and time
the next video frame. (This will be fixed later by moving the VO
to its own thread.)
--ass-style-override=force now attempts to override the 'Default' style.
May or may not work. In some situations it will work, but it can also
mess up seemingly unrelated things like signs typeset with ASS.
luaL_loadstring(), which was used until now, uses the start of the Lua
code itself as chunk name. Since the chunk name shows up even with
runtime errors triggered by e.g. Lua code loaded from user scripts, this
looks a bit ugly. Switch to luaL_loadbuffer(), which is almost the same
as luaL_loadstring(), but allows setting a chunk name.
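The difference is only in the chunk name argument (the name below is just
an example):

    // Before: the chunk name defaults to the script source itself.
    luaL_loadstring(L, script);
    // After: same semantics, but with a readable name for error messages.
    luaL_loadbuffer(L, script, strlen(script), "@builtin_script.lua");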
Will be helpful to track down strange wait times and similar issues, as
well as when you are developing something timing-related. (Then you may
print timestamps in your debug output, and the --msgtime timestamps will
help give context.)