Use QueryPerformanceCounter to improve the accuracy of
IAudioClock::GetPosition.
While this is mainly for "realtime correctness" (usually the discrepancy
is a single sample or less), there are cases where IAudioClock::GetPosition
takes a long time to return (though the documentation doesn't define what
a "long time" is), so correcting its value may matter when that documented
delay actually occurs.
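A rough sketch of the idea (assuming a valid IAudioClock pointer "clock"
and a device frequency "freq" obtained from IAudioClock::GetFrequency;
error handling omitted):

    #define COBJMACROS
    #include <windows.h>
    #include <audioclient.h>

    // GetPosition also reports the QPC time (in 100ns units) at which
    // the device position was sampled; use it to age the position forward.
    UINT64 pos, qpc_pos;
    LARGE_INTEGER now, qpc_freq;
    QueryPerformanceFrequency(&qpc_freq);
    IAudioClock_GetPosition(clock, &pos, &qpc_pos);
    QueryPerformanceCounter(&now);
    UINT64 now_100ns = (UINT64)((double)now.QuadPart * 1e7
                                / qpc_freq.QuadPart);
    double elapsed = (now_100ns - qpc_pos) / 1e7;     // seconds
    double corrected = (double)pos + elapsed * freq;  // device units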
The lack of device latency made get_delay report latencies shorter than
they should be; on systems with fast enough drivers the missing delay is
not perceptible, but an unreported delay that is high enough would cause
desyncs.
I'm not yet completely sure whether this is 100% accurate; there are
some issues when repeatedly pausing and unpausing (the delay might jump
around by several dozen milliseconds), but seeking seems to be working
correctly now.
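Whether this commit gets the device latency from
IAudioClient::GetStreamLatency is an assumption, but it is the obvious
WASAPI source for it (assuming a valid IAudioClient pointer "client"):

    // GetStreamLatency reports the stream latency as a REFERENCE_TIME
    // in 100ns units; it gets added on top of the buffer-based delay.
    REFERENCE_TIME latency = 0;
    IAudioClient_GetStreamLatency(client, &latency);
    double device_delay = latency / 1e7;  // seconds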
The player didn't quit when the end of a file was reached. The reason
for this is that jack reported a constant audio delay even when all
audio was done playing. Whether that was recognized as EOF by the player
depended on whether the exact value was higher or lower than the player's
threshold for what it considers no more audio.
get_delay() should return the amount of time it takes until the last sample
written to the audio buffer reaches the speakers. Therefore, we have to
track the estimated time at which the last sample will be done, and subtract
it from the calculated latency. Basically, the latency is the only amount
of time left in the delay, and it should go towards 0 as audio reaches
the speakers.
I'm not sure if this is correct, but at least it solves the problem. One
suspicious thing is that we use system time to estimate the end of the
audio time. Maybe using jack_frame_time() would be more correct. But
apart from this, there doesn't seem to be a better way to handle this.
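A minimal sketch of the bookkeeping, with mp_time_sec() standing in for
whatever monotonic system clock is actually used (names illustrative):

    double mp_time_sec(void);  // assumed helper: monotonic clock, seconds

    static double audio_end_time;  // estimated time the last sample plays

    // Called whenever samples are written to the buffer.
    static void note_write(int nsamples, int samplerate, double latency)
    {
        audio_end_time = mp_time_sec()
                       + latency + nsamples / (double)samplerate;
    }

    static float get_delay(void)
    {
        // Shrinks towards 0 as the buffered audio reaches the speakers.
        double delay = audio_end_time - mp_time_sec();
        return delay > 0 ? delay : 0;
    }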
For example, consider the case when audio initialization fails. Then the
audio track is deselected. Before this commit, this would have been
equivalent to the user disabling audio. This is bad when multiple files
are played at once (the next file would have audio disabled, even if it
works), or if playback resume is used (if e.g. audio output failed to
initialize, then audio would be disabled when resuming, even if the
system's audio driver was fixed).
Setting string options to strings via the m_option fallback (i.e.
M_PROPERTY_SET_STRING is called if the option type is CONF_TYPE_STRING)
failed. This was because m_option_parse() returns 0. 0 still means
success, but the property code tried to be clever and did not count 0
as a success, in order to disallow setting flags to an empty string (which
in turn is allowed, because the command line allows flag options without
parameters).
Fix this by removing the overly clever code.
This could happen when e.g. using the "set" command on options/title (a
string option), and also was a problem for the client API.
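Schematically (the argument list of m_option_parse() is simplified here,
and the surrounding code is illustrative), the fix amounts to:

    // m_option_parse() returns a negative value on error, and 0 or a
    // positive value on success.
    int r = m_option_parse(opt, name, value, &val);
    // Before: "if (r <= 0) return M_PROPERTY_ERROR;" -- this wrongly
    // rejected the perfectly valid 0 return.
    if (r < 0)
        return M_PROPERTY_ERROR;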
Closes #610.
The OSC used significant CPU time while the player was paused. It turned
out that the "tick" event sent during pause is the problem. The OSC
accesses the player core when receiving a tick event, which in turn will
cause the core to send another tick event, leading to infinite feedback.
Fix this by sending an idle tick only every 500ms. This is not very
proper, but the idea behind the tick event isn't very clean to begin
with (and the OSC should use timers instead).
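A sketch of the rate limit (mp_notify() and MPV_EVENT_TICK reflect mpv's
internals, but treat the surrounding details as illustrative):

    // Assumed helpers: mp_time_sec() returns a monotonic time in seconds;
    // mp_notify() broadcasts an event to all clients.

    static double last_idle_tick;  // time the last idle tick was sent

    static void maybe_send_idle_tick(struct MPContext *mpctx)
    {
        double now = mp_time_sec();
        if (now - last_idle_tick >= 0.5) {  // at most every 500ms
            last_idle_tick = now;
            mp_notify(mpctx, MPV_EVENT_TICK, NULL);
        }
    }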
The playloop usually waits in select(), using a timeout needed for
refilling audio and video buffers. This means the client API needs
a separate mechanism to interrupt the select() call. This mechanism
exists, but I forgot to use it. This commit fixes it.
If it works, this will make the client API react faster, especially
in audio-only mode. If video is enabled, the reaction time is capped
to 50ms (or somewhat faster if the framerate is >20 fps), because
the playloop stops reacting to anything in order to render and time
the next video frame. (This will be fixed later by moving the VO
to its own thread.)
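The mechanism is essentially a self-pipe; a generic sketch (not mpv's
exact code):

    #include <unistd.h>
    #include <sys/select.h>

    static int wakeup_pipe[2];  // from pipe(), both ends non-blocking

    // Called by the client API from any thread.
    static void wakeup_playloop(void)
    {
        (void)write(wakeup_pipe[1], &(char){0}, 1);
    }

    // The playloop waits on the pipe in addition to its refill timeout,
    // so a wakeup interrupts the select() immediately.
    static void playloop_wait(struct timeval *timeout)
    {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(wakeup_pipe[0], &fds);
        if (select(wakeup_pipe[0] + 1, &fds, NULL, NULL, timeout) > 0) {
            char drain[64];
            (void)read(wakeup_pipe[0], drain, sizeof(drain));  // reset
        }
    }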
--ass-style-override=force now attempts to override the 'Default' style.
May or may not work: in some situations it will, but it can also mess up
seemingly unrelated things, like signs typeset with ASS.
luaL_loadstring(), which was used until now, uses the start of the Lua
code itself as chunk name. Since the chunk name shows up even with
runtime errors triggered by e.g. Lua code loaded from user scripts, this
looks a bit ugly. Switch to luaL_loadbuffer(), which is almost the same
as luaL_loadstring(), but allows setting a chunk name.
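The change boils down to the following (assuming a lua_State "L" and the
script text "code"; the chunk name shown is just an example):

    #include <string.h>
    #include <lua.h>
    #include <lauxlib.h>

    // Old: luaL_loadstring(L, code); -- chunk name is the code itself.
    // New: same semantics, but with an explicit chunk name that shows
    // up in error messages and tracebacks.
    luaL_loadbuffer(L, code, strlen(code), "@defaults.lua");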
Will be helpful to track down strange wait times and similar issues, as
well as when developing something timing-related. (Then you can print
timestamps in your debug output, and the --msgtime timestamps will help
give context.)
Sending an asynchronous request and then calling mpv_destroy() would
crash the player when trying to send the reply to the removed client.
Fix this by waiting until all remaining replies have been sent.
The step argument for "add volume <step>" was ignored until now. Fix it.
There is one problem: by default, "add volume" should use the value set
with --volstep. This value is 3 by default. Since the default value for
the step argument is always 1 (and we don't really want to make the
generic code more complicated by introducing custom step sizes), we
simply multiply the step argument by --volstep to keep it compatible.
The --volstep option should probably be just removed in the future.
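Schematically (names illustrative, not the actual command code):

    // The generic machinery always defaults the step argument to 1, so
    // scale it with --volstep to preserve the old default step of 3.
    float step = cmd->args[1].v.f;  // from "add volume <step>"
    mixer_addvolume(mixer, step * opts->volstep);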
The value range is 0-100, so fractional values don't make much sense.
But the underlying data type is probably float to avoid getting "stuck"
when doing small volume increments. So sidestep this and just pretend
it's an integer on display.
The previous version of the gamma suboption was pretty useless. It could
be used to disable delayed gamma enabling, which is a mechanism to avoid
having to adjust gamma in the shader by default.
Repurpose the suboption and allow setting an exact gamma value with it.
You can already override gamma with the --gamma option as well as the
gamma input property, but these use a weird curve to create the
impression of a linear perceived brightness change when changing the
value. This suboption now allows setting an exact gamma value.
This used to be absolute colorimetric, but relative colorimetric is a
saner default due to the arguments presented in issue #595.
A short summary: In general it doesn't affect much because our eyes
adapt to the white point either way, but if running in windowed mode it
would make the whites seem inconsistent/tinted. For fullscreen
projection it's also undesirable since it reduces the dynamic range
without much benefit (again, since our eyes adapt either way) and it
also breaks calibration against ambient lighting.
This shouldn't change much, since most profile types that aren't 3DLUTs
aren't capable of either of those transforms, and most displays are
calibrated against D65 (same as BT.709 source) either way.
Lua doesn't distinguish between arrays and maps on the language level;
there are just tables. Use metatables to mark these tables with their
actual types. In particular, it allows distinguishing empty arrays from
empty tables.
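On the C side this can look roughly like the following (the metatable
field name is illustrative):

    #include <lua.h>

    // With the table to be marked on top of the stack, attach a
    // metatable that records whether it is a "MAP" or an "ARRAY".
    static void mark_table(lua_State *L, const char *type)
    {
        lua_newtable(L);              // the metatable
        lua_pushstring(L, type);
        lua_setfield(L, -2, "type");  // metatable.type = "MAP"/"ARRAY"
        lua_setmetatable(L, -2);      // pops the metatable
    }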
When passing a very large timeout to mpthread_cond_timed_wait(), the
calculations could overflow, setting tv_sec to a negative value, and
making the pthread_cond_timedwait() call return immediately. This
accidentally made Lua support poll and burn CPU for no reason.
The existing overflow check was ineffective on 32 bit systems. tv_sec is
usually a long, so adding INT_MAX to it will usually not overflow on 64
bit systems, but on 32 bit systems it's guaranteed to overflow. Simply
fix by clamping against a relatively high value. This will work until 1
week before the UNIX time wraps around in 32 bits.
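A sketch of the clamp (the margin matches the one week described above;
the surrounding code is illustrative, and the sub-second part of the
timeout is omitted for brevity):

    #include <time.h>
    #include <limits.h>

    // Compute the absolute deadline for pthread_cond_timedwait(),
    // keeping ts.tv_sec at least one week below the 32-bit time_t
    // limit so the addition cannot overflow even on 32 bit systems.
    static struct timespec abs_deadline(double timeout_sec)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        if (timeout_sec > (INT_MAX - 604800) - ts.tv_sec)
            timeout_sec = (INT_MAX - 604800) - ts.tv_sec;
        ts.tv_sec += (time_t)timeout_sec;
        return ts;
    }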