The wakeup at the end of VO frame rendering seems redundant, because
after rendering almost no state changes. The player core can queue a new
frame once frame rendering begins, and there's a separate wakeup for
this. The only thing that actually changes is in->rendering. The only
thing that seems to depend on it and can trigger a wakeup is the
vo_still_displaying() function. Change it so that it needs an explicit
call to a new API function, so we can avoid wakeups in the common case.
The vo_still_displaying() code is mostly just moved around due to
locking and for avoiding forward declarations.
Also a somewhat risky change (tasty new bugs).
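Roughly, the idea can be pictured like this (a sketch with made-up names; the new API function is not named here, so treat everything below as hypothetical):

    #include <stdbool.h>

    // Hypothetical model of the change, not mpv's actual code.
    struct vo_internal {
        bool rendering;        // the only state that changes at end of rendering
        bool wakeup_on_done;   // set through the new explicit API call
    };

    static void wakeup_core(void);   // stand-in: wake up the player core

    // VO side: end of frame rendering.
    static void render_done(struct vo_internal *in)
    {
        in->rendering = false;
        if (in->wakeup_on_done) {    // common case: flag not set, no wakeup
            in->wakeup_on_done = false;
            wakeup_core();
        }
    }

    // Player side: anyone who waits on vo_still_displaying() must now arm this.
    static void request_wakeup_on_done(struct vo_internal *in)
    {
        in->wakeup_on_done = true;
    }
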
This should be unnecessary, since the VO itself performs wakeups once a
new frame can be queued. The only situation I can think of where this
might be required is EOF situations (which are always strange).
If I'm wrong, there'll be fun new bugs, probably causing frame drops or
temporary stalls.
When the player core requests new frames from the filter, this is called
external/recursive filtering, which acts slightly differently from when
filters request new data internally. Mainly this is so the external user
doesn't have to call mp_filter_graph_run() just to get a frame. This
causes a number of complications, and the short version is that until
now, mp_filter_graph_run() has unnecessarily returned true in the
current common case, which made the playloop run too often for no
reason.
The problem is that when a mp_pin is read externally, updating the same
mp_pin during recursive filtering flagged external_pending when the
result was written, which made mp_filter_graph_run() return true, which
made the playloop call mp_filter_graph_run() again. This is redundant
because the caller is obviously checking the new state of the mp_pin
immediately.
The only situation in which external_pending really must be set is if
_another_ pin is changed. In theory, we could also unset it if the set
of "external" pins that are not in a signaled state becomes empty, but
we don't track that in a convenient way.
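Roughly, the rule looks like this (an illustrative sketch; none of these names are the actual filter.c internals):

    #include <stdbool.h>

    struct pin;
    struct graph {
        bool recursive_filtering;          // inside a core-initiated run
        struct pin *pin_read_externally;   // the pin the caller is reading
        bool external_pending;             // "caller should run the graph again"
    };

    static void pin_got_data(struct graph *g, struct pin *p)
    {
        if (g->recursive_filtering && p == g->pin_read_externally) {
            // The caller inspects this pin right after the run returns anyway;
            // signaling it would only trigger a useless extra playloop cycle.
            return;
        }
        g->external_pending = true;        // some *other* external pin changed
    }
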
This commit removes the redundant signaling, and avoids running the
playloop an additional time for each video and audio frame (as it
actually was planned from the beginning, but duh).
If a filter receives an asynchronous wakeup during filtering, then
process newly pending filters resulting from that as well, before
returning to the user. Might possibly skip some redundant playloop
cycles.
There is an explicit comment in the code about how this shouldn't be
done, but I think it makes no sense. Filters have no business trying to
interrupt the mainloop, and mp_filter_graph_interrupt() provides a
proper mechanism to do this (though intended to be used by the filter
user, not filters).
In this case, init_buffers() was not called, and the unrelated cache
sample buffers were not initialized. It appears they are indeed
completely unrelated, so move their initialization away. Not sure what
exactly the purpose of calling init_buffers() is, maybe clearing old
data when displaying stats again. The new place for initializing the
cache sample buffers should achieve the same anyway.
Fixes: #7597
This should make it behave roughly like when switching from one file to
the next (clearing audio buffers, keeping AO, but closing AO if the
audio format seems to have changed and gapless mode is "weak").
Not necessarily useful, but harmless and may help with #7579 (untested).
Replace use of .min==1 with a proper flag. This is a good idea, because
it has nothing to do with numeric limits (also see commit 9d32d62b61
for how this can go wrong).
With this, m_option.min/max are strictly used for numeric limits.
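Illustrative sketch of the difference (the struct layout is simplified and the flag name is made up):

    #include <stdbool.h>

    // Before: ".min == 1" doubled as an unrelated on/off switch.
    // After: numeric limits stay numeric, and behavior is a flag bit.
    struct m_option_ex {
        double min, max;                      // strictly numeric limits now
        unsigned flags;
    };
    #define M_OPT_EXAMPLE_BEHAVIOR (1u << 0)  // hypothetical flag replacing ".min == 1"

    static bool special_behavior(const struct m_option_ex *opt)
    {
        return opt->flags & M_OPT_EXAMPLE_BEHAVIOR;   // instead of: opt->min == 1
    }
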
This was optional, with the intention that normally such options require
a valid format. But there is no reason for this (at least not anymore),
and it's actually more logical to accept "no" in all situations in which
this option type is used. This also gets rid of the weird special use of
the min field.
These used ".min = MP_NOPTS_VALUE" to indicate certain exceptions. This
broke with the recent change to how min/max are handled, which made
setting min or max mean that a value range is used, so the unset max
acted as an upper limit of 0.
Fix this by not using a magic value in .min; replace it with a proper
flag.
Fixes: #7596
This is the proper fix for 1e7802. Turns out the solution is dead
simple: we can still set the allocator with lua_getallocf /
lua_setallocf.
This commit makes memory accounting work on luajit as well.
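For reference, counting allocations through these two functions looks roughly like this (a sketch, not mpv's exact code):

    #include <stddef.h>
    #include <stdint.h>
    #include <lua.h>

    struct alloc_stats {
        lua_Alloc orig_alloc;
        void *orig_ud;
        int64_t bytes;    // rough count of bytes currently allocated by Lua
    };

    static void *counting_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        struct alloc_stats *st = ud;
        if (ptr)                   // osize is a real size only if ptr != NULL
            st->bytes -= osize;
        st->bytes += nsize;        // nsize == 0 means "free"
        // (If the underlying allocation fails, the count drifts a bit; good
        // enough for statistics.)
        return st->orig_alloc(st->orig_ud, ptr, osize, nsize);
    }

    static void install_counting_alloc(lua_State *L, struct alloc_stats *st)
    {
        st->bytes = 0;
        st->orig_alloc = lua_getallocf(L, &st->orig_ud);
        lua_setallocf(L, counting_alloc, st);
    }
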
This is a stopgap measure. In theory we could maybe poll the memory
usage on luajit, but for now, simply reverting this part of fd3caa26
makes Lua work again. (And we can still collect CPU usage metrics.)
Proper solution pending (tm)
While --input-file was removed for justified reasons, wanting to pass
down socket FDs this way is legitimate, useful, and easy to implement.
One odd thing is that
Fixes: #7592
Add an infrastructure for collecting performance-related data, use it in
some places. Add rendering of them to stats.lua.
There were two main goals: minimal impact on the normal code and normal
playback. So all these stats_* function calls either happen only during
initialization, or return immediately if no stats collection is going
on. That's why stats entries are added lazily, etc. (a first
iteration made each stats entry an API thing, instead of just a single
stats_ctx, but I thought that was getting too intrusive in the "normal"
code, even if everything gets worse inside of stats.c).
You could get most of this information from various profilers (including
the extremely primitive --dump-stats thing in mpv), but this makes it
easier to see the most important information at once (at least in
theory), partially because we know best about the context of various
things.
Not very happy with this. It's all pretty primitive and dumb. At this
point I just wanted to get it over with, without necessarily having to
revisit it later, but still getting my stupid statistics.
Somehow the code feels terrible. There are a lot of meh decisions in
there that could be better or worse (but mostly could be better), and it
just sucks but it's also trivial and uninteresting and does the job. I
guess I hate programming. It's so tedious and the result is always shit.
Anyway, enjoy.
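The intended calling pattern looks roughly like this (a sketch; the exact stats_* names and signatures here are assumptions, and do_render() is a placeholder):

    static struct stats_ctx *stats;

    static void init(void)
    {
        // Happens once, during initialization.
        stats = stats_ctx_create("vo");        // assumed name/signature
    }

    static void render_frame(void)
    {
        // These return immediately if no stats collection is going on,
        // so normal playback is unaffected.
        stats_time_start(stats, "render");
        do_render();                           // placeholder
        stats_time_end(stats, "render");
    }
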
I think that makes more sense.
And also remove the graph from the total cache usage, since that wasn't
very interesting. So there's still a total of 2 graphs.
That's where it comes from after all. The other property does not have
much of a reason to exist anymore, but there's no real reason to remove
it either.
Previously, EGL as provided by pkg-config was checked for independently
in several places. The effect this had is that --disable-egl would not
actually disable EGL from the build, as this only affected the "egl" option
relied upon by egl-x11. wayland-gl and egl-drm did their own EGL checks.
By making wayland-gl and drm-egl depend on egl instead, we fix this
behaviour and can simplify egl-helpers a bit, as we can now simply
check whether egl or one of the other features providing some non-pc egl
is enabled, instead of checking every single thing that might be pulling
in egl.
Future work could make the "egl" option just be a catchall for any
EGL implementation, so that brcmegl and angle and Android can piggyback
on the egl option as well.
Ancient Linux audio output. Apparently it survived until now, because
some BSDs (but not all) still made use of it. But these should work with
ao_sdl or ao_openal too (that's why these AOs exist after all). ao_oss
itself has the problem that it's virtually unmaintainable from my point
of view due to all the subtle (or non-subtle) differences. Look at the
ifdef mess and the multiple code paths (that shouldn't exist) in the
removed source code.
I wonder what this even is. I've never heard of anyone using it, and
can't find a corresponding library that actually builds with it. Good
enough to remove.
It was always marked as "experimental", and had inherent problems that
were never fixed. It was disabled by default, and I don't think anyone
is using it.
May or may not help when dealing with playlist loading in scripts. It's
supposed to help with the mean fact that loading a recursive playlist
will essentially edit the playlist behind the API user's back.
Scripts such as the OSC can be loaded and unloaded at runtime by
toggling the option that enables them. (It even works, although normally
it's only used to control initial loading.)
Unloading was racy because it used the client name; fix this.
The load-script change is an accidental feature. And probably useless.
I mostly intend this for internal purposes. Probably pretty useless for
external API users, but on the other hand trivial to expose. While it
makes a lot of sense internally, I'll probably regret exposing it.
This time is set on every seek (specifically, when initiating it). A new seek
longer than 2 seconds after this is counted as separate seek for the
sake of the revert seek command. If a seek takes a bit longer, this
removes time from these 2 seconds. This commit resets it at the end of
the seek. (Doing anything special for seeks that take longer than 2
seconds is out of scope of this commit.)
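In sketch form (hypothetical names; start_new_revert_seek_group() is a stand-in), the timing rule is:

    struct seek_state { double marker_time; };
    #define SEEK_GROUP_WINDOW 2.0                // seconds

    static void start_new_revert_seek_group(struct seek_state *s);  // stand-in

    static void on_seek_initiated(struct seek_state *s, double now)
    {
        // A seek more than 2 seconds after the recorded time counts as a
        // separate seek for the revert-seek command.
        if (now - s->marker_time > SEEK_GROUP_WINDOW)
            start_new_revert_seek_group(s);
        s->marker_time = now;
    }

    static void on_seek_finished(struct seek_state *s, double now)
    {
        // New with this commit: also reset the time when the seek ends, so a
        // slow seek does not eat into the 2 second window.
        s->marker_time = now;
    }
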
Until now, it used only coordinates clipped to the screen for this,
which meant no negative margins were ever reported to libass. This broke
proper rendering of explicitly positioned ASS events (libass simply
could not know the real video size in this case.)
Fix this by reporting margins even if they're negative. This makes it
apparently work correctly with vo_gpu at least.
Note that I'm not really sure if anything in the rendering chain
required non-negative margins. If so, and that code implicitly assumed
it, I suppose crashes and such are possible.
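For context, the margins go to libass through ass_set_margins(); the fix amounts to no longer flooring them at 0 (a sketch, with placeholder geometry variables):

    #include <ass/ass.h>

    static void report_margins(ASS_Renderer *r,
                               int vid_x0, int vid_y0, int vid_x1, int vid_y1,
                               int screen_w, int screen_h)
    {
        // Margins between the video rectangle and the screen edges. These can
        // now be negative if the video extends past the screen; previously
        // only screen-clipped coordinates were used, so they never were.
        int t = vid_y0;
        int b = screen_h - vid_y1;
        int l = vid_x0;
        int rt = screen_w - vid_x1;
        ass_set_margins(r, t, b, l, rt);
    }
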
The "rule" is that a fallback warning message should be shown only shown
if software decoding was used before, or in other words when either
hwdec was enabled before, but the stream suddenly falls back, or it was
attempted to enable it at runtime, and it didn't work.
The message wasn't printed the first time in the latter case, because
hwdec_notified was not set in forced software decoding mode. Fix it with
this commit. Fortunately, the logic becomes simpler.
There's no more need to call mp_lua_PITA to get the ctx, and the
autofree prototype is now enforced at the C level at compile time.
Also remove the redundant talloc_free_children calls in these functions,
since everything is now freed right after the function completes.
Also, rename auto_free_node to steal_node_allocations to be more
explicit and to avoid confusion with the autofree terminology.
Advantages of this approach:
- All the resources are released right after the function ends
regardless of whether it threw an error or not, without having to wait
for GC.
- Simpler code.
- Simpler Lua setup, which most likely does less memory allocation and
as a result should be quicker, though this wasn't measured.
This commit adds the autofree wrapper and uses it where mp_lua_PITA
was used. It's not yet enforced at the C level, there are still
redundant talloc_free_children leftovers, and there are a few more
places which could also use autofree. The next commits will address
those.
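A rough sketch of the wrapper pattern (names are hypothetical, and unwinding through Lua errors is glossed over here):

    #include <lua.h>
    #include <talloc.h>   // talloc_new()/talloc_free(); mpv uses its own "ta" clone

    // Every wrapped C function gets a talloc context that is freed as soon as
    // the call returns, so temporaries never wait for the Lua GC.
    typedef int (*autofree_fn)(lua_State *L, void *tmp);

    struct fn_entry { autofree_fn fn; };

    static int autofree_trampoline(lua_State *L)
    {
        struct fn_entry *e = lua_touserdata(L, lua_upvalueindex(1));
        void *tmp = talloc_new(NULL);     // per-call allocation root
        int nresults = e->fn(L, tmp);     // all temporaries go under tmp
        talloc_free(tmp);                 // released immediately, no GC involved
        return nresults;
        // Note: if e->fn() raises a Lua error directly, the free above is
        // skipped; the real code has to route errors so cleanup still runs.
    }

    static void register_autofree(lua_State *L, const char *name, autofree_fn fn)
    {
        struct fn_entry *e = lua_newuserdata(L, sizeof(*e));
        e->fn = fn;
        lua_pushcclosure(L, autofree_trampoline, 1);   // fn_entry as upvalue
        lua_setglobal(L, name);
    }
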
Looks like the recent change to this actually made it crash whenever
audio happened to be initialized first, due to not setting the
mux_stream field before the on_ready callback. Work around this in a
somewhat messy way.
Also remove a stray unused variable from ao_lavc.c.
Lua changed behavior for this specific event. I considered the change
minor enough that it would not need to go through deprecation, but
someone hit it immediately and asked on the -dev channel.
It's probably better to restore the behavior. But mark it as deprecated,
since it's problematic (mismatch with the C API). Unfortunately, no
automatic warning is possible. (Or maybe it is, by playing sophisticated
Lua tricks such as setting a metatable and overriding indexing, but
let's not.)