This makes m_config_set_option_raw() the function that is always called
at the lowest level (as the leaf function for all other functions).
To do this, m_config_parse_option() has to do something special to deal
with "impure" options like --vf-add, which operate on the previous
option value instead of fully replacing it; m_config_set_option_raw()
itself always completely replaces the previous value.
This meant "cannot be used as a per-file option" (wrt. playlist items).
Doesn't make too much sense anymore, especially given how obscure
per-file options are.
This _actually_ does what commit 8716c2e8 promised, and gives a slight
performance improvement for client API users that make a lot of
requests (like reading properties).
The main issue was that mp_dispatch_lock() (which client.c uses to get
exclusive access to the core) still called the wakeup callback, which
made mp_dispatch_queue_process() exit. So the playloop got executed
again, and since it does a lot of stuff, performance could be reduced.
If --blend-subtitles=yes is given, vo_opengl will call osd_draw()
multiple times, once for subtitles, and once for OSD. This meant that
the want_redraw flag was reset before the OSD was rendered, which in
turn meant that update_osd() was never called. It seems like removing
the per-OSD object want_redraw wasn't such a good idea. Fix it by
reintroducing such a flag for OSDTYPE_OSD only.
Also, the want_redraw flag is now unused, so kill it.
Another regression caused by commit 9c9cf125. Fixes #3535.
If we were waiting, and then exit due to a timeout, we still have to
recheck the condition protected by the condition variable/mutex in order
to get back to a consistent state. In this case, the queue was locked
with mp_dispatch_lock(), and mp_dispatch_queue_process() got to return
without waiting for the unlock.
Also caused by commit 8716c2e8. Probably an argument for replacing the
dispatch queue by a simple mutex.
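The underlying rule is independent of mpv; a standalone sketch in plain
pthreads (not the actual mp_dispatch code) shows it: a timed wait that
ends due to timeout must still recheck the state protected by the mutex
before the function may return.

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    // Standalone sketch (plain pthreads, not mpv's dispatch queue): even when
    // the timed wait ends due to timeout, the state protected by the mutex
    // has to be rechecked before the function may return.
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool externally_locked; // stand-in for "locked via mp_dispatch_lock()"

    void process_until(const struct timespec *deadline)
    {
        pthread_mutex_lock(&lock);
        bool timed_out = false;
        while (!timed_out) {
            // ... process queued work here ...
            if (pthread_cond_timedwait(&cond, &lock, deadline) == ETIMEDOUT)
                timed_out = true;
        }
        // The timeout alone is not a valid exit condition: if someone still
        // holds the external lock, keep waiting until it is released, so we
        // return in a consistent state.
        while (externally_locked)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }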
Negative height is used to signal a flipped framebuffer. There's
absolutely no reason to pass this down to overlay_adjust(), and it only
requires implementers to deal with an additional special case.
Instead of using input_ctx for waiting, use the dispatch queue directly.
One big change is that the dispatch queue will just process commands
that come in (e.g. from client API) without returning. This should
reduce unnecessary playloop executions (which is good since the playloop
got a bit fat from rechecking a lot of conditions every iteration).
Since this doesn't force a new playloop iteration on every access, such
an iteration has to be triggered manually in some cases.
Normal input (via terminal or VO window) still wakes up the playloop
every time; that's not too important, but it makes testing this harder.
If there are missing wakeup calls, they will be noticed only when using
the client API in some form.
At this point we could probably use a normal lock instead of the
dispatch queue stuff.
They're useless, and I have no idea what they're actually supposed to do
(wrt. pending input processing changes).
Also remove their implicit uses from the IPC handlers.
This does 3 kinds of changes:
- change sleeptime=x to mp_set_timeout()
- change sleeptime=0 to mp_wakeup_core() calls (to be more explicit)
- change commands etc. to call mp_wakeup_core() if they make changes that
  require the playloop to be rerun
This is preparation for the following changes. The goal is to process
client API requests without having to rerun the playloop every time. As
of this commit, the changes should not change behavior. In particular,
the playloop is still implicitly woken up on every command.
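For illustration, the three kinds of changes listed above look roughly
like this (a sketch: the struct, the prototypes, and the handler bodies
are stand-ins; only the two function names are taken from the text):

    // Conceptual sketch only - mp_set_timeout()/mp_wakeup_core() are the
    // core functions named above, but the prototypes shown here are
    // assumptions, and the struct and bodies are not the real player code.
    struct MPContext;                                    // opaque core state
    void mp_set_timeout(struct MPContext *mpctx, double timeout);
    void mp_wakeup_core(struct MPContext *mpctx);

    // 1. "sleeptime = x" becomes an explicit timeout request:
    static void schedule_recheck(struct MPContext *mpctx)
    {
        mp_set_timeout(mpctx, 0.25); // e.g. recheck some condition in 250 ms
    }

    // 2./3. "sleeptime = 0" and commands that change state the playloop
    // must react to become explicit wakeups:
    static void cmd_change_something(struct MPContext *mpctx)
    {
        // ... apply the change here (hypothetical) ...
        mp_wakeup_core(mpctx); // make sure the playloop reruns and sees it
    }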
Currently, calling mp_input_wakeup() will wake up the core thread (also
called the playloop). This seems odd, but currently the core indeed
calls mp_input_wait() when it has nothing more to do. It's done this way
because MPlayer used input_ctx as the central "mainloop".
This is probably going to change. Remove direct calls to this function,
and replace it with mp_wakeup_core() calls. ao and vo are changed to use
opaque callbacks and not use input_ctx for this purpose. Other code
already uses opaque callbacks, or has legitimate reasons to use
input_ctx directly (such as sending actual user input).
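The opaque-callback arrangement is simple; a hedged sketch (only
mp_wakeup_core() is a real name from the text above, its prototype here
is assumed, and the rest is made up):

    // Sketch of the opaque wakeup-callback pattern. The struct and helpers
    // are made-up stand-ins for the real ao/vo fields.
    struct MPContext;
    void mp_wakeup_core(struct MPContext *mpctx);

    struct ao_like {
        void (*wakeup_cb)(void *ctx); // the ao/vo only ever sees this pair...
        void *wakeup_ctx;             // ...and never needs to know about input_ctx
    };

    // Installed by the player core when creating the ao/vo:
    static void core_wakeup_cb(void *ctx)
    {
        mp_wakeup_core(ctx);
    }

    // Called from the ao/vo side whenever the core has something new to do:
    static void notify_core(struct ao_like *ao)
    {
        ao->wakeup_cb(ao->wakeup_ctx);
    }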
This could in theory lead to missed updates if subtitles were switched
or external OSD overlays (via overlay-add) were updated. While the
change IDs of each of those sources are consistent on their own,
switching between two separate OSD sources is not covered by them, so we
have to explicitly trigger a change.
Regression since commit 9c9cf125. The new code is actually better,
because we do exactly what is needed, and don't just mess with the
update ID for libass-based OSD.
'cuda-gl' isn't right - you can turn this on without any GL and
get some non-zero benefit (with the cuda-copy hwaccel). So
'cuda-hwaccel' seems more consistent with everything else.
When playing audio-only, and changing the audio output device, playback
froze until the next time the playback core happened to wakeup (like
moving the mouse, or OSD redrawing). This is probably because of the
awful state machine in fill_audio_out_buffers() - just make it recreate
the AO directly instead.
Remove the per-part force_redraw flags, and instead make the difference
between flagging dirty state and returning it to the player frontend
more explicit. The big issue is that 1. the OSD needs to know the dirty
state, and it should be cleared strictly when it is re-rendered
(force_redraw flag), and 2. the player core needs to be notified once,
and the notification must be reset (want_redraw flag).
The call in loadfile.c is replaced by making osd_set_sub() set the
change flag. Increasing the change flag on dirty state (the force_redraw
check in render_object()) should not be needed, because OSD part
renderers set it correctly (at least now).
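The split between the two flags can be pictured with a small sketch (the
struct layout and function names below are illustrative, not the actual
osd.c identifiers):

    #include <stdbool.h>

    // Illustrative sketch of the two-flag split. One flag tracks dirty state
    // per OSD part and is cleared strictly when that part is re-rendered;
    // the other is a one-shot notification for the player core and is reset
    // once the core has seen it.
    struct osd_part_like {
        bool force_redraw; // dirty state, cleared on re-render
    };

    struct osd_state_like {
        struct osd_part_like parts[4];
        bool want_redraw;  // notification for the core, reset when queried
    };

    static void part_changed(struct osd_state_like *osd, int n)
    {
        osd->parts[n].force_redraw = true; // needs re-rendering
        osd->want_redraw = true;           // core must be told (once)
    }

    static bool osd_query_and_reset_want_redraw(struct osd_state_like *osd)
    {
        bool r = osd->want_redraw;
        osd->want_redraw = false;
        return r;
    }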
Doing this just because someone pointed this out.
This also lets you just do "mpv --hwdec file.mkv", with the minor caveat
that the legacy syntax "--hwdec val" or "-hwdec val" (without "=") does
not work as expected anymore.
The previous commit merely copied the profile string to a file (plus
changing how RPI-specific defaults are initialized), now make some
changes on top of it. In particular, remove the --input-lirc option,
which was removed from the player a long time ago, but was forgotten in
the libmpv profile.
Move the embedded string with the builtin profiles to a separate
builtin.conf file. This makes it easier to read and edit, and you can
also check it for errors with --include=etc/builtin.conf. (Normally
errors are hidden intentionally, because there's no way to output error
messages this early, and because some options might not be present on
all platforms or with all configurations.)
Just wow. This function is implemented in ipc-win.c, and was surely
meant to be called. But it wasn't called. This could in theory cause
crashes during exit if IPC clients were active.
Untested whether it really works.
This workaround prevented libmpv users from accidentally crashing when
the SIGPIPE signal was triggered by FFmpeg's OpenSSL/GnuTLS usage.
But it also modifies the global signal handler state, so remove it now
that this workaround is not required anymore.
This happened to break because the texture unit wasn't reset to 0, which
some code expects. The OSD code in particular set the OSD texture on the
wrong texture unit, with the result that OSD/OSC was not visible.
A minor cleanup that makes the code simpler, and guarantees that we
cleanup the GL state properly at any point.
We do this by reusing the uniform caching, and assigning each sampler
uniform its own texture unit by incrementing a counter. This has various
subtle consequences for the GL driver, which hopefully don't matter. For
example, it will bind fewer textures at a time, but also rebind them
more often.
For some reason we keep TEXUNIT_VIDEO_NUM, because it limits the number
of hook passes that can be bound at the same time.
OSD rendering is an exception: we do many passes with the same shader,
rebinding the texture on each pass. For now, this is handled in an
unclean way, and we make the shader cache reserve texture unit 0 for the
OSD texture. At a later point, we should allocate that one dynamically
too, and just pass the texture unit to the OSD rendering code. Right now
I feel like vo_rpi.c (may it rot in hell) is in the way.
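In isolation, the allocation scheme is just a counter; a hedged sketch
(the real shader-cache code differs in detail, and reserving unit 0 for
the OSD is the special case mentioned above):

    #include <GLES2/gl2.h>

    // Hedged sketch of "each sampler uniform gets its own texture unit by
    // incrementing a counter"; not the actual shader-cache implementation.
    struct sc_like {
        GLuint program;
        int next_texture_unit; // reset per frame; starts at 1 if unit 0 is
                               // reserved for the OSD texture
    };

    static void bind_sampler(struct sc_like *sc, const char *name, GLuint tex)
    {
        int unit = sc->next_texture_unit++;
        glActiveTexture(GL_TEXTURE0 + unit);
        glBindTexture(GL_TEXTURE_2D, tex);
        // The sampler uniform simply refers to the unit we just picked.
        glUniform1i(glGetUniformLocation(sc->program, name), unit);
    }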
The caller now has to call gl_sc_reset(), and to do so _after_
rendering. This way we can unset OpenGL state that was set up for
rendering. This affects
the shader program, for example. The next commit uses this to
automatically manage texture units via the shader cache.
vo_rpi.c changes untested.
Stops Mesa from restricting us to OpenGL 3.0. It also tries to create
GLES 3 contexts for drivers which do not simply return a higher-version
context when GLES 2 is requested.
I don't know whether this code is a good or bad idea. A not-so-good
aspect is that we don't check for EGL 1.5 (or 1.4 extensions) for some
of the more advanced context attributes. But EGL implementations should
be able to tolerate it and return an error, and then we'd use the
fallback.
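The shape of that logic, as a standalone hedged sketch (not the actual
egl_helpers.c code; the MAJOR/MINOR attribute names assume an EGL 1.5
header):

    #include <EGL/egl.h>

    // Standalone sketch of "try the advanced attributes first, fall back on
    // error"; not the actual egl_helpers.c implementation. The MAJOR/MINOR
    // attributes assume an EGL 1.5 header (older headers spell them *_KHR).
    static EGLContext create_context(EGLDisplay dpy, EGLConfig config, int use_gles)
    {
        if (use_gles) {
            eglBindAPI(EGL_OPENGL_ES_API);
            // Ask for ES 3 explicitly; some drivers won't upgrade an ES 2 request.
            EGLint es3[] = {EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE};
            EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, es3);
            if (ctx != EGL_NO_CONTEXT)
                return ctx;
            EGLint es2[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
            return eglCreateContext(dpy, config, EGL_NO_CONTEXT, es2);
        }
        eglBindAPI(EGL_OPENGL_API);
        // Desktop GL: request a higher core version; implementations that
        // don't understand these attributes should fail, and we fall back.
        EGLint core[] = {
            EGL_CONTEXT_MAJOR_VERSION, 3,
            EGL_CONTEXT_MINOR_VERSION, 3,
            EGL_NONE,
        };
        EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, core);
        if (ctx == EGL_NO_CONTEXT)
            ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
        return ctx;
    }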
This used to be shared, but since vo_rpi is going to be removed,
untangle them. There was barely any actual code shared since the recent
changes anyway.
As a subtle change, we also stop opening libGLESv2.so explicitly in the
vo_opengl backend, and use RTLD_DEFAULT instead.
Minimal support just for testing.
Only the window surface creation (including size determination) is
really platform specific, so this could be some generic thing with
platform-specific support as some sort of sub-driver, but on the other
hand I don't see much of a need for such a thing.
While most of the fbdev usage is done by the EGL driver, using this
fbdev ioctl is apparently the only way to get the display resolution.
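Querying the mode via fbdev is only a couple of lines; a minimal sketch
(the /dev/fb0 path and the trimmed error handling are assumptions, not
the driver's exact code):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    // Minimal sketch of querying the display size via the fbdev ioctl.
    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0)
            return 1;
        struct fb_var_screeninfo vinfo;
        if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) == 0)
            printf("display: %ux%u\n", (unsigned)vinfo.xres, (unsigned)vinfo.yres);
        close(fd);
        return 0;
    }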
Add a function to egl_helpers.c for creating an EGL context and make
context_x11egl.c use it. This is meant to be generic, and should work
with other windowing APIs as well. The other EGL-using code in mpv can
be switched to it.
Edit the 0.21.0 section: remove the redundant vo_opengl items, move some
up. Move the additions (which are less important and which aren't
documented completely anyway) below the incompatible
changes/deprecations.
The wrong enum got copied here, so the transfer characteristics were
essentially being used as the primaries (instead of the actual primaries
value), which accidentally worked fine most of the time (since the two
usually coincided), but broke on weird/mistagged files.
Fixes missing subtitle tracks if the first entry didn't have any.
Previously, only the first playlist entry was checked for the requested
languages, and if that entry happened to have no subtitles, they also
wouldn't show up for the other entries.
It will still skip languages if the first entry with subs has fewer or
different languages than the others.
Unrelated to http_dash_segments.
The consequence of this was that e.g. hardware decoding with VAAPI-EGL
could sometimes not work if the compiler didn't support C11. (Although I
found this one on RPI, which also uses this mechanism.)
If the shader fails to compile, an assertion could trigger in
gl_sc_gen_shader_and_reset() due to the code trying to recreate the
shader every time, and re-appending the uniforms every time. Just reset
the uniform array to fix this.
Some disturbed GL drivers might not return anything for glGetShaderiv()
if the GL state got "lost", so initialize variables just for additional
robustness.
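Concretely, the defensive pattern amounts to something like this sketch
(mpv goes through its own GL function-pointer table rather than direct
calls, but the idea is the same):

    #include <GLES2/gl2.h>

    // Sketch of the defensive initialization: if a broken driver never
    // writes to the output variable, it still holds a sane ("failed") value.
    static int shader_compiled_ok(GLuint shader)
    {
        GLint status = 0; // assume failure unless the driver says otherwise
        glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
        return status != 0;
    }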