Not sure why this was so roundabout; probably to keep spam down if the
user's OpenGL drivers are crap (but then just don't enable extended
features), or because the "Disabling..." text was so repetitious.
But there doesn't seem to be a good reason after all. Also, this could
already overflow the fixed-size disabled[] array. Just print the
messages directly.
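A minimal sketch of the direct approach (names are hypothetical; the
real code checks many extensions):

    #include <stdbool.h>

    // Instead of appending the extension name to a fixed-size
    // disabled[] array and printing one combined message at the end,
    // warn as soon as an extension is rejected.
    static void check_ext(struct MPGLContext *ctx, const char *name,
                          bool ok)
    {
        if (!ok)
            MP_WARN(ctx, "Disabling OpenGL extension: %s\n", name);
    }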
Apparently the standard handles can be set to bogus values on XP. Use
GetFileType to check whether they refer to an actual file/pipe/etc. The
logic used by is_valid_handle() is now pretty similar to what the CRT
uses to check for valid stdio handles.
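A minimal sketch of the check, assuming it's close to what
is_valid_handle() now does:

    #include <windows.h>
    #include <stdbool.h>

    static bool is_valid_handle(HANDLE h)
    {
        // GetFileType() returns FILE_TYPE_UNKNOWN for handles that
        // don't refer to an actual file/pipe/character device, which
        // catches the bogus values XP can leave in the standard
        // handle slots.
        return h != INVALID_HANDLE_VALUE && h != NULL &&
               GetFileType(h) != FILE_TYPE_UNKNOWN;
    }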
If mpv is started from Explorer or the Start Menu, it will have no
console and no standard IO handles. In this case, it's fairly safe to
enable the pseudo-gui profile.
Previously, mpv.exe used the --terminal option to decide whether to
attach to the parent process's console, which made it impossible to tell
whether mpv would attach to the console before the config files were
parsed. Instead, make mpv always attach to the console when launched
from the console wrapper (mpv.com) and never attach otherwise. This will
be useful for the next commit, which will use the presence of the
console to decide whether to use the pseudo-gui profile.
This change should also be an improvement in behavior. The old code
would attach to the parent process's console, regardless of whether it
was mpv.com or some other program like cmd.exe. This could be confusing,
since mpv.exe is marked as a Windows GUI program and shouldn't write
text to its parent process's console when launched directly. (See #768.)
Visual Studio does something similar with its devenv.com wrapper.
devenv.exe only attaches to the console when launched from devenv.com.
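As an illustration, the decision could look like this (the
environment-variable handshake shown here is an assumption, not
necessarily how mpv.com actually signals mpv.exe):

    #include <windows.h>
    #include <stdlib.h>

    static void attach_console(void)
    {
        // Hypothetical marker set by the mpv.com wrapper for its child.
        if (getenv("MPV_STARTED_FROM_CONSOLE"))
            AttachConsole(ATTACH_PARENT_PROCESS);
        // Never attach otherwise, even if the parent (e.g. cmd.exe)
        // has a console.
    }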
Add a platform-specific entry-point for Windows. This will allow some
platform-specific initialization to be added without the need for ugly
ifdeffery in main.c.
As an immediate advantage, mpv can now use a unicode entry-point and
convert the command line arguments to UTF-8 before passing them to
mpv_main, so osdep_preinit can be simplified a little bit.
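A sketch of what such an entry-point can look like (simplified; error
handling omitted):

    #include <windows.h>
    #include <stdlib.h>

    int mpv_main(int argc, char *argv[]);

    int wmain(int argc, wchar_t *argv[], wchar_t *envp[])
    {
        // Convert the UTF-16 argument vector to UTF-8 up front, so
        // the portable code never has to deal with wide strings.
        char **u8argv = calloc(argc + 1, sizeof(char *));
        for (int i = 0; i < argc; i++) {
            int len = WideCharToMultiByte(CP_UTF8, 0, argv[i], -1,
                                          NULL, 0, NULL, NULL);
            u8argv[i] = malloc(len);
            WideCharToMultiByte(CP_UTF8, 0, argv[i], -1,
                                u8argv[i], len, NULL, NULL);
        }
        return mpv_main(argc, u8argv);
    }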
Because gcc (and clang) is a goddamn PITA and unnecessarily warns if
the universal initializer for structs is used (like mp_image x = {})
and the first member of the struct is also a struct, move the w/h
fields to the top.
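For illustration (struct names made up; the relevant flag is
-Wmissing-braces, though the exact diagnostics vary by compiler
version):

    struct rect { int x0, y0, x1, y1; };

    // First member is a struct: "= {0}" makes gcc/clang suggest
    // extra braces, because the 0 initializes rect's first field.
    struct image_bad  { struct rect crop; int w, h; };
    // w/h first: the same initializer is accepted silently.
    struct image_good { int w, h; struct rect crop; };

    struct image_bad  a = {0};  // -Wmissing-braces warning
    struct image_good b = {0};  // no warning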
Remove --keep-open. Switch to --idle=once. This effectively makes the
player quit after end of playback, but still shows the idle screen if it
was started with no files.
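For illustration, a profile using these options might look like this in
mpv.conf syntax (the profile name and the surrounding options are
assumptions, not a quote of the shipped profile):

    [pseudo-gui]
    terminal=no
    force-window=yes
    idle=once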
Starting the command line player with --no-terminal, the terminal was
still initialized. This happened because update_logging() used the
option value before the options were parsed. Fix by moving the
initialization down, to just before the point where it's actually
needed.
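Schematically, the fix amounts to this reordering (function names are
hypothetical):

    static void init(struct MPContext *mpctx)
    {
        // The terminal used to be initialized before this call, when
        // opts->use_terminal still had its default value.
        parse_commandline_and_configs(mpctx);
        // Only now is the option's value meaningful.
        if (mpctx->opts->use_terminal)
            terminal_init();
    }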
They are redundant. They were used by draw_bmp.c only, and only in a
special code path that 1. used fixed image formats, and 2. had image
sizes perfectly aligned to chroma boundaries (so computing the chroma
width/height is trivial).
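Trivial here means just a shift, e.g.:

    // With w/h perfectly aligned to chroma boundaries, the chroma
    // plane size follows from the format's shift values (for 4:2:0,
    // xs = ys = 1).
    static void chroma_size(int w, int h, int xs, int ys,
                            int *cw, int *ch)
    {
        *cw = w >> xs;
        *ch = h >> ys;
    }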
This could help in cases where the DWM (Windows desktop compositor)
adds another layer of buffering, which could mess up the SwapBuffers
timing.
Signed-off-by: wm4 <wm4@nowhere>
On my Windows system this allows smoothmotion to work perfectly also in
windowed mode. There's no real right or wrong here, the only goal being
to always hit the next vsync. However, in cases where vsync timing is
jittery (as could happen with DWM), this patch aims for the middle of
the vsync cycle, to be affected as little as possible by such jitter.
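A sketch of the aiming change (names hypothetical):

    // Target the middle of the refresh cycle instead of the vsync
    // edge: timing errors of up to half a vsync interval in either
    // direction then still land the flip in the intended cycle.
    static double target_time(double next_vsync, double vsync_interval)
    {
        return next_vsync + vsync_interval / 2;
    }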
Add one vsync interval of "slack" before deciding to drop the first
frame. This should help in cases of timing jitter (sleep duration,
container timestamps, compositor vsync timing, etc.). Once the drop
threshold has been crossed, keep dropping until timing is perfectly
aligned again. This prevents crossing the drop threshold back and forth
repeatedly, and therefore makes playback more resilient to frame drops.
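The logic, roughly (simplified sketch):

    #include <stdbool.h>

    // One vsync interval of slack before the first drop; once
    // dropping has started, keep dropping until the frame is
    // perfectly aligned again, so the decision can't oscillate
    // around the threshold.
    static bool should_drop(double lateness, double vsync_interval,
                            bool already_dropping)
    {
        if (already_dropping)
            return lateness > 0;
        return lateness > vsync_interval;
    }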
When setting options like --no-video, ytdl_hook adds the "-x" argument
to youtube-dl, so that bandwidth is saved by not downloading the video
on some sites.
Increase the default queue size. This helps with "missed" frames due to
the asynchronous nature of the API. All the other VOs are synchronous,
so if rendering and displaying takes a while, the common code in vo.c
will be blocked until it can continue. But with opengl-cb, vo.c might
immediately push the next ready frame, which causes the current frame
to be dropped _if_ it wasn't rendered yet.
One could fix this by making vo.c wait a while (until the API user calls
the render function, which pulls the frame). But setting the default
queue size to 2 seems much simpler: instead of dropping the frame, it
will be pushed to the API user once the next renderer call finishes.
(This is still a bit strange, and will hopefully be cleaned up when
video scheduling is redone, but for now this appears to deliver
relatively good results.)
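Schematically (the queue structure shown is illustrative, not the
actual opengl-cb code):

    #include <stdbool.h>

    struct mp_image;

    #define QUEUE_SIZE 2

    struct frame_queue {
        struct mp_image *frames[QUEUE_SIZE];
        int num;
    };

    // With QUEUE_SIZE == 1, pushing while a frame is still pending
    // means the pending frame gets dropped; with 2, the new frame
    // just queues up and is consumed by the API user's next render
    // call.
    static bool push_frame(struct frame_queue *q, struct mp_image *img)
    {
        if (q->num == QUEUE_SIZE)
            return false; // full: caller would have to drop
        q->frames[q->num++] = img;
        return true;
    }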
Prefer the builtin one again. libavcodec still uses the ASS packet
format that uses inline timestamps, so the packet timestamps are
ignored. This again leads to additional rounding of timestamps, because
the ASS storage format only has 10ms resolution (instead of 1ms
resolution like libass). This in turn can lead to unintentional overlaps
when converting subtitles. The internal MicroDVD converter avoids this,
because it always uses packet timestamps.
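A worked example of the rounding issue (the exact rounding behavior is
illustrative):

    #include <stdio.h>

    int main(void)
    {
        // Two events 1 ms apart collapse into an overlap once stored
        // at the ASS format's 10 ms (centisecond) resolution:
        int end_ms = 2005, next_start_ms = 2006;
        int end_cs = (end_ms + 5) / 10;          // 201 (rounded)
        int next_start_cs = next_start_ms / 10;  // 200 (truncated)
        printf("end=%dcs next=%dcs\n", end_cs, next_start_cs);
        return 0;
    }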
This seems to come up often. I guess '.' vs. ':' for Lua calls is
confusing, and this part of the scripting API is the only one which
requires using it.
The af_add() function has a problem: if the inserted filter returns
AF_DETACH during init, the function will have a dangling pointer. Until
now this was avoided by making sure none of the used filters actually
return AF_DETACH, but it's getting infeasible.
Solve this by requiring passing a unique label to af_add(), which is
then used instead of the pointer.
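A sketch of the resulting interface (signatures simplified):

    struct af_stream;
    struct af_instance;

    // The caller passes a unique label and re-resolves it afterwards,
    // instead of holding a pointer that may dangle if the filter
    // returned AF_DETACH during init.
    void af_add(struct af_stream *s, char *name, char *label,
                char **args);
    struct af_instance *af_find_by_label(struct af_stream *s,
                                         char *label);

    // usage:
    //   af_add(s, "volume", "softvol", NULL);
    //   struct af_instance *af = af_find_by_label(s, "softvol");
    //   if (af) { ... }  // NULL if the filter detached itself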
Silence the usually user-visible warning about unsupported channel maps.
This might be an ALSA bug, but ALSA will never fix this behavior anyway.
(Or maybe it's a feature.)
Log some other information that might be useful.
Only reinit filters if it's actually needed. This is also slightly
easier to understand: if you look at the code, it should now be more
obvious why a reinit is needed (hopefully).
The message log level shouldn't get to decide whether something fails
or not. So replace the fatal error check on the verbose output code
path with a warning.
Avoids a confusing message printed by the vdpau code when taking a
screenshot while using software decoding (because obviously GPU readback
won't work on normal in-memory video frames).
There still might be FFmpeg demuxers which mess up if audio is disabled
(like it happened to the FLV demuxer), but these are bugs and shouldn't
happen.
This matters for png screenshots. We used to hardcode rgb24, but
libavformat's png encoder can do much more. Use the image format list
provided by the encoder, and select the best format from it (according
to the source format).
As a consequence, rgb48 (i.e. 16 bit per component) will be selected if
the source format is e.g. 10 bit yuv. This happens in accordance with
FFmpeg's avcodec_find_best_pix_fmt_of_list() function, which assumes
that 16 bit rgb should be preferred for 10 bit yuv.
This also causes it to print this message in this case:
[ffmpeg] swscaler: full chroma interpolation for destination format 'rgb48be' not yet implemented
I'm not 100% sure whether this is a problem.
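The selection itself is a straightforward libavcodec call; a sketch
(error handling omitted):

    #include <libavcodec/avcodec.h>

    // Pick the best encoder-supported destination format for the
    // given source format.
    static enum AVPixelFormat pick_pixfmt(const AVCodec *codec,
                                          enum AVPixelFormat srcfmt)
    {
        if (!codec->pix_fmts)
            return srcfmt; // encoder accepts anything
        return avcodec_find_best_pix_fmt_of_list(codec->pix_fmts,
                                                 srcfmt,
                                                 0 /*has_alpha*/,
                                                 NULL);
    }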
Use texture-from-pixmap instead of vaapi's "native" GLX support.
Apparently the latter is unused by other projects. Possibly it's broken
due to that, and to Intel's inability to provide anything non-broken in
relation to video.
The new code basically uses the X11 output method on an in-memory pixmap,
and maps this pixmap as texture using standard GLX mechanisms. This
requires a lot of X11 and GLX boilerplate, so the code grows. (I don't
know why libva's GLX interop doesn't just do the same under the hood,
instead of bothering the world with their broken/unmaintained "old"
method, whatever it did. I suspect that Intel programmers are just
genuine sadists.)
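The core of the new path, heavily abridged (fbconfig selection and
error handling omitted; the extension entry points are resolved via
glXGetProcAddress in the real code):

    #include <GL/glx.h>
    #include <va/va.h>
    #include <va/va_x11.h>

    // Render the vaapi surface into a plain X11 pixmap, then bind
    // that pixmap as a GL texture via GLX_EXT_texture_from_pixmap.
    static void draw_frame(Display *x11, VADisplay va,
                           VASurfaceID surface, Pixmap pixmap,
                           GLXPixmap glx_pixmap, int w, int h)
    {
        vaPutSurface(va, surface, pixmap, 0, 0, w, h, 0, 0, w, h,
                     NULL, 0, VA_FRAME_PICTURE);
        glXBindTexImageEXT(x11, glx_pixmap, GLX_FRONT_EXT, NULL);
        // ... draw the textured quad ...
        glXReleaseTexImageEXT(x11, glx_pixmap, GLX_FRONT_EXT);
    }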
This change was suggested in issue #1765.
The old GLX support is removed, as it's redundant and broken anyway.
One remaining issue is that the first vaPutSurface() call fails with an
unknown error. It returns -1, which is pretty strange, because vaapi
error codes are normally positive. It happened with the old GLX code
too, but does not happen with vo_vaapi. I couldn't find out why.
We handle picking out font attachments by mime type ourselves at a
higher level, so we really just want to use the mime type. Also, Matroska
is currently the only code in libavformat which uses the fonts at all,
and we can drop use of the codec IDs completely.