Originally, the header wasn't supposed to contain random compatibility
stuff, but now all that is printed with -v. Add a hack to skip it and
to reduce the noise.
This could lead to quite visible artifacts when using an appropriate ICC
profile and float FBOs. Float FBOs allow storing out-of-range values, and my
guess is that the rest of the processing chain amplified these out-of-range
values, resulting in artifacts.
OPT_STRING_VALIDATE actually did nothing. This made -vo opengl crash or
misbehave when passing an invalid value for lscale, cscale or 3dlut-size
(the only users of this option type).
The code added with this commit was apparently either forgotten in the
commit that introduced this option type, or was somehow lost.
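For illustration, a string-validate option type is just a string option whose
parser runs a user-supplied callback before accepting the value. A minimal
generic sketch of the idea (the names below are hypothetical, not mpv's
actual m_option API):

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical validator type, for illustration only. */
    typedef int (*opt_validate_fn)(const char *name, const char *value);

    static int parse_validated_string(const char *name, const char *value,
                                      opt_validate_fn validate, char **dst)
    {
        /* Without this call the validator never runs, which is the bug
         * described above: invalid values get stored and blow up later. */
        if (validate && validate(name, value) < 0)
            return -1;              /* reject; leave *dst untouched */
        free(*dst);
        *dst = strdup(value);
        return *dst ? 0 : -1;
    }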
Internally, stream_dvd.c returned DEMUXER_TYPE_MPEG_PS, and the same
value was hardcoded to enforce usage of demux_lavf in demux.c. But
"-demuxer mpegps" basically did the same, so that switch was broken
for this format. Undo this and don't request a demuxer in stream_dvd.c.
demux_lavf.c is (probably) good enough to probe correctly with DVD.
Otherwise, we'd actually have to do something completely different to
force the libavformat demuxer.
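For reference, actually forcing a specific libavformat demuxer would mean
looking it up by name and handing it to avformat_open_input() instead of
relying on probing. A hedged sketch (assuming lavf registers its MPEG-PS
demuxer under the name "mpeg"):

    #include <libavformat/avformat.h>

    /* Sketch only: bypass probing by forcing a named lavf demuxer. */
    static int open_forced_mpegps(AVFormatContext **fc, const char *url)
    {
        const AVInputFormat *fmt = av_find_input_format("mpeg");
        if (!fmt)
            return -1;
        /* cast only needed for older lavf where the parameter isn't const */
        return avformat_open_input(fc, url, (AVInputFormat *)fmt, NULL);
    }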
Autohide the menubar and/or dock only if they are present on the screen the
player is going to go fullscreen into. I thought the GUI would handle this for
me when I made the switch in 0057aa476, but lack of hardware to test on made me
embarrass myself yet again.
I reimplemented this feature with nicer code and behaviour. The code checks
separately whether to hide the menubar and the dock, while the old code used
a single check, possibly hiding stuff without need.
Added the key checks as category additions to NSScreen for readability.
This takes an approach similar to the wayland OpenGL backend. The VOFLAG_HIDDEN
flag doesn't mean "hide the window"; it is only ever used to detect the
available OpenGL extensions. On OSX it's possible to accomplish this task just
by creating the OpenGL context without attaching it to a drawable.
Using `enterFullScreenMode:withOptions:` with a screen handle other than the
current screen doesn't hide the current window in the current screen. This is a
bug in Cocoa (preparing an isolated test case and sending the rdar later).
To work around this, manually hide/show the window that the toolkit should
hide/show for us.
This bug was the result of crappy position detection in the previous code
combined with the commits moving autohide delay out of the cocoa backend and
into the core.
The hit detection was improved and now also takes into account interactions
with the Dock and Menubar. Moreover, VOCTRL_SET_CURSOR_VISIBILITY now has an
effect only if the mouse position matches this improved hit detection. This
means that interactions with the Dock and Menubar are considered, as well as
moving the mouse onto another screen.
This removes a bit of ugly code and bookkeeping, which is never bad. `drawRect`
needs to guard against different window instances, since in fullscreen the view
is wrapped in a fullscreen window provided by the toolkit (an instance of
NSFullScreenWindow to be precise).
The event handling was moved to the view so that it can still get all the
events when in the fullscreen window. Ideally these should be moved to
some NSResponder subclass within macosx_application and made available even
when no window is present. I refrained from this because "small steps".
At this point 10.6 is pretty old and we don't want to support old platforms.
I'm killing all the 10.6 compatibility code before doing more refactorings.
Next commits will also use newer Objective-C syntax such as literals and
@autoreleasepool.
The new wavpack packet format (see previous commit) doesn't work with
older libavcodec versions, so disable the new code in this case.
The version numbers are only approximate, since the libavcodec version
wasn't bumped with the wavpack change, but it's close enough.
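A sketch of the kind of compile-time guard this implies; the cutoff below is
a placeholder, since as noted the real version check can only be approximate:

    #include <libavcodec/version.h>

    /* Placeholder version cutoff (the wavpack change wasn't accompanied by a
     * version bump, so any threshold is approximate). */
    #if LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(54, 92, 100)
    #define USE_NEW_WAVPACK_PACKETS 1
    #else
    #define USE_NEW_WAVPACK_PACKETS 0
    #endif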
Libav introduced a silent API breakage by changing which wavpack packets
the libavcodec decoder accepts. Originally, the decoder accepted
Matroska-style wavpack packets. Libav commit 9b6f47c removed this
capability from the libavcodec code, and added code to libavformat's
Matroska demuxer to "rearrange" wavpack packets. Since demux_mkv still
sent Matroska-style packets, playback failed.
Fix this by "rearranging" packets in demux_mkv as well by copying
libavformat's code. (The best kind of fix.)
Tested with [CCCP]_Mega_Lossless_Audio_Test.mkv, as well as with a
sample generated by mkvmerge.
0 is invalid. The intention of the code is to turn off any additional
alignment, so we need 1.
Change a comment: obviously we don't try to set alignment parameters
etc. to handle stride correctly, and instead do everything by row.
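Presumably this is about the GL unpack/pack alignment used when transferring
image rows; a minimal sketch of the valid calls (assumes standard desktop GL
headers):

    #include <GL/gl.h>

    /* 0 is not a legal alignment; 1 disables any extra row padding, which is
     * what "turn off additional alignment" requires. */
    static void disable_row_alignment(void)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
    }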
This probes and prints the depth of some texture formats with the help
of an FBO. By default it tests the format used for scaling, as well as
the format used for dithering and the 3D LUT (if any of these are
enabled).
The output is visible only with -v. Some representative values are
probed, and the difference of input and output value is printed as hex-
float. Hex-floats are used because they make the implied precision more
obvious. Originally I wanted to do some more sophisticated guessing of
the implied depth/precision for more user-friendly reporting, but then
I decided that printing raw data is better for debugging, especially if
things go wrong.
This does not try to disable any functionality and does not print any
warnings if the depth is lower than what it should be.
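A rough sketch of such a probe (not the actual code; assumes a GL 3.x context
where the FBO entry points and GL_R* formats are available, e.g. via an
extension loader):

    #include <stdio.h>
    #include <GL/gl.h>

    /* Upload a reference value into a 1x1 texture of the candidate internal
     * format, attach it to an FBO, read it back, and print the error as a
     * hex float so the implied storage precision is obvious. */
    static void probe_tex_depth(GLenum internal_format, float ref)
    {
        GLuint tex = 0, fbo = 0;
        float out = 0;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 1, 1, 0,
                     GL_RED, GL_FLOAT, &ref);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glReadPixels(0, 0, 1, 1, GL_RED, GL_FLOAT, &out);

        printf("format 0x%x: in=%a out=%a diff=%a\n",
               (unsigned)internal_format, ref, out, out - ref);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &tex);
    }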
This might be better with dumb shader compilers, which won't vectorize
this to a single vector-division, assuming the hardware does have such
an instruction. Affects "bicubic_fast" scale mode only.
The internal texture format GL_RED is typically 8 bit, which is clearly
not good enough for the new dither matrix. The idea was to use a float
texture format, but this was somehow "forgotten". Use GL_R16, since
16 bit textures are more robust, and provide more precision for the
same memory usage.
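A minimal sketch of the upload this describes (float source data, quantized by
GL to 16 bit normalized values; GL_R16 needs GL 3.x headers or glext.h):

    #include <GL/gl.h>

    /* "size" x "size" dither matrix stored as GL_R16. */
    static void upload_dither_matrix(const float *matrix, int size)
    {
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, size, size, 0,
                     GL_RED, GL_FLOAT, matrix);
    }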
Change how the offset for centering the dither matrix is applied. This
is needed for making it possible to round up values to the target depth.
Before this commit, this changed the output even if the input was exact
and input and output depth were the same, which is not really what you
want. Now it doesn't do that anymore.
The core deselected all streams on initialization, and then selected the
streams it actually wanted. This was no problem for
demux_mkv/demux_lavf, but old demuxers (like demux_asf) could lose some
packets. The problem is that these demuxers can buffer some data on
initialization, which then is flushed on track switching. Fix this by
explicitly avoiding deselecting a wanted stream.
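Schematically (hypothetical names, not the actual demuxer API), the fix
amounts to skipping redundant switches:

    #include <stdbool.h>

    /* Hypothetical track type, for illustration only. */
    struct track {
        bool selected;
        bool wanted;
    };

    /* Don't deselect a track the core wants anyway; demuxers that buffer
     * packets during init would otherwise flush them on the redundant switch. */
    static void apply_selection(struct track *tracks, int n,
                                void (*switch_track)(struct track *, bool))
    {
        for (int i = 0; i < n; i++) {
            if (tracks[i].selected == tracks[i].wanted)
                continue;               /* no redundant (de)select */
            switch_track(&tracks[i], tracks[i].wanted);
            tracks[i].selected = tracks[i].wanted;
        }
    }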
Most of these are rather questionable; the rest you rarely need to set
manually. You still can set all of them with -lavdopts-o (because
libavcodec has AVOptions for them).
Playing something with "mpv f1.mkv f2.mkv --gapless-audio --volume=20"
caused the volume to be reset when playing a new file. Normally, the
volume should not be reset (unless explicitly requested with per-file
options), and without either --gapless-audio or --volume it works as
expected.
The underlying problem is that volume was saved only when the AO was
uninitialized, and also the volume was always set when starting a file.
Fix this by saving the volume when playback ends, and when the audio
is reinitialized. To make sure the volume is never restored twice or
saved in the wrong situation, introduce INITIALIZED_VOL.
Also note that this volume saving and restoring only happens if the
--volume option is used. mixer.c does its own bookkeeping of volume.
The main reason for this is that the volume option could be reset by
per-file options (see manpage), and mixer.c doesn't know anything
about this stuff. This is probably dumb, and maybe some things could
be simplified. But for now this will work.
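A hypothetical sketch of the flag-based bookkeeping (the names are made up,
not the actual mpv fields):

    #include <stdbool.h>

    #define INITIALIZED_VOL (1 << 0)   /* "we hold a volume worth restoring" */

    struct playback {
        unsigned initialized_flags;
        float saved_volume;
    };

    /* Called when playback ends and when audio is reinitialized. */
    static void save_volume(struct playback *pb, float vol)
    {
        pb->saved_volume = vol;
        pb->initialized_flags |= INITIALIZED_VOL;
    }

    /* Restore at most once per saved value, and only if one was saved. */
    static bool restore_volume(struct playback *pb, float *vol)
    {
        if (!(pb->initialized_flags & INITIALIZED_VOL))
            return false;
        *vol = pb->saved_volume;
        pb->initialized_flags &= ~INITIALIZED_VOL;
        return true;
    }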
When AAC is streamed over HTTP, using libavformat defaults is
pathetically slow. One solution for that is skipping probing and using
the mimetype to identify that it's AAC instead. This is what we did
before this commit (and ffmpeg does it too, but their logic is too
"inaccessible" for mpv).
This is still pretty fragile though. Make it a bit more robust by
requiring minimal probing. A probescore of 25 is reached after feeding
2 KB to libavformat (instead of > 500 KB for the normal probescore), so
use that. This is done only when streaming AAC from HTTP to reduce the
possibility of weird breakages for other formats.
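Roughly, the probing side looks like this (a sketch, not demux_lavf's actual
code; the buffer needs AVPROBE_PADDING_SIZE zeroed bytes after buf_size, as
libavformat requires):

    #include <libavformat/avformat.h>

    /* Accept a low-confidence probe result (score >= 25) when the HTTP mime
     * type already tells us the stream is AAC. */
    static const AVInputFormat *probe_aac_http(uint8_t *buf, int buf_size)
    {
        AVProbeData pd = {
            .filename = "",
            .buf      = buf,
            .buf_size = buf_size,
        };
        int score = 0;
        const AVInputFormat *fmt = av_probe_input_format2(&pd, 1, &score);
        return (fmt && score >= 25) ? fmt : NULL;
    }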
Also reduce analyzeduration. The default analyzeduration will make
libavformat read lots of data, which makes playback start slow. So we
set analyzeduration to a low value. On the other hand, doing that for
other formats is risky, because there are unspecified effects with
certain "strange" formats (like transport streams). So we do this only
if we're streaming AAC from HTTP as well.
tl;dr libavformat is shit for media players
This can control whether demux_lavf should use the HTTP mime type to
determine the format, instead of probing the data with the libavformat
API. Do this to allow easier debugging in case the mimetype is
incorrect. (This is done only for AAC streams right now.)
In commit 0e07189, I made the status line always print a newline,
instead of cutting the output at 80 columns (or if stderr is a terminal,
whatever width the terminal reports). This is better in case the
output goes to a log file or a pipe.
This caused problems for people who want to pipe raw video to mpv, so
change it again. (Not sure why they won't use FIFOs instead.)
Now output untrimmed lines if the slave mode flag is set, which makes
sense to do, too. The current slave mode is still on life support,
though.
This fixes a bug that caused the application to never leave its frontmost
position.
The idea is stolen from @donmelton who used it in MPlayerShell. Thanks!
This is basically a "do not use" label. We don't remove them yet,
because we still support FFmpeg releases where we cannot use
libavfilter for various reasons. Also, Libav causes pain as usual
due to the lack of ported mplayer filters in its codebase, so not
all filters will be available there.
There's no point duplicating all the text that is already in the man
pages, and synchronizing them is a pain. Place a link to the github
generated pages instead.
Unfortunately, the anchor '#vo-opengl' does not work. Maybe github's
rst converter just sucks, as the actually generated HTML contains
links using that anchor too, but does not generate the anchor itself.
Too bad.
If the image is not writeable, it actually has to be copied
beforehand. This was overlooked when converting the video chain to
reference counted images.
Fix a double free issue. This was overlooked when vf.c was changed to
free filter priv data automatically.
Tests with demux_mkv show that the speed doesn't change (or actually,
it seems to be faster after this change). In any case, there is not
the slightest reason why these should be inline. Functions for which
this will (probably) actually matter, like stream_read_char, are
still left inline.
This was tested with demux_mkv's indexing. For broken files without
index, demux_mkv creates an on-the-fly index. If you seek to a later
part of the file, all data has to be read and parsed until the wanted
position is found. This means demux_mkv will do mostly I/O, calling
stream_read_char() and stream_read(). This should be the most I/O
intensive non-deprecated part of mpv that uses the stream interface.
(demux_lavf has its own buffering.)
Use a different algorithm to generate the dithering matrix. This
looks much better than the previous ordered dither matrix with its
cross-hatch artifacts.
The matrix generation algorithm as well as its implementation was
contributed by Wessel Dankers aka Fruit. The code in dither.c is
his implementation, reformatted and with static global variables
removed by me.
The new matrix is uploaded as float texture - before this commit, it
was a normal integer fixed point matrix. This means dithering will
be disabled on systems without float textures.
The size of the dithering matrix can be configured, as the matrix is
generated at runtime. The generation of the matrix can take rather
long, and is already unacceptable with size 8. The default is 6,
which takes about 100 ms on a Core2 Duo system with dither.c compiled
at -O2, which I consider just about acceptable.
The old ordered dithering is still available and can be selected with the
dither=ordered sub-option. The ordered dither matrix generation code was
moved to dither.c. This function was originally
written by Uoti Urpala.
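For reference, the classic ordered (Bayer) matrix can be generated as below;
this is a self-contained sketch of the idea, not the dither.c code:

    /* Classic Bayer/"ordered" dither matrix for a power-of-two size;
     * values are normalized to [0, 1). */
    static void ordered_dither_matrix(float *m, int size)
    {
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                unsigned v = 0;
                for (int bit = 1; bit < size; bit <<= 1) {
                    unsigned xb = (x & bit) ? 1 : 0;
                    unsigned yb = (y & bit) ? 1 : 0;
                    /* low coordinate bits form the most significant
                     * base-4 digit of the threshold index */
                    v = (v << 2) | ((xb ^ yb) << 1) | yb;
                }
                m[y * size + x] = (v + 0.5f) / (float)(size * size);
            }
        }
    }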