Until now, this was for AC3 only. For PCM, we used AudioUnit in
ao_coreaudio, and the only reason ao_coreaudio_exclusive exists
is that there is no other way to passthrough AC3.
PCM support is actually rather simple. The most complicated
issue is that modern OS X versions do not support passing the data
through unchanged; instead everything must go through float. So we have
to deal with the virtual and physical formats being different, which
causes some complications.
This possibly also doesn't support some other things correctly.
For one, if the device allows non-interleaved output only, we
will probably fail. (I couldn't test it, so I don't even know
what is required. Supporting it would probably be rather
simple, and we already do it with AudioUnit.)
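
For reference, a minimal sketch of how the virtual and physical formats
of a stream can be queried with the CoreAudio HAL; the helper name is
made up, only the property selectors are real API:

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    // The virtual format is what we must write into the IOProc buffers
    // (typically Float32); the physical format is what the hardware is set
    // to. On modern OS X the two usually differ, so the conversion from the
    // source sample format to float has to be handled explicitly.
    static OSStatus get_stream_format(AudioStreamID stream, bool physical,
                                      AudioStreamBasicDescription *out)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = physical ? kAudioStreamPropertyPhysicalFormat
                                  : kAudioStreamPropertyVirtualFormat,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        UInt32 size = sizeof(*out);
        return AudioObjectGetPropertyData(stream, &addr, 0, NULL, &size, out);
    }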
Mapping of spdif formats was imperfect. Since the first format on the
list is somehow AAC, it was returned first, which is confusing, because
CoreAudio calls all spdif formats AC3. And since the spdif formats have
rather arbitrary mappings, reverse mapping the formats didn't actually
work either. Fix by explicitly ignoring these formats when spdif is used.
Also, don't forget to set the samplerate in ca_asbd_to_mpformat(), or it
will work only in some cases.
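
A rough illustration of what such a conversion has to take care of; the
struct and function names here are invented, only the CoreAudio
constants are real:

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    // Invented internal format description, for illustration only.
    struct audio_fmt {
        int samplerate;
        int bits;
        bool is_float;
        bool is_spdif;
    };

    static bool asbd_to_fmt(const AudioStreamBasicDescription *asbd,
                            struct audio_fmt *out)
    {
        *out = (struct audio_fmt){0};
        out->samplerate = (int)asbd->mSampleRate;    // easy to forget
        switch (asbd->mFormatID) {
        case kAudioFormatLinearPCM:
            out->bits = asbd->mBitsPerChannel;
            out->is_float = asbd->mFormatFlags & kAudioFormatFlagIsFloat;
            return true;
        case kAudioFormatAC3:
        case kAudioFormat60958AC3:
            // Treat all spdif variants as one opaque passthrough format
            // instead of trying to reverse-map each of them.
            out->is_spdif = true;
            return true;
        default:
            return false;
        }
    }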
This is basically a hack for drivers which prevent the mpv DXVA2 decoder
glue from working if OpenGL is in fullscreen mode.
It doesn't add any "hard" new API to the client API, some of the code
would be required for a true zero-copy hw decoding pipeline anyway, and
since it isn't too much code after all, this is probably acceptable.
For the sake of removing the separate stream/demuxer loading code.
This could probably be reimplemented in some other way, but I have no
DVB hardware for testing. The preferred way would be making DVB not
quit, and just rerunning the stream selection.
The final goal is making opening the demuxer and opening the stream the
same operation.
Stream dumping is a rather uninteresting feature, but has a small
number of vocal users, and it's easy to keep.
When seeking to a different position, and seeking takes long, the OSD
might get redrawn. This means that the VO will receive a request to
redraw an old frame using whatever the previous PTS was. This breaks the
interpolation logic: the old frame will be added to the queue, and then
the next frames (with lower PTS if you seeked backwards) are not drawn
as the logic assumes they're past frames.
Fix this by using the non-interpolation code path when redrawing after a
seek reset, and no "real" frame has been drawn yet.
It's a recent regression caused by the redrawing code simplification.
The old code simply sent a VOCTRL for redrawing the frame, and the VO
had to deal with retaining the old frame on its own.
This is a hack as in there's probably a better solution.
Fixes #2097.
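
To illustrate the idea (with made-up names and a simplified state
struct, not mpv's actual code):

    #include <stdbool.h>

    // Made-up state, for illustration only.
    struct redraw_state {
        bool seek_reset_done;    // frame queue was reset by a seek
        bool real_frame_drawn;   // a normal frame was displayed since then
        bool use_interpolation;  // interpolation is enabled at all
    };

    // Decide whether a redraw may go through the interpolation queue.
    static bool redraw_with_interpolation(const struct redraw_state *st)
    {
        // After a seek reset, the only thing available is the old frame
        // with its old PTS; feeding that into the interpolation queue
        // confuses the past/future frame logic, so take the plain redraw
        // path instead.
        if (st->seek_reset_done && !st->real_frame_drawn)
            return false;
        return st->use_interpolation;
    }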
Less code, and avoids a black flash on start.
In theory it could happen that we map the window, and then don't have a
frame to draw - but mapping the window is done at the exact moment we
have a new frame to display.
This flags stuff tried to be too clever - if there are overlapping flags
(e.g. exclusive or combined flags), the one matching the most bits has
to be chosen.
This fixes logging of the seek command. E.g. "relative" and "absolute"
overlap to make them exclusive, but "relative" was always printed as it
happened to match first.
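
A small, self-contained sketch of the "most matching bits" rule; the
table contents and names below are illustrative, not the actual ones:

    #include <stddef.h>

    // Pick the name whose bits are all contained in the value and which has
    // the most bits - so a combined flag wins over any of its subsets.
    struct flag_name {
        unsigned flags;
        const char *name;
    };

    static const char *best_flag_name(const struct flag_name *tab, int n,
                                      unsigned value)
    {
        const char *best = NULL;
        int best_bits = -1;
        for (int i = 0; i < n; i++) {
            if ((value & tab[i].flags) != tab[i].flags)
                continue;                    // not all bits present
            int bits = __builtin_popcount(tab[i].flags);
            if (bits > best_bits) {
                best_bits = bits;
                best = tab[i].name;
            }
        }
        return best;
    }

With a hypothetical table like { {1, "relative"}, {1|2, "absolute"} }, a
value of 3 now prints "absolute" instead of the first match "relative".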
This is not the most theoretically perfect solution, ideally we could
check to see if the frame in question has already been rendered
somewhere in the queue and then avoid re-rendering it, at the cost of a
few extra lines of code. But I don't think the performance trade-off is
dramatic enough here.
draw_image_timed is renamed to draw_frame. struct frame_timing is
renamed to vo_frame. flip_page_timed is merged into draw_frame (the
additional parameters are part of struct vo_frame). draw_frame also
deprecates VOCTRL_REDRAW_FRAME, and replaces it with a method that
works for both VOs which can cache the current frame, and VOs which
need to redraw it anyway.
This is preparation for making the interpolation and (work in progress)
display sync code saner.
Lots of other refactoring, and also some simplifications.
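
As a rough sketch of the resulting interface (the field selection is a
guess; the real struct vo_frame carries more information):

    #include <stdbool.h>
    #include <stdint.h>

    struct mp_image;   // decoded video frame (opaque here)
    struct vo;         // video output instance (opaque here)

    struct vo_frame {
        int64_t pts;              // target presentation time
        int64_t duration;         // estimated time until the next frame
        struct mp_image *current; // the image to render
        bool redraw;              // redrawing an already displayed frame
                                  // (replaces VOCTRL_REDRAW_FRAME)
    };

    // draw_image_timed and flip_page_timed folded into one entry point.
    struct vo_driver_sketch {
        void (*draw_frame)(struct vo *vo, struct vo_frame *frame);
        void (*flip_page)(struct vo *vo);
    };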
This should make interpolation work much better in general, although
there still might be some side effects for unusual framerates (e.g.
35 Hz or 48 Hz). Most of the common framerates (24 Hz, 30 Hz, 60 Hz)
are tested and working fine.
The new code doesn't have support for oversample yet, so it's been
removed (and will most likely be reimplemented in a cleaner way if
there's enough demand). I would recommend using something like robidoux
or mitchell instead of oversample, though - they're much
smoother for the common cases.
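
The basic idea behind this kind of interpolation, reduced to the blend
weight between two adjacent source frames (an illustration, not mpv's
actual code):

    // Blend weight between the two source frames bracketing a vsync,
    // based on where the vsync falls between their timestamps.
    static double blend_factor(double prev_pts, double next_pts,
                               double vsync_pts)
    {
        if (next_pts <= prev_pts)
            return 0.0;                     // degenerate: just show prev
        double x = (vsync_pts - prev_pts) / (next_pts - prev_pts);
        if (x < 0.0) x = 0.0;
        if (x > 1.0) x = 1.0;
        return x;                           // 0 = only prev, 1 = only next
    }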
For now, this is trivial (and actually redundant). The future display
sync code will make better use of it. The main point is that the new
internal API pretty much makes this transparent to the vo_opengl
interpolation code.
Now the VO can request a number of future frames with the last parameter
of vo_set_queue_params(). This will be helpful to fix the interpolation
code.
Note that the first frame (after playback start or seeking) will usually
not have any future frames (to make seeking fast). Near the end of the
file, the number of future frames will become lower as well.
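
A hypothetical consumer-side view of such a lookahead (the struct and
names below are invented; only the fact that the last parameter of
vo_set_queue_params() requests future frames comes from this change):

    #include <stddef.h>

    struct mp_image;

    // frames[0] is the current frame, the rest are the requested future
    // frames. Near the end of the file (or right after a seek) fewer
    // entries than requested may be available.
    struct frame_lookahead {
        struct mp_image *frames[4];
        int num_frames;
    };

    // Example consumer: an interpolating VO that wants the next frame.
    static struct mp_image *next_frame(const struct frame_lookahead *f)
    {
        return f->num_frames > 1 ? f->frames[1] : NULL;
    }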
Until now, it only used the hash from the previous configure run,
instead of trying to get the latest hash. The "old" build system did
this correctly - we just have to use the existing logic in version.sh.
Since waf supports separate build dirs, extend version.sh with an
argument for setting the path of version.h.
May help with (supposedly) bad drivers, which can put the device into
some sort of broken state when trying to set a different physical
format. When the previous format is restored, it apparently recovers.
This might make the change-physical-format suboption more robust.
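
Roughly, the backup/restore pattern looks like this (a sketch only;
error reporting and the mpv glue are omitted):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    struct format_backup {
        AudioStreamID stream;
        AudioStreamBasicDescription original;
        bool changed;
    };

    static const AudioObjectPropertyAddress phys_fmt_addr = {
        .mSelector = kAudioStreamPropertyPhysicalFormat,
        .mScope    = kAudioObjectPropertyScopeGlobal,
        .mElement  = kAudioObjectPropertyElementMaster,
    };

    // Remember the current physical format before changing it ...
    static bool backup_and_set(struct format_backup *b, AudioStreamID stream,
                               const AudioStreamBasicDescription *want)
    {
        UInt32 size = sizeof(b->original);
        b->stream = stream;
        b->changed = false;
        if (AudioObjectGetPropertyData(stream, &phys_fmt_addr, 0, NULL,
                                       &size, &b->original) != noErr)
            return false;
        b->changed = AudioObjectSetPropertyData(stream, &phys_fmt_addr, 0,
                                                NULL, sizeof(*want),
                                                want) == noErr;
        return b->changed;
    }

    // ... and put it back on uninit, so fragile drivers recover.
    static void restore_format(struct format_backup *b)
    {
        if (b->changed)
            AudioObjectSetPropertyData(b->stream, &phys_fmt_addr, 0, NULL,
                                       sizeof(b->original), &b->original);
    }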
We can be pretty sure that AudioUnit will remix for us.
Before this commit, we usually upmixed to stereo, because the
stereo and multichannel layouts were the only whitelisted ones.
Again. With the old OpenGL interop dropped, this probably works better
than vaapi-copy now. Last time we defaulted to vaapi-copy, because the
OpenGL interop could swap U/V planes and other stupid crap. We'll see.
Work around the fact that FFmpeg doesn't distinguish between surface and cropped
size. The decoder always aligns the surface size to something
"convenient" (e.g. 16 for h264), and to get to the correct cropped size,
the output image's width/height is reduced. Using the cropped size
instead of the real surface size breaks the libva API in certain cases,
so we simply store and use the original size in our per-surface struct.
(If data is cropped on the left/top borders, hw decoding will simply
display these - FFmpeg doesn't let us do better.)
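
A sketch of the per-surface bookkeeping idea (names are hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    // Remember the real (aligned) surface size at allocation time, because
    // the output image's width/height will later be reduced to the cropped
    // display size.
    struct surface_entry {
        uint32_t id;        // e.g. a VASurfaceID
        int surface_w;      // allocated surface size (e.g. aligned to 16)
        int surface_h;
        bool in_use;
    };

    // When calling into libva for this surface, use surface_w/surface_h
    // instead of the (possibly cropped) image dimensions.
    static void va_dimensions(const struct surface_entry *e, int *w, int *h)
    {
        *w = e->surface_w;
        *h = e->surface_h;
    }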
In theory, this code path avoids a copy. In practice, it never seems
to get enabled at all. But it does have potential for weird bugs or
performance issues (like being mapped from non-cacheable memory),
so kill it.