These must be written even if there was no "final frame", e.g. because
the player was quit with "q".
The issue is mostly of a theoretical nature, as most audio codecs don't
need the final encoding calls with NULL data. It may become more
relevant in the future.
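For reference, draining the encoder with libavcodec's encode API of
that time looks roughly like this (a sketch, not the actual encoding
code; error handling omitted, and write_packet() is a hypothetical
muxer call):

    AVPacket pkt;
    int got_packet = 1;
    while (got_packet) {
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        // a NULL frame signals EOF and drains buffered audio
        if (avcodec_encode_audio2(avctx, &pkt, NULL, &got_packet) < 0)
            break;
        if (got_packet)
            write_packet(&pkt);
    }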
This used to be in bytes; now it's in samples. Divide the value by 8
(assuming a typical audio format, float samples with 2 channels: 4
bytes per sample * 2 channels = 8 bytes).
Fix an editing mistake or some nonsense about the extra buffering added
(1<<x instead of x<<5).
Also sneak in a s/MPlayer/mpv/.
This changes option parsing as well as filter defaults slightly. The
default is now to encode to spdif (this is way more useful than writing
raw AC3 - what was this even useful for, other than writing broken
ac3-in-wav files?). The bitrate parameter is now always in kbps.
Apparently this was completely broken after commit 22b3f522: it locked
up immediately while decoding the first packet.
The reason was that the buffer calculations confused bytes and number of
samples. Also, EOF reporting was broken (wrong return code).
The special-casing of ad_mpg123 and ad_spdif (with DECODE_MAX_UNIT) is a
bit annoying, but will eventually be solved in a better way.
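For reference, the conversion that got mixed up (interleaved case,
where one "sample" covers all channels):

    // e.g. float stereo: 4 bytes per value * 2 channels = 8 bytes
    size_t bytes = (size_t)samples * channels * sample_size;
    size_t samples_back = bytes / (channels * sample_size);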
This could cause the bundle to recache fonts because of differences
with the configuration of other software using fontconfig. The default
OS X directories should be added to fontconfig at build time (through
configure).
Slightly simplifies memory management. This might make adding a demuxer
cache wrapper easier at a later point, because you can just copy the
complete stream header, without worrying that the wrapper will free the
individual stream header fields.
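Sketch of the idea (not literal code; assumes field lifetimes work as
described above):

    // Fields like the codec name or extradata are never freed
    // individually, so a wrapper could duplicate the header with a
    // plain shallow copy:
    struct sh_stream copy = *sh;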
The existing code tried to remove the "extra" profile flags for h264.
FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
only for profiles the vdpau/vaapi APIs don't support.
The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
H264_BASELINE to form CONSTRAINED_BASELINE, which (unlike plain
BASELINE) is a real subset of H264_MAIN and H264_HIGH. Removing that
flag would select the BASELINE profile,
which appears to be rarely supported by hardware decoders. This means we
accidentally rejected perfectly hardware decodable files. Use MAIN for
it instead.
(vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
a new thing, and is not reported as supported where I tried. So don't
bother to check it, and do the same as on vdpau.)
See github issue #204.
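The resulting mapping is approximately this (not the literal vd_lavc
code):

    int profile = avctx->profile;
    // INTRA appears only on profiles the hw APIs can't do anyway
    profile &= ~FF_PROFILE_H264_INTRA;
    // constrained baseline is a real subset of MAIN/HIGH, so map it
    // to MAIN instead of stripping the flag (which picks BASELINE)
    if (profile == FF_PROFILE_H264_CONSTRAINED_BASELINE)
        profile = FF_PROFILE_H264_MAIN;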
This used to be needed for teletext support. Teletext support has been
removed (see commit ebaaa41f), and it appears this code is inactive.
It was just forgotten with the removal. Get rid of it completely.
Untested. (Like all changes to the TV code.)
Signed-off-by: wm4 <wm4@nowhere>
Significant modifications over the original patch: instead of
overriding syscalls with macros ("#define open v4l2open") for the
fallback, it's done the other way around ("#define v4l2open open"). As
a consequence, the calls have to be replaced throughout the file.
Untested, although the original patch probably was tested.
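Schematically (the macro and config names here are illustrative, not
verbatim from the patch):

    // old: #define open v4l2open etc., rewriting unrelated open() calls
    // new: call the v4l2 wrapper names everywhere, and map them back to
    // the plain syscalls when building without libv4l2:
    #ifndef HAVE_LIBV4L2
    #define v4l2_open   open
    #define v4l2_ioctl  ioctl
    #define v4l2_close  close
    #endif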
Apparently this is not portable to FreeBSD. It turns out that we
(probably) don't use any symbols defined by this header directly, so
the includes are not needed.
These use the _oldargs_ hack, which failed in combination with playback
resume. Make it work.
It would be better to port all filters to new option parsing, but that's
obviously too much work, and most filters will probably be deleted and
replaced by libavfilter in the long run.
ao_null should simulate a "perfect" AO, but framestepping behaved quite
badly with it. Framestepping usually exposes problems with AOs dropping
their buffers on pause, and that's what happened here.
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
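The caller side then reduces to a loop along these lines (a sketch;
function and field names approximate):

    // keep calling the decoder until enough samples are buffered; each
    // call typically decodes at most one packet
    while (mp_audio_buffer_samples(da->buffer) < minsamples) {
        int r = da->ad_driver->decode_audio(da, da->buffer, maxsamples);
        if (r <= 0)
            break; // error or EOF - the caller decides how to proceed
    }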
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for the old format, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
libav* is generally freaking horrible, and might do bad things if the
data pointers passed to it are not aligned. One way to be sure that the
alignment is correct is allocating all pointers using av_malloc().
It's possible that this is not needed at all, though. For now it might
be better to keep this, since the mp_audio code is intended to replace
another buffer in dec_audio.c, which is currently av_malloc() allocated.
The original reason why this uses av_malloc() is apparently because
libavcodec used to directly encode into mplayer buffers, which is not
the case anymore, and thus (probably) doesn't make sense anymore.
(The commit subject uses the word "cargo cult", after all.)
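So for now the allocations stay on av_malloc(), e.g.:

    // av_malloc() returns suitably aligned memory (16/32 bytes,
    // depending on build), which some libavcodec paths implicitly
    // expect
    float *buf = av_malloc(samples * channels * sizeof(float));
    if (!buf)
        abort();
    // ... use buf ...
    av_free(buf);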
Remove the awkward planarization. It had to be done because the AC3
encoder requires planar formats, but now we support them natively.
Try to simplify buffer management with mp_audio_buffer.
Improve checking for buffer overflows and out of bound writes. In
theory, these shouldn't happen due to AC3 fixed frame sizes, but being
paranoid is better.
The format negotiation is the same, except don't confusingly copy the
input format into af->data, just to overwrite it later. af->data should
always contain the output format, and the existing code was just a very
misguided use of the af_test_output() helper function.
Just set af->data to the output format immediately, and modify the input
format properly.
Also, if format negotiation fails (and needs another iteration), don't
initialize the libavcodec encoder.
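Inside the filter's control() switch this looks roughly like the
following (out_format, wanted_format and init_encoder() are
placeholders, not the real af_lavcac3enc names):

    case AF_CONTROL_REINIT: {
        struct mp_audio *in = arg;
        // af->data always describes the output
        mp_audio_copy_config(af->data, in);
        mp_audio_set_format(af->data, out_format);
        // adjust the input format in place; if it changed, the caller
        // renegotiates the chain and calls REINIT again
        if (in->format != wanted_format) {
            mp_audio_set_format(in, wanted_format);
            return AF_FALSE; // don't init the encoder yet
        }
        return init_encoder(af) ? AF_OK : AF_ERROR;
    }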
Before this commit, the af_instance->mul/delay values were in bytes.
Using bytes is confusing for non-interleaved audio, so switch mul to
samples, and delay to seconds. For delay, seconds are more intuitive
than bytes or samples, because it's used for the latency calculation.
We also might want to replace the delay mechanism with real PTS
tracking inside the filter chain some time in the future, and PTS
will also require time-adjustments to be done in seconds.
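In other words, roughly (a sketch; s->first is the head of the filter
chain):

    // mul: output samples produced per input sample
    int out_samples = in_samples * af->mul;

    // delay: seconds of audio buffered inside a filter; summed over
    // the chain for the latency estimate
    double latency = 0;
    for (struct af_instance *cur = s->first; cur; cur = cur->next)
        latency += cur->delay;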
For most filters, we just remove the redundant mul=1 initialization.
(Setting this used to be required, but not anymore.)
Since ao_openal simulates multi-channel audio by placing a bunch of
mono-sources in 3D space, non-interleaved audio is a perfect match for
it. We just have to remove the interleaving code.
ALSA supports non-interleaved audio natively using a separate API
function for writing audio. (Though you have to tell it about this on
initialization.) ALSA doesn't have separate sample formats for this,
so just pretend to negotiate the interleaved format, and assume that
all non-interleaved formats have an interleaved companion format.
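Abridged (the real ao_alsa code has full error handling):

    // during init: request the non-interleaved access type
    snd_pcm_hw_params_set_access(pcm, params,
                                 SND_PCM_ACCESS_RW_NONINTERLEAVED);

    // playback: one pointer per channel instead of one packed buffer
    snd_pcm_sframes_t written = snd_pcm_writen(pcm, data, samples);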
Replace the code that used a single buffer with mp_audio_buffer. This
also enables non-interleaved output operation, although it's still
disabled, and no AO supports it yet.
Implementation wise, this could be much improved, such as using a
ringbuffer that doesn't require copying data all the time. This is
why we don't use mp_audio directly instead of mp_audio_buffer.
This comes with two internal AO API changes (sketched below):
1. ao_driver.play now can take non-interleaved audio. For this purpose,
the data pointer is changed to void **data, where data[0] corresponds to
the pointer in the old API. Also, the len argument as well as the return
value are now in samples, not bytes. "Sample" in this context means the
unit of the smallest possible audio frame, i.e. sample_size * channels.
2. ao_driver.get_space now returns samples instead of bytes. (Similar to
the play function.)
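Sketched (other callbacks omitted):

    struct ao_driver {
        // len and return value are in samples; for interleaved formats
        // only data[0] is used
        int (*play)(struct ao *ao, void **data, int samples, int flags);
        int (*get_space)(struct ao *ao); // free space, in samples
    };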
Change all AOs to use the new API.
The AO API as exposed to the rest of the player still uses the old API.
It's emulated in ao.c. This is purely to split the commits changing all
AOs and the commits adding actual support for outputting N-I audio.
Allocate af_instance->data in generic code before filter initialization.
Every filter needs af->data (since it contains the output
configuration), so there's no reason why every filter should allocate
and free it.
Remove RESIZE_LOCAL_BUFFER(), and replace it with mp_audio_realloc_min().
Interestingly, most code becomes simpler, because the new function takes
the size in samples, and not in bytes. There are larger changes in
af_scaletempo.c and af_lavcac3enc.c, because these had copied and
modified versions of the RESIZE_LOCAL_BUFFER macro/function.
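Typical use is now simply (sketch):

    // make sure the buffer can hold at least out_samples samples; the
    // function derives the byte size from the configured format
    mp_audio_realloc_min(af->data, out_samples);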
No AO can handle these, so it would be a problem if they get added
later, and non-interleaved formats get accepted erroneously. Let them
gracefully fall back to other formats.
Most AOs actually would fall back, but to an unrelated format. This is
covered by this commit too, and if possible they should pick the
interleaved variant if a non-interleaved format is requested.
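A sketch of such a fallback (helper names as far as I recall; treat
them as approximate):

    if (AF_FORMAT_IS_PLANAR(format)) {
        // prefer the interleaved twin of the requested format over
        // some arbitrary unrelated fallback
        format = af_fmt_from_planar(format);
    }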
Based on earlier work by Stefano Pigozzi.
There are 2 changes (see the sketch after the list):
1. Instead of mp_audio.audio, mp_audio.planes[0] must be used.
2. mp_audio.len used to contain the size of the audio in bytes. Now
mp_audio.samples must be used. (Where 1 sample is the smallest unit
of audio that covers all channels.)
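Sketch (using sstride as the per-sample byte size is an assumption of
this example):

    // before:
    //   float *buf = data->audio;
    //   int bytes  = data->len;
    // after:
    float *buf  = (float *)data->planes[0]; // plane 0 == old pointer
    int samples = data->samples;            // covers all channels
    int bytes   = samples * data->sstride;  // if a byte count is needed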
Also, some filters need changes to reject non-interleaved formats
properly.
Nothing uses the non-interleaved features yet, but this is needed so
that things don't just break when doing so.