Commit Graph

35 Commits

Author SHA1 Message Date
Kacper Michajłow b68c742b27 demux/packet: add support for ITU T.35 metadata in Matroska 2024-04-29 01:37:02 +02:00
Thomas Weißschuh 9efce6d4ae various: drop unused #include "config.h"
Most sources don't need config.h.
The inclusion only leads to lots of unneeded recompilation if the
configuration is changed.
2023-02-20 14:21:18 +00:00
Philip Langdale 33e73f4efd demux: new packet should not point to source buffer when copying data
In `new_demux_packet_from`, we initialise a new packet and allocate a
buffer which we copy the source data into. But I was then assigning
the original source pointer as the packet's buffer, rather than keeping
the newly allocated one. Whoops.
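
A minimal sketch of the intended behaviour (the struct layout and
allocation calls here are simplified assumptions, not the actual mpv
code):

    #include <stdlib.h>
    #include <string.h>

    // Simplified stand-in for the real demux_packet.
    struct demux_packet {
        unsigned char *buffer;
        size_t len;
    };

    // Copy the caller's data into a freshly allocated buffer and keep
    // *that* buffer in the packet -- not the caller's pointer.
    struct demux_packet *new_demux_packet_from(const void *data, size_t len)
    {
        struct demux_packet *dp = calloc(1, sizeof(*dp));
        if (!dp)
            return NULL;
        dp->buffer = malloc(len ? len : 1);
        if (!dp->buffer) {
            free(dp);
            return NULL;
        }
        memcpy(dp->buffer, data, len);  // copy, don't alias the source
        dp->len = len;
        return dp;
    }
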
2023-01-06 14:12:44 -08:00
Philip Langdale d628e6108e demux: actually initialise packet buffer when creating new packet
When I refactored the code here to stop using stack allocated
AVPackets, I forgot to assign `dp->buffer` after initialising the
AVPacket.

Fixes #11097
2023-01-06 14:03:36 -08:00
Philip Langdale cb15bc4324 demux: replace deprecated usage of stack allocated AVPackets
In the previous change, I replaced the callsites that used
`av_init_packet`, but there are a handful of places that use stack
allocated packets with no initialisation.

In one case, I just switched to heap allocation as it's only done once
per stream at most.

In the other case, I removed our usage of the AVPackets as a
convenience mechanism to transfer data into a heap allocated packet.
Instead, I inlined the data copying.
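
The first case roughly amounts to this API migration (a sketch of the
pattern, not the actual mpv change):

    #include <libavcodec/packet.h>

    // Deprecated pattern:
    //     AVPacket pkt;
    //     av_init_packet(&pkt);
    //     ... use pkt ...
    //
    // Replacement pattern: heap-allocate through the API instead.
    static void example(void)
    {
        AVPacket *pkt = av_packet_alloc();  // at most once per stream here
        if (!pkt)
            return;
        /* ... fill and use the packet ... */
        av_packet_free(&pkt);               // also unreferences any data
    }
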
2022-12-24 09:55:37 -08:00
sfan5 b1bf4393a8 demux/packet: replace deprecated av_init_packet() 2022-01-10 22:56:52 +01:00
wm4 7d11eda72e Remove remains of Libav compatibility
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.

Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.

The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used some big preprocessor mess to make
it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
2020-02-16 15:14:55 +01:00
wm4 389f1b0ef3 packet: fix theoretical UB if called on "empty" packets
In theory, a 0-size allocation could have led to memset() being called
on a NULL pointer (with a non-0 size, which would crash in addition to
the theoretical UB).

This should never happen, since even packets with size 0 should have an
associated allocation, as FFmpeg currently does. But avoiding this makes
the API slightly more orthogonal and less tricky, I guess.
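
A sketch of the problematic pattern and a trivial guard (illustrative
only; the names here are assumptions, not the actual fix):

    #include <string.h>

    // memset() on a NULL pointer is undefined behaviour even with size 0,
    // and crashes outright with a non-0 size. So only touch the padding
    // if there really is an allocation behind the packet.
    static void zero_padding(unsigned char *data, size_t data_len,
                             size_t padding)
    {
        if (data)
            memset(data + data_len, 0, padding);
    }
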
2019-09-19 20:37:05 +02:00
wm4 9a7a6958ca Revert "demux/packet: fix demux_packet_shorten"
This reverts commit 95636c65e7.

This change shouldn't be needed, and in fact it's wrong. The FFmpeg API
function could do anything it wants with the packet, including changing
the packet data pointer. Likewise, it's not guaranteed that the
referenced packet's fields mirror the current state of the mpv packet
struct (the AVPacket is only kept for the AVBuffer and the side data
stuff).
2019-09-19 20:37:05 +02:00
wm4 11027e99f2 packet: change memory estimation heuristics
Determining how much memory something uses is very hard, especially in
high level code (yes we call code using malloc high level). There's no
way to get an exact amount, especially since the malloc arena is shared
with the entire process anyway. So the demuxer packet cache tries to get
by with an estimate using a number of rough guesses.

It seems this wasn't quite good. In some ways, it was too optimistic, in
others it seemed to account for too much data. Try to get it closer to
what malloc and ta probably do. In particular, talloc adds some
significant overhead (using talloc for mass-data was a mistake, and it's
even my fault). The result appears to match better with measured memory
usage. This is still extremely dependent on malloc implementation and so
on.
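Purely as an illustration of the kind of estimate meant here (the
constants are invented, not the values this commit uses):

    #include <stddef.h>

    // Hypothetical per-packet memory estimate: payload plus guessed
    // overhead for the packet struct, the malloc header/alignment, and
    // talloc bookkeeping. 64 and 96 are illustrative guesses only.
    static size_t estimate_packet_memory(size_t payload_len)
    {
        const size_t alloc_overhead  = 64;
        const size_t talloc_overhead = 96;
        return payload_len + alloc_overhead + talloc_overhead;
    }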

The effect is that you may need to adjust the demuxer cache limits to
cache as much data as it did before this commit. In any case, seems to
be better for me.
2019-09-19 20:37:05 +02:00
wm4 3cea180cc0 packet: free some unnecessary memory in disk cache case
If the disk cache is used, the AVPacket is not used anymore and is
completely deallocated when the packet is written to disk. As a minor
bug, the AVPacket allocation itself was not freed (although it wasn't a
memory leak, since talloc still automatically freed it when the entire
demux_packet was freed). For very large caches, this could easily add up
to over a hundred MB, so actually free the unneeded allocation.
2019-09-19 20:37:05 +02:00
wm4 17da9071a4 demux: add a on-disk cache
Somewhat similar to the old --cache-file, except for the demuxer cache.
Instead of keeping packet data in memory, it's written to disk and read
back when needed.

The idea is to reduce main memory usage, while allowing fast seeking in
large cached network streams (especially live streams). Keeping the
packet metadata on disk would be rather hard (it would require mmap or
so, or rewriting the entire demux.c packet queue handling), and since
it's relatively small, just keep it in memory.

Also for simplicity, the disk cache is append-only. If you're watching
really long livestreams, and need pruning, you're probably out of luck.
This still could be improved by trying to free unused blocks with
fallocate(), but since we're writing multiple streams in an interleaved
manner, this is slightly hard.

Some rather gross ugliness in packet.h: we want to store the file
position of the cached data somewhere, but on 32 bit architectures, we
don't have any usable 64 bit members for this, just the buf/len fields,
which add up to 64 bit - so the shitty union aliases this memory.
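
The aliasing described above looks roughly like this (a sketch with
assumed field names, not the literal packet.h):

    #include <stdint.h>
    #include <stddef.h>

    // On 32-bit targets there is no spare 64-bit field, so the file
    // position of on-disk data reuses the storage of buffer/len. Only
    // one view is valid at a time, depending on whether the payload is
    // cached to disk.
    struct demux_packet_sketch {
        union {
            struct {
                unsigned char *buffer;   // in-memory payload
                size_t len;
            };
            int64_t cached_data_pos;     // file offset of the payload
        };
    };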

Error paths untested. Side data (the complicated part of trying to
serialize ffmpeg packets) untested.

Stream recording had to be adjusted. Some minor details change due to
this, but probably nothing important.

The change in attempt_range_joining() is because packets in cache
have no valid len field. It was a useful check (heuristically
finding broken cases), but not a necessary one.

Various other approaches were tried. It would be interesting to list
them and to mention the pros and cons, but I don't feel like it.
2019-09-19 20:37:05 +02:00
wm4 0f6cda4ab1 demux: mess with seek range updates and pruning
The main thing this commit does is removing demux_packet.kf_seek_pts. It
gets rid of 8 bytes per packet. Which doesn't matter, but whatever.

This field was involved with much of seek range updating and pruning,
because it tracked the canonical seek PTS (i.e. start PTS) of a packet
range. We have to deal with timestamp reordering, and assume the start
PTS is the lowest PTS across all packets (not necessarily just the first
packet). So knowing this PTS requires looping over all packets of a
range (no, the demuxer isn't going to tell us, that would be too sane).

Having this as packet field was perfectly fine. I'm just removing it
because I started hating extra packet fields recently.

Before this commit, this value was cached in the kf_seek_pts field (and
computed "incrementally" when adding packets). This commit computes the
value on demand (compute_keyframe_times()) by iterating over the packet
list. There is some similarity with the state before 10d0963d85,
where I introduced the kf_seek_pts field - maybe I'm just moving in
circles. The commit message claims something about quadratic complexity,
but if the code before that had this problem, this new commit doesn't
reintroduce it, at least. (See below.)
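
Conceptually, the on-demand computation is just a linear scan for the
lowest PTS over the packets of a keyframe range (a sketch; the real
compute_keyframe_times() works on mpv's internal packet queue):

    #include <math.h>

    struct pkt {                 // minimal stand-in for demux_packet
        double pts;              // NAN if unknown
        struct pkt *next;
    };

    // Seek PTS of a keyframe range = lowest PTS across all its packets,
    // since timestamps may be reordered within the range.
    static double keyframe_seek_pts(struct pkt *first, struct pkt *end)
    {
        double start = NAN;
        for (struct pkt *p = first; p != end; p = p->next) {
            if (!isnan(p->pts) && (isnan(start) || p->pts < start))
                start = p->pts;
        }
        return start;
    }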

The pruning logic is simplified (I think?) - there is no "incremental"
cached pruning decision anymore (next_prune_target is removed), and
instead it simply prunes until the next keyframe like it's supposed to.
I think this incremental stuff was only there because of very old code
that got refactored away before. I don't even know what I was thinking
there, it just seems complex. Now the seek range is updated when a
keyframe packet is removed.

Instead of using the kf_seek_pts field, queue->seek_start is used to
determine the stream with the lowest timestamp, which should be pruned
first. This is different, but should work well. Doing the same as the
previous code would require compute_keyframe_times(), which would
introduce quadratic complexity.

On the other hand, it's fine to call compute_keyframe_times() when the
seek range is recomputed on pruning, because this is called only once
per removed keyframe packet. Effectively, this will iterate over the
packet list twice instead of once, and with some locality. The same
happens when packets are appended - it loops over the recently added
packets once again. (And not more often, which would go above linear
complexity.)

This introduces some "cleverness" with avoiding calling
update_seek_ranges() even when keyframe packets are added/removed, which is
not really tightly coupled to the new code, and could have been in a
separate commit.

Removing next_prune_target achieves the same as commit b275232141,
which is hereby reverted (stale is_bof flags prevent seeking before the
current range, even if the beginning of the file was pruned). The seek
range is now strictly computed after at least one packet was removed,
and stale state should not be possible anymore.

Range joining may over-allocate the index a little. It tried hard to
avoid this before by explicitly freeing the old index before creating a
new one. Now it iterates over the old index while adding the entries to
the new one, which is simpler, but may allocate twice the memory in the
worst case. It's not going to matter for anything, though.

Seeking will be slightly slower. It needs to compute the seek PTS values
across all packets in the vicinity of the seek target. The previous code
also iterated over these packets, but now it iterates over one more
packet range.

Another minor detail is that the special seeking code for SEEK_FORWARD
goes away. The seeking code will now iterate over the very last packet
range too, even if it's incomplete (i.e. packets are still being
appended to it). It's fine that it touches the incomplete range, because
the seek_end fields prevent anything particularly incorrect from
happening. On the other hand, SEEK_FORWARD can now consider this as a seek
target, which the deleted code had to do explicitly, as kf_seek_pts was
unset for incomplete packet ranges.
2019-09-19 20:37:05 +02:00
wm4 aa03ee7300 demux: redo timed metadata
The old implementation didn't work for the OGG case. Discard the old
shit code (instead of fixing it), and write new shit code. The old code
was already over a year old, so it's about time to rewrite it for no
reason anyway.

While it's true that the old code appears to be broken, the main reason
to rewrite this is to make it simpler. While the amount of code seems to
be about the same, both the concept and the actual tag handling are
simpler. The result is probably a bit more correct.

The packet struct shrinks by 8 bytes. The fact that it wasted 8 bytes
per packet for a rather obscure use case was the reason I started this
at all (and when I found that OGG updates didn't work). While these 8
bytes aren't going to hurt, the packet struct was getting too bloated.
If you buffer a lot of data, these extra fields will add up. Still quite
some effort for 8 bytes. Fortunately, it's not like there are any
managers that need to be convinced whether it's worth doing. The freedom
to waste time on dumb shit.

The old implementation attached the current metadata to each packet.
When the decoder read the packet, the packet's metadata was made
current. The new implementation stores metadata as separate list, and
requires that the player frontend tells it the current playback time,
which will be used to find the currently valid metadata. In both cases,
the objective was to correctly update metadata even if a lot of data is
buffered ahead (and to update them correctly when seeking within the
demuxer cache).
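
The lookup side of that is simple: keep a list of metadata snapshots
ordered by time and return the newest one that is not in the future of
the given playback time (a sketch with assumed names):

    #include <stddef.h>

    struct timed_metadata_sketch {
        double pts;      // time from which this snapshot is valid
        void *tags;      // the merged tag set (opaque here)
    };

    // Return the most recent snapshot at or before the playback time.
    // The list is assumed to be sorted by ascending pts.
    static struct timed_metadata_sketch *
    lookup_timed_metadata(struct timed_metadata_sketch *list, size_t count,
                          double playback_pts)
    {
        struct timed_metadata_sketch *best = NULL;
        for (size_t n = 0; n < count; n++) {
            if (list[n].pts > playback_pts)
                break;
            best = &list[n];
        }
        return best;
    }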

The new implementation is actually slightly more correct, because it
uses the playback time for the metadata lookup. Consider if you have an
audio filter which buffers 15 seconds (unfortunately such a filter
exists), then the old code would update the current title 15 seconds too
early, while the new one does it correctly.

The new code also simplifies mixing the 3 metadata sources (global, per
stream, ICY). We assume these aren't mixed in a meaningful way. The old
code tried to be a bit more "exact". I didn't bother to look how the old
code did this, but the new code simply always "merges" with the previous
metadata, so if a newer tag removes a field, it's going to stick around
anyway.

I tried to keep it simple. Other approaches include making metadata a
special sh_stream with metadata packets. This would have been
conceptually clean, but the implementation would probably have been
unnatural (and doesn't match well with libavformat's API anyway). It
would have been nice to make the metadata updates chapter points (makes
a lot of sense for the intended use case, web radio current song
information), but I don't think it would have been a good idea to make
chapters suddenly so dynamic. (Still an idea to keep in mind; the new
code actually makes it easier to work towards this.)

You could mention how subtitles are timed metadata, and actually are
implemented as sparse packet streams in some formats. mp4 implements
chapters as special subtitle stream, AFAIK. (Ironically, this is very
not-ideal for files. It would be useful for streaming like web radio,
but mp4 is extremely bad for streaming by design for other reasons.)

2019-09-19 20:37:05 +02:00
wm4 b9d351f02a Implement backwards playback
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)

(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)

How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.

The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a mess of bugs
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in the backward direction).

Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).

The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.

Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that, due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single one of them to "reverse" them.

E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or if you, as it's implemented now, just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then in the normal timing/renderer code:

    bool before = pts_a < pts_b;

Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.

Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.

As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)

VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.

FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
2019-09-19 20:37:04 +02:00
Tom Yan 95636c65e7 demux/packet: fix demux_packet_shorten
for the rawaudio demuxer to do the expected gapless playback
2018-09-30 12:32:03 +03:00
wm4 e7e06a47a0 demux: support for some kinds of timed metadata
This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)

It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).

This is still somewhat fishy, but in parts due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED, we hope that FFmpeg will restore
the correct metadata when the first packet is returned.

If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically, ICY metadata sucks.

Some complexity is due to a micro-optimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
2018-04-18 01:17:42 +03:00
wm4 6d36fad83c video: make decoder wrapper a filter
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.

One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().

Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.

Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.

I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
2018-01-30 03:10:27 -08:00
wm4 9e1fbffc37 demux_mkv: rewrite packet reading to avoid 1 memcpy()
This directly reads individual mkv sub-packets (block laces) into
dedicated AVBufferRefs, which can be directly used for creating packets
without an additional copy of the packet data. This also means we switch
parsing of block header fields and lacing metadata to read directly from
the stream, instead of a memory buffer.

This could have been much easier if libavcodec didn't require padding
the packet data with zero bytes. We could just have each packet
reference a slice of the block data. But as it is, the only way to get
padding without a copy is to read the laces into individually allocated
(and padded) memory blocks, which required a larger rewrite.
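
The padding requirement means each lace ends up in its own padded,
refcounted buffer, along these lines (a sketch, not the demux_mkv
code):

    #include <string.h>
    #include <libavcodec/avcodec.h>   // AV_INPUT_BUFFER_PADDING_SIZE
    #include <libavutil/buffer.h>

    // Allocate a refcounted buffer with the zero padding libavcodec
    // requires, sized for one mkv lace. The lace data is then read from
    // the stream directly into buf->data.
    static AVBufferRef *alloc_padded_lace(size_t lace_size)
    {
        AVBufferRef *buf =
            av_buffer_alloc(lace_size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!buf)
            return NULL;
        memset(buf->data + lace_size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
        return buf;
    }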

This probably makes recovering from broken mkv files slightly worse if
the transport is unseekable. We just read, and then check if we've
overread. But I think that shouldn't be a real concern.

No actual measurable performance change. Potential for some
regressions, as this is quite intrusive, and touches weird obscure shit
like mkv lacing. Still keeping it because I like how it removes some
redundant EBML parsing functions.
2017-11-05 18:13:34 +01:00
wm4 10d0963d85 demux: improve and optimize cache pruning and seek range determination
The main purpose of this commit is avoiding any hidden O(n^2) algorithms
in the code for pruning the demuxer cache, and for determining the
seekable boundaries of the cache. The old code could loop over the whole
packet queue on every packet pruned in certain corner cases.

There are two ways how to reach the goal:
 1) commit a cardinal sin
 2) do everything incrementally

The cardinal sin is adding an extra field to demux_packet, which caches
the determined seekable range for a keyframe range. demux_packet is a
rather general data structure and thus shouldn't have any fields that
are not inherent to its use, and are only needed as an implementation
detail of code using it. But what are you gonna do, sue me?

In the future, demux.c might have its own packet struct though. Then the
other existing cardinal sin (the "next" field, from MPlayer times) could
be removed as well.

This commit also changes slightly how the seek end is determined. There
is a note on the manpage in case anyone finds the new behavior
confusing. It's somewhat cleaner and might be needed for supporting
multiple ranges (although that's unclear).
2017-11-04 23:18:42 +01:00
wm4 a5b51f75dc demux: get rid of demux_packet.new_segment field
The new_segment field was used to make the decoder data flow handler
aware of timeline boundaries, which are used for ordered chapters etc.
(anything
that sets demuxer_desc.load_timeline). This broke seeking with the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.

Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
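
In other words, instead of a per-packet flag the decoder compares the
segment description attached to the new packet with the one it saw
last, roughly like this (field names are assumptions):

    #include <stdbool.h>

    struct segment_info_sketch {   // assumed, simplified
        double start, end;         // segment boundaries
        const void *codec;         // codec/extradata identity
    };

    // Reinitialize the decoder only if the segment changed by content.
    static bool segment_changed(const struct segment_info_sketch *prev,
                                const struct segment_info_sketch *now)
    {
        if (!prev)
            return true;
        return prev->start != now->start ||
               prev->end   != now->end   ||
               prev->codec != now->codec;
    }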

We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.

Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
2017-10-24 19:35:55 +02:00
wm4 34676fc94d demux: fix crash with cue/ordered chapter files
If a packet uses segmentation, the codec field must be set. Copying the
codec field was forgotten as an oversight, which is why this just
crashes. This showed up only now because demux_copy_packet() was not
used in the main demux path until recently.

Fixes #5027.
2017-10-23 11:31:45 +02:00
wm4 d2af35aeb3 demux/packet: change license to LGPL
All contributors have agreed. In 3a43f13fce, someone who potentially
disagreed reverted a commit by someone else (restoring the original
state). This shouldn't matter for Copyright, and all of the affected
code was rewritten/removed anyway.
2017-04-21 13:34:10 +02:00
wm4 3709ce6718 demux: estimate total packet size, deprecate packet number limits
It's all explained in the DOCS changes. Although this option was always
kind of obscure and pointless. Until it is removed, the only reason for
setting it would be to raise the static default limit, so change its
default to INT_MAX so that it does nothing by default.
2017-04-14 19:19:44 +02:00
wang-bin e285e22143 Use AV_INPUT_BUFFER_PADDING_SIZE instead of deprecated one
Signed-off-by: wm4 <wm4@nowhere>
2017-02-08 16:55:16 +01:00
wm4 9c12d54afa demux_mkv: passthrough BlockAdditions for libvpx alpha
Dumb but simple thing. Requires the FFmpeg libvpx decoder wrapper, as
its native decoder doesn't support alpha.
2017-01-31 14:48:10 +01:00
wm4 b14fac9afa build: replace some FFmpeg API checks with version checks
The FFmpeg versions we support all have the APIs we were checking for.
Only Libav missed them. Simplify this by explicitly checking for FFmpeg
in the code, instead of trying to detect the presence of the API.
2017-01-24 08:11:42 +01:00
wm4 0af5335383 Rewrite ordered chapters and timeline stuff
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.

The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.

The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)

To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
  concatenating files with completely different codecs for the sake
  of EDL (which also uses the timeline infrastructure). A "lighter"
  approach would try to make use of decoder mechanisms to update e.g.
  the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
  normal playback core mechanisms like hr-seek, but now the playback
  core doesn't need to care about these things.

These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)

There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.

Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
2016-02-15 21:04:07 +01:00
wm4 1f2a370a03 demux_mkv: refactor packet parsing
Makes it somewhat more uniform, and breaks up the awfully deep nesting.

This implicitly changes multiple small details, rather than only moving
code around. In particular, this computes the packet fields first and
parses them afterwards, which is needed for the next commit.
2015-02-05 21:52:07 +01:00
wm4 c54f0adacd demux: unbreak build with Libav
....
2014-11-03 22:30:07 +01:00
wm4 4e87ac8231 demux_mkv: implement audio skipping/trimming
This mechanism was introduced for Opus, and allows correct skipping of
"preroll" data, as well as discarding trailing audio if the file's
length isn't a multiple of the audio frame size.

Not sure how to handle seeking. I don't understand the purpose of the
SeekPreRoll element.

This was tested with correctness_trimming_nobeeps.opus, remuxed to mka
with mkvmerge v7.2.0. It seems to be correct, although the reported file
duration is incorrect (maybe a mkvmerge issue).
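
For reference, the trim amounts come out of the container timing:
Matroska expresses both the preroll (CodecDelay) and the trailing
discard (DiscardPadding) in nanoseconds, which decoders want as sample
counts. A sketch of the arithmetic (not the exact demux_mkv code):

    #include <stdint.h>

    // Convert a Matroska nanosecond duration (CodecDelay/DiscardPadding)
    // into a number of audio samples to skip or drop.
    static int64_t ns_to_samples(int64_t duration_ns, int sample_rate)
    {
        return duration_ns * sample_rate / INT64_C(1000000000);
    }
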
2014-11-03 20:20:28 +01:00
wm4 caaeb15318 demux: gracefully handle packet allocation failures
Now the packet allocation functions can fail.
2014-09-16 18:11:00 +02:00
wm4 758f8f7bd4 demux: always use AVPacket
This is a simplification, because it lets us use the AVPacket
functions, instead of handling the details manually.

It also allows the libavcodec rawvideo decoder to use reference
counting, so it doesn't have to memcpy() the full image data. The change
in av_common.c enables this.

This change is somewhat risky, because we rely on the following AVPacket
implementation details and assumptions:
- av_packet_ref() doesn't access the input padding, and just copies the
  data. By the API, AVPacket is always padded, and we violate this. The
  lavc implementation would have to go out of its way to make this a
  real problem, though.
- We hope that the way we make the AVPacket refcountable in av_common.c
  is actually supported API-usage. It's hard to tell whether it is.

Of course we still use our own "old" demux_packet struct, just so that
libav* API usage is somewhat isolated.
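
The refcounting part boils down to standard AVPacket usage: allocate
the payload through lavc so it comes with an AVBufferRef, after which
handing it to a decoder only bumps a reference count. A sketch (using
today's API names, not the 2014-era mpv code):

    #include <libavcodec/avcodec.h>

    // Allocate a packet whose data is refcounted (av_new_packet() sets
    // pkt->buf) and includes the required zero padding.
    static AVPacket *alloc_refcounted_packet(int size)
    {
        AVPacket *pkt = av_packet_alloc();
        if (!pkt)
            return NULL;
        if (av_new_packet(pkt, size) < 0) {
            av_packet_free(&pkt);
            return NULL;
        }
        return pkt;
    }

    // Because pkt->buf is set, this only takes a new reference on the
    // buffer instead of memcpy()ing the payload:
    //     av_packet_ref(&dst, pkt);
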
2014-08-25 00:46:26 +02:00
wm4 acd60736ef Remove stream_pts stuff
This was used by DVD/BD, but its usage was removed with one of the
previous commits.
2014-07-06 19:05:59 +02:00
wm4 a97256c1d5 demux: move packet functions to a separate source file 2014-07-05 17:07:14 +02:00