git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@5587 b3059339-0415-0410-9bf9-f77b7e298cf2
arpi 2002-04-13 02:09:18 +00:00
parent 3962480f56
commit ae80a63c97
6 changed files with 185 additions and 114 deletions


@@ -21,6 +1,7 @@ FOR THE RELEASE:
demuxer:
- fix AVI index offset base position handling ('no video stream found' bug)
- implement OpenDML index support (read & write)
- implement hardcore bruteforce avi re-sync for broken files (-forceidx)
- fix for growing avi files (movi_end pos > stream->end_pos)
- implement forward seeking in avi streams with no index
@@ -30,7 +31,6 @@ demuxer:
- fix the whole syncing mechanism of Real demuxer
- implement mpeg-TS demuxer
FUTURE:
~~~~~~~


@@ -47,7 +47,6 @@ videocodec indeo5ds
out YV12
out YUY2
out BGR32,BGR24,BGR16,BGR15
cpuflags mmx
This is a particularly full-featured video codec. The "videocodec" keyword
identifies the fact that this is the start of a new video
@@ -82,11 +81,6 @@ to output. Just like the fourcc line, there can be multiple out lines or
multiple comma-separated output formats on the same line. The output
formats should be listed in order of preference.
The "cpuflags" identifies special operating parameters that this codec
requires. For example, this video codec is known to use MMX
instructions. Currently, valid strings for this keyword include mmx, sse,
and 3dnow.
Audio Codecs
------------
Here is an example of a rather full-featured audio codec block:
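The block itself falls outside this hunk; in the same syntax, a rough sketch of such an entry (names and values below are illustrative only, not copied from codecs.conf) would look like:

  audiocodec mp3
    info "MPEG audio layer-2, layer-3"
    status working
    format 0x50
    format 0x55
    driver mp3lib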


@@ -1,9 +1,27 @@
Huh. The planar YUV modes.
==========================
In general
==========
The most misunderstood thingie...
There are planar and packed modes.
- Planar mode means: you have 3 separate images, one for each component,
each image 8 bits/pixel. To get the real colored pixel, you have to
mix the components from all planes. The resolution of the planes may differ!
- Packed mode means: you have all components mixed/interleaved together,
so you have small "packs" of components in a single, big image.
Let's see: (some cut'n'paste from www and maillist)
There are RGB and YUV colorspaces.
- RGB: Red, Green and Blue components. Used by analog VGA monitors.
- YUV: Luminance (Y) and Chrominance (U,V) components. Used by some
video systems, like PAL. Also most m(j)peg/dct based codecs use this.
With YUV, the resolution of the U,V planes is usually reduced:
The most common YUV formats:
fourcc:      bpp:  IEEE:       plane sizes: (w=width h=height of original image)
    ?         24   YUV 4:4:4   Y: w * h     U,V: w * h
YUY2,UYVY     16   YUV 4:2:2   Y: w * h     U,V: (w/2) * h
YV12,I420     12   YUV 4:2:0   Y: w * h     U,V: (w/2) * (h/2)
YVU9           9   YUV 4:1:1   Y: w * h     U,V: (w/4) * (h/4)
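As a quick check of the table, here is a small C sketch (not MPlayer code) that computes the plane sizes and the resulting bpp of a planar image; hshift/vshift are the chroma subsampling shifts (1,1 for YV12/I420, 2,2 for YVU9):

  #include <stdio.h>

  /* Plane sizes of a planar YUV image, one byte per sample. */
  static void plane_sizes(int w, int h, int hshift, int vshift)
  {
      int y_size = w * h;                         /* Y: w * h        */
      int c_size = (w >> hshift) * (h >> vshift); /* U,V: subsampled */
      printf("Y: %d  U,V: %d each  total: %d (%.1f bpp)\n",
             y_size, c_size, y_size + 2 * c_size,
             8.0 * (y_size + 2 * c_size) / (w * h));
  }

  int main(void)
  {
      plane_sizes(720, 576, 1, 1);   /* YV12/I420 -> 12 bpp */
      plane_sizes(720, 576, 2, 2);   /* YVU9      ->  9 bpp */
      return 0;
  }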
conversion: (some cut'n'paste from www and maillist)
RGB to YUV Conversion:
Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
@@ -35,10 +53,14 @@ Y = luminance, the weighted average of R G B components. (0=black 255=white)
U = Cb = blue component (0=green 128=grey 255=blue)
V = Cr = red component (0=green 128=grey 255=red)
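The U and V formulas fall between the two hunks above; assuming the usual ITU-R BT.601 studio-swing coefficients (which match the Y line shown), the whole conversion can be sketched in C like this:

  #include <stdio.h>

  static unsigned char clamp255(double x)
  {
      return x < 0 ? 0 : x > 255 ? 255 : (unsigned char)(x + 0.5);
  }

  /* RGB -> YUV; the U/V coefficients are assumed (BT.601), not taken
     from this document. */
  static void rgb_to_yuv(int r, int g, int b,
                         unsigned char *y, unsigned char *u, unsigned char *v)
  {
      *y = clamp255( 0.257 * r + 0.504 * g + 0.098 * b +  16);
      *u = clamp255(-0.148 * r - 0.291 * g + 0.439 * b + 128);
      *v = clamp255( 0.439 * r - 0.368 * g - 0.071 * b + 128);
  }

  int main(void)
  {
      unsigned char y, u, v;
      rgb_to_yuv(255, 0, 0, &y, &u, &v);   /* pure red */
      printf("R=255 G=0 B=0  ->  Y=%d U=%d V=%d\n", y, u, v);
      return 0;
  }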
MPlayer side:
=============
Huh. The planar YUV modes.
==========================
The most misunderstood thingie...
In MPlayer, we usually have 3 pointers to the Y, U and V planes, so it
doesn't matter what is they order in memory:
doesn't matter what is the order of the planes in the memory:
for mp_image_t and libvo's draw_slice():
planes[0] = Y = luminance
planes[1] = U = Cb = blue


@@ -2,13 +2,18 @@ So, I'll describe how this stuff works.
The main modules:
1. streamer.c: this is the input layer, this reads the file or the VCD or
stdin. what it has to know: appropriate buffering by sector, seek, skip
functions, reading by bytes, or blocks with any size. The stream_t
structure describes the input stream, file/device.
1. stream.c: this is the input layer, this reads the input media (file, stdin,
vcd, dvd, network etc). what it has to know: appropriate buffering by
sector, seek, skip functions, reading by bytes, or blocks with any size.
The stream_t (stream.h) structure describes the input stream, file/device.
2. demuxer.c: this does the demultiplexing of the input to audio and video
channels, and their reading by buffered packages.
There is a stream cache layer (cache2.c); it's a wrapper for the stream
API. It does fork(), then emulates the stream driver in the parent process
and the stream user in the child process, while proxying between them
through a preallocated big memory chunk used as a FIFO buffer.
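A very reduced sketch of that idea (not the real cache2.c; names and sizes are illustrative, and real code needs wraparound, EOF handling and proper synchronisation):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <sys/wait.h>

  typedef struct {
      volatile size_t write_pos;      /* advanced by the "driver" (parent) */
      volatile size_t read_pos;       /* advanced by the "user" (child)    */
      unsigned char buffer[4096];     /* the preallocated FIFO chunk       */
  } shared_fifo_t;

  int main(void)
  {
      shared_fifo_t *c = mmap(NULL, sizeof(*c), PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      c->write_pos = c->read_pos = 0;

      if (fork() == 0) {                        /* child: the stream user */
          while (c->read_pos == c->write_pos)
              usleep(1000);                     /* wait for data          */
          size_t n = c->write_pos - c->read_pos;
          printf("user read %zu bytes: %.*s\n", n, (int)n,
                 (const char *)c->buffer + c->read_pos);
          c->read_pos += n;
          _exit(0);
      }
      memcpy(c->buffer + c->write_pos, "hello", 5);  /* parent: the driver */
      c->write_pos += 5;
      wait(NULL);
      return 0;
  }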
2. demuxer.c: this does the demultiplexing (separating) of the input to
audio, video or dvdsub channels, and their reading by buffered packages.
The demuxer.c is basically a framework, which is the same for all the
input formats, and there are parsers for each of them (mpeg-es,
mpeg-ps, avi, avi-ni, asf), these are in the demux_*.c files.
@@ -16,11 +21,11 @@ The main modules:
2.a. demux_packet_t, that is DP.
Contains one chunk (avi) or packet (asf,mpg). They are stored in memory as
in chained list, cause of their different size.
in linked list, cause of their different size.
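Simplified, a DP node carries its own buffer plus a next pointer (the real demux_packet_t in demuxer.h has a few more fields):

  typedef struct demux_packet_s {
      int len;                       /* size of the chunk/packet in bytes */
      float pts;                     /* presentation timestamp in seconds */
      unsigned char *buffer;         /* the packet data itself            */
      struct demux_packet_s *next;   /* next packet of the same stream    */
  } demux_packet_t;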
2.b. demuxer stream, that is DS.
Struct: demux_stream_t
Every channel (a/v) has one. This contains the packets for the stream
Every channel (a/v/s) has one. This contains the packets for the stream
(see 2.a). For now, there can be 3 for each demuxer :
- audio (d_audio)
- video (d_video)
@@ -68,7 +73,24 @@ The main modules:
DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
error shows up.
So everything is ok 'till now, I want to move them to a separate lib.
2.d. video.c: this file/function handles the reading and assembling of the
video frames. Each call to video_read_frame() should read and return a
single video frame, and its duration in seconds (float).
The implementation is split into 2 big parts - reading from mpeg-like
streams and reading from one-frame-per-chunk files (avi, asf, mov).
Then it calculates the duration, either from a fixed FPS value, or from
the difference of the PTS values before and after reading the frame.
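In other words (a sketch of the logic only, with made-up variable names, not the real video.c code):

  /* Duration of one video frame: either from a fixed FPS value, or from
     the demuxer PTS sampled before and after reading the frame. */
  static float frame_duration(int has_fixed_fps, float fps,
                              float pts_before_read, float pts_after_read)
  {
      if (has_fixed_fps && fps > 0.0f)
          return 1.0f / fps;                     /* one-frame-per-chunk files */
      return pts_after_read - pts_before_read;   /* mpeg-like streams         */
  }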
2.e. other utility functions: there is some useful code there, like the
AVI muxer or the mp3 header parser, but let's leave them for now.
So everything is ok 'till now. It can be found in libmpdemux/ library.
It should compile outside of the mplayer tree; you just have to implement a
few simple functions, like mp_msg() to print messages, etc.
See libmpdemux/test.c for example.
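For instance, a trivial mp_msg() stub for such a standalone build can just forward to stderr (the prototype below is an assumption for illustration; check mp_msg.h for the real one):

  #include <stdarg.h>
  #include <stdio.h>

  void mp_msg(int module, int level, const char *format, ...)
  {
      va_list ap;
      (void)module; (void)level;     /* no filtering in the stub */
      va_start(ap, format);
      vfprintf(stderr, format, ap);
      va_end(ap);
  }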
See also formats.txt, for description of common media file formats and their
implementation details in libmpdemux.
Now, go on:
@@ -86,7 +108,7 @@ Now, go on:
sleep (wait until a_frame>=v_frame)
display the frame
apply A-V PTS correction to a_frame
check for keys -> pause,seek,...
handle events (keys,lirc etc) -> pause,seek,...
}
When playing (a/v), it increases the variables by the duration of the
@@ -173,101 +195,25 @@ Now, go on:
Also, it works badly or not at all with some drivers.
Only used if none of the above works.
4. Codecs. They are separate libs.
For example libac3, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib.
4. Codecs. Consists of libmpcodecs/* and separate files or libs,
for example liba52, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib.
mplayer.c doesn't call the directly, but through the dec_audio.c and
mplayer.c doesn't call them directly, but through the dec_audio.c and
dec_video.c files, so the mplayer.c doesn't have to know anything about
the codec.
the codecs.
libmpcodecs contains a wrapper for every codec; some of them include the
codec implementation itself, some call functions from other files
included with mplayer, and some call optional external libraries.
file naming convention in libmpcodecs:
ad_*.c - audio decoder (called through dec_audio.c)
vd_*.c - video decoder (called through dec_video.c)
ve_*.c - video encoder (used by mencoder)
vf_*.c - video filter (see option -vop)
5. libvo: this displays the frame.
The constants for different pixelformats are defined in img_format.h,
their usage is mandatory.
Each vo driver _has_ to implement these:
IMPORTANT: it's mandatory that every vo driver support the YV12 format,
and one (or both) of BGR15 and BGR24, with conversion, if needed.
If these aren't supported, not every codec will work! The mpeg codecs
can output only YV12, and the older win32 DLLs only 15 and 24bpp.
There is a fast MMX-optimized 15->16bpp converter, so it's not a
significant speed decrease!
The BPP table, if the driver can't change bpp:
    current bpp    has to accept these
    15             15
    16             15,16
    24             24
    24,32          24,32
If it can change bpp (for example DGA 2, fbdev, svgalib), then we have to
change to the desired bpp if possible. If the hardware doesn't support it,
we have to change to the closest one and do conversion!
preinit():
init the video system (to support querying for supported formats)
THIS IS CALLED ONLY ONCE
control():
Current controls:
VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported.
return value: flags:
0x1 - supported
0x2 - supported without conversion (define 0x1 too!)
0x4 - sub/osd supported (has draw_alpha)
0x8 - hardware handles subpics
0x100 - driver/hardware handles timing (blocking)
VOCTRL_GET_IMAGE
libmpcodecs Direct Rendering interface
You need to set mpi (mp_image.h) structure, for example,
look at vo_x11, vo_sdl, vo_xv or mga_common.
VOCTRL_RESET - reset the video device
This is sent on seeking and similar occasions, and is useful if you
are using a device which prebuffers frames and needs to flush them
before refilling the audio/video buffers.
VOCTRL_PAUSE
VOCTRL_RESUME
VOCTRL_GUISUPPORT
return true only if driver supports co-operation with
MPlayer's GUI (not yet used by GUI)
VOCTRL_QUERY_VAA - used by the vidix extension to fill a vo_vaa_t struct,
I do not know how this works since I'm not the author of this
config():
Set up the video system. You get the dimensions and flags.
Flags:
0x01 - fullscreen (-fs)
0x02 - mode switching (-vm)
0x04 - software scaling (-zoom)
0x08 - flipping (-flip) -- REQUIRED to support this
You can also get these flags from vo_flags; they're
defined as VOFLAG_* (see libvo/video_out.h)
uninit():
Uninit the whole system, this is on the same "level" as preinit.
draw_slice(): this displays YV12 pictures (3 planes: one full-sized that
contains the brightness (Y), and 2 quarter-sized ones that contain the
colour info (U,V)). MPEG codecs (libmpeg2, opendivx) use this. This doesn't
have to display the whole frame, only update small parts of it.
draw_frame(): this is the older interface, this displays only complete
frames, and can do only packed format (YUY2, RGB/BGR).
Win32 codecs use this (DivX, Indeo, etc).
draw_alpha(): this displays subtitles and OSD.
It's a bit tricky to use, since it's not part of the libvo API,
but callback-style stuff. The flip_page() has to call
vo_draw_text(), so that it passes the size of the screen and the
corresponding draw_alpha() implementation for the pixelformat
(function pointer). The vo_draw_text() checks the characters to draw,
and calls draw_alpha() for each.
As a help, osd.c contains a draw_alpha for each pixelformat; use this
if possible!
flip_page(): this is called after each frame, this displays the buffer for
real. This is 'swapbuffers' when double-buffering.
for details on this, read libvo.txt
6. libao2: this controls audio playing

DOCS/tech/libvo.txt (new file, 100 lines)

@@ -0,0 +1,100 @@
libvo --- the library to handle video output by A'rpi, 2002.04
============================================
Note: before starting on this, read colorspaces.txt !
The constants for different pixelformats are defined in img_format.h,
their usage is mandatory.
Each vo driver _has_ to implement these:
preinit():
init the video system (to support querying for supported formats)
uninit():
Uninit the whole system, this is on the same "level" as preinit.
control():
Current controls:
VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported.
It also returns various flags describing the capabilities
of the driver with the given mode; for the flags, see
vfcaps.h! (A sketch of such a handler follows the control() list below.)
the most important flags, every driver must properly report
these:
0x1 - supported (with or without conversion)
0x2 - supported without conversion (define 0x1 too!)
0x100 - driver/hardware handles timing (blocking)
also SET sw/hw scaling and osd support flags, and flip,
and accept_stride if you implement put_image (see vfcaps.h)
NOTE: VOCTRL_QUERY_FORMAT may be called _before_ first config()
but is always called between preinit() and uninit()
VOCTRL_GET_IMAGE
libmpcodecs Direct Rendering interface
You need to update mpi (mp_image.h) structure, for example,
look at vo_x11, vo_sdl, vo_xv or mga_common.
VOCTRL_PUT_IMAGE
replacement for the current draw_slice/draw_frame way of
passing video frames. By implementing it, you'll get the
image in an mp_image struct instead of via draw_* calls.
Unless you return VO_TRUE for the VOCTRL_PUT_IMAGE call, the
old-style draw_* functions will be called!
Note: draw_slice is still mandatory, for per-slice rendering!
VOCTRL_RESET - reset the video device
This is sent on seeking and similar occasions, and is useful if you
are using a device which prebuffers frames and needs to flush them
before refilling the audio/video buffers.
VOCTRL_PAUSE
VOCTRL_RESUME
VOCTRL_GUISUPPORT
return true only if driver supports co-operation with
MPlayer's GUI (not yet used by GUI)
VOCTRL_QUERY_VAA - used by the vidix extension to fill a vo_vaa_t struct,
I do not know how this works since I'm not the author of this
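As referenced at VOCTRL_QUERY_FORMAT above, a minimal control() handler might be sketched like this (the flag values are the numeric ones listed above; signatures and constants are assumed to match video_out.h / img_format.h of this era):

  #include <stdint.h>
  #include "video_out.h"     /* VOCTRL_*, VO_NOTIMPL           */
  #include "img_format.h"    /* IMGFMT_YV12, IMGFMT_BGR15, ... */

  static uint32_t control(uint32_t request, void *data, ...)
  {
      switch (request) {
      case VOCTRL_QUERY_FORMAT: {
          uint32_t format = *((uint32_t *)data);
          if (format == IMGFMT_YV12)
              return 0x1 | 0x2 | 0x4;  /* supported, no conversion, has OSD  */
          if (format == IMGFMT_BGR15 || format == IMGFMT_BGR24)
              return 0x1;              /* supported, but only via conversion */
          return 0;                    /* everything else: not supported     */
      }
      }
      return VO_NOTIMPL;               /* unhandled controls */
  }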
config():
Set up the video system. You get the dimensions and flags.
width, height: size of the source image
d_width, d_height: wanted scaled/display size (it's a hint)
Flags:
0x01 - force fullscreen (-fs)
0x02 - allow mode switching (-vm)
0x04 - allow software scaling (-zoom)
0x08 - flipping (-flip)
They're defined as VOFLAG_* (see libvo/video_out.h)
IMPORTANT NOTE: config() may be called 0 (zero), 1 or more (2,3...)
times between the preinit() and uninit() calls. You MUST handle this, and
you shouldn't crash at a second config() call or at uninit() without
any config() call! To make your life easier, vo_config_count is
set to the number of previous config() calls, counted from preinit().
It's set by the caller (vf_vo.c), you don't have to increase it!
So, you can check for vo_config_count>0 in uninit() when freeing
resources allocated in config(), to avoid a crash!
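For example (a sketch; the framebuffer pointer is just a stand-in for whatever your config() really allocates):

  #include <stdlib.h>

  extern int vo_config_count;       /* maintained by the caller (vf_vo.c)  */

  static unsigned char *framebuf;   /* hypothetical resource from config() */

  static void uninit(void)
  {
      if (vo_config_count > 0) {    /* config() ran at least once */
          free(framebuf);
          framebuf = NULL;
      }
      /* with vo_config_count == 0 there is nothing to free */
  }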
draw_slice(): this displays YV12 pictures (3 planes: one full-sized that
contains the brightness (Y), and 2 quarter-sized ones that contain the
colour info (U,V)). MPEG codecs (libmpeg2, opendivx) use this. This doesn't
have to display the whole frame, only update small parts of it.
draw_frame(): this is the older interface, this displays only complete
frames, and can do only packed format (YUY2, RGB/BGR).
Win32 codecs use this (DivX, Indeo, etc).
If you implement VOCTRL_PUT_IMAGE, you can leave draw_frame unimplemented.
draw_osd(): this displays subtitles and OSD.
It's a bit tricky to use, since it's callback-style stuff.
It should call vo_draw_text() with the screen dimensions and your
draw_alpha implementation for the pixelformat (function pointer).
The vo_draw_text() checks the characters to draw, and calls
draw_alpha() for each. As a help, osd.c contains a draw_alpha for
each pixelformat; use this if possible!
NOTE: this one will be obsolete soon! But it's still useful when
you want to do tricks, like rendering the osd _after_ hardware scaling
(tdfxfb) or rendering subtitles below the image (vo_mpegpes, sdl)
flip_page(): this is called after each frame, this displays the buffer for
real. This is 'swapbuffers' when double-buffering.


@@ -1,3 +1,12 @@
============================================================
NOTE: the libvo2 plan was abandoned; we've changed libvo1 instead.
so, this draft is USELESS NOW, see libvo.txt
============================================================
//First Announce by Ivan Kalvachev
//Some explanations by Arpi & Pontscho