mirror of https://github.com/mpv-player/mpv
Commit Graph

163 Commits

Author SHA1 Message Date
wm4
8751a0e261 video: decouple internal pixel formats from FourCCs
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined as the
string 'YV12' interpreted as an unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endianness encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endianness, and component bit depth encoded. (This has been
removed in recent commits.)
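
As a rough illustration of the old scheme (the macro name below is made
up; the old IDs themselves now live in img_fourcc.h, as described
below): a FourCC is just four ASCII characters packed into an unsigned
int, so 'YV12' doubled as both a string tag and a numeric format ID.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only: pack four characters into a 32-bit FourCC, the
     * way the old IMGFMT_YV12 constant was effectively defined. */
    #define MAKE_FOURCC(a, b, c, d) \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

    int main(void)
    {
        printf("IMGFMT_YV12 as a FourCC: 0x%08x\n",
               MAKE_FOURCC('Y', 'V', '1', '2'));
        return 0;
    }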

Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names with more
intuitive ones, most importantly IMGFMT_YV12 -> IMGFMT_420P.

Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.cfg entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)

The TV code has not been tested.

Some corrective changes regarding endianness and other image format
flags creep in.
2013-01-13 20:04:11 +01:00
wm4
0c5311f17c video: cleanup: replace old mp_image function names
mp_image_alloc() also changes the argument order compared to
alloc_mpi(): the format now comes first, then width/height.
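
A minimal sketch of a call site after the rename, assuming the header
path from the earlier directory reorganization and the argument order
stated above (the exact prototypes may differ):

    #include "video/mp_image.h"   /* assumed header path */

    /* Hypothetical allocation of a 1280x720 4:2:0 frame. Old call:
     *   alloc_mpi(w, h, fmt)
     * New call (format first, then width/height):
     *   mp_image_alloc(fmt, w, h) */
    static struct mp_image *example_alloc_frame(void)
    {
        return mp_image_alloc(IMGFMT_420P, 1280, 720);
    }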
2013-01-13 20:04:11 +01:00
wm4
b7cacf9165 video/out: replace VFCAP_TIMER with vo->untimed, fix vo_image and vo_lavc
VFCAP_TIMER disables any additional waiting done by mpv in the
playloop. Remove VFCAP_TIMER, but re-use the idea for vo_image and
vo_lavc.

This means --untimed doesn't have to be passed when using --vo=image.
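
A minimal sketch of how a VO might mark itself as untimed (the field
name comes from the commit subject; the preinit signature and placement
are assumptions):

    /* Hypothetical preinit of an untimed VO such as vo_image: instead of
     * advertising VFCAP_TIMER, the VO simply declares itself untimed, and
     * the playloop skips its extra waiting. */
    static int preinit(struct vo *vo, const char *arg)
    {
        vo->untimed = true;
        return 0;
    }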
2013-01-13 20:04:10 +01:00
wm4
c54fc507da video/filter: change filter API, use refcounting, remove filter DR
Change the entire filter API to use reference counted images instead
of vf_get_image().

Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).

Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check whether
the image is writeable (with mp_image_is_writeable()), and do true
in-place filtering if that's the case. This affects filters like
vf_gradfun and vf_sub.

Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
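
A rough sketch of the writeability check described above;
mp_image_is_writeable() is named in this commit, while the helper
functions and the filter glue here are placeholders:

    /* Hypothetical filter step: write in place only if the reference is
     * not shared with someone else, otherwise switch to a private copy. */
    static struct mp_image *filter_frame(struct vf_instance *vf,
                                         struct mp_image *img)
    {
        if (!mp_image_is_writeable(img)) {
            struct mp_image *copy = duplicate_image(img); /* placeholder */
            talloc_free(img);   /* drop the shared reference */
            img = copy;
        }
        apply_effect_in_place(img);   /* placeholder for the actual filter */
        return img;
    }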
2013-01-13 20:04:10 +01:00
wm4
2d8fb838d7 video: make vdpau hardware decoding not use DR code path
vdpau hardware decoding used the DR (direct rendering) path to let the
decoder query a surface from the VO. Special-case the HW decoding path
instead, to make it separate from DR.
2013-01-13 17:39:32 +01:00
wm4
23ab098969 video: remove slice based filtering and video output
Slices allowed filtering or drawing video in horizontal bands or
blocks, i.e. working on the video in smaller units. In theory, this
could bring a performance win by lowering cache pressure, as you didn't
have to keep the whole video frame in cache while filtering, only the
slice.

In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
  slice callback was disabled, because it can be called from another
  thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
  slices, so slices were rarely used.
- Most filters didn't actually support slices.

On the other hand, supporting slices led to code duplication and more
complex code in general. I ran some experiments and didn't find any
actual measurable performance improvement when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.

The most broken thing about the slice code path is that slices can't
be queued, as is done for images in vo.c.
2013-01-13 17:39:31 +01:00
wm4
cfa1f9e082 video: make vdpau hardware decoding not use slices code path
For some reason, libavcodec abuses the slice rendering code path for
hardware decoding: in that case, the only purpose of the draw callback
is to pass a vdpau video surface object to the video output. (It is
unclear to me why this had to use the slice code instead of just
returning an AVFrame with the required vdpau state.)

Make this code separate within mpv, so that the internal slice code
path is not used for hardware decoding. Pass the vdpau state with
VOCTRL_HWDEC_DECODER_RENDER instead.

Remove the mencoder-specific VOCTRLs.
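
Very loosely, the VO side of this might look like the sketch below; only
the VOCTRL name comes from this commit, and the payload handling and
helper function are invented for illustration:

    /* Hypothetical vo_vdpau control handler: the decoder hands its vdpau
     * rendering state to the VO through a VOCTRL instead of the slice
     * callback. */
    static int control(struct vo *vo, uint32_t request, void *data)
    {
        switch (request) {
        case VOCTRL_HWDEC_DECODER_RENDER:
            /* 'data' would carry the vdpau surface/render state; its
             * exact type is not shown here. */
            return render_hwdec_frame(vo, data) ? VO_TRUE : VO_FALSE;
        }
        return VO_NOTIMPL;
    }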
2013-01-13 17:39:31 +01:00
wm4
58d3469fd6 video/out: replace VOCTRL_QUERY_FORMAT with vo_driver.query_format
2013-01-13 17:39:31 +01:00
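
For the one-line entry above, a hedged sketch of the new hookup: the VO
answers format queries through a function pointer in vo_driver instead
of a VOCTRL (the exact signature and return convention are assumed, and
video_out_example is a made-up driver name):

    static int query_format(uint32_t format)
    {
        /* Report support for plain 4:2:0 only; a real VO would
         * typically return capability flags here. */
        return format == IMGFMT_420P;
    }

    const struct vo_driver video_out_example = {
        /* ... */
        .query_format = query_format,
        /* ... */
    };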
wm4
191bcbd1f2 video/out: make draw_image mandatory, remove VOCTRL_DRAW_IMAGE
Remove VOCTRL_DRAW_IMAGE and always set vo_driver.draw_image in VOs.
Make draw_image mandatory: change some VOs (like vo_x11) to support it,
and remove the image-to-slices fallback in vf_vo.

Remove vo_driver.is_new. This member indicated whether draw_image is
supported unconditionally, which is now always the case.

draw_image_pts is a hack until the video filter chain is changed to
include the PTS as a field in mp_image. Then vo_vdpau and vo_lavc will
be changed to use draw_image.
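
A minimal sketch of wiring up the now-mandatory callback (the signature
is guessed from the field name; video_out_example is again a made-up
driver):

    /* Hypothetical VO: receive a complete frame and display it. */
    static void draw_image(struct vo *vo, struct mp_image *mpi)
    {
        /* copy or upload the frame for presentation; details omitted */
    }

    const struct vo_driver video_out_example = {
        /* ... */
        .draw_image = draw_image,
        /* ... */
    };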
2013-01-13 17:39:31 +01:00
Rudolf Polzer
7d0a20954f core: make WAKEUP_PERIOD overridable by the vo
This is better than letting just the operating system type decide the
wakeup period; e.g. when compiling for Win32/cygwin, a wakeup period
of 0.5 would work perfectly fine.

Instead, the default wakeup period is now decided only by the
availability of a working select() system call (which is the case on
cygwin, but not on mingw and MSVC) AND a VO that can provide an event
file descriptor or a similar hack (vo_corevideo). VOs that can do
neither need polling for event handling and can now set the wakeup
period to 0.02 in the VO code.
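
A sketch of what the override might look like in a polling VO; the
wakeup_period field name is an assumption, and only the 0.02 value
comes from the text above:

    /* Hypothetical preinit of a VO that has no event file descriptor and
     * therefore needs frequent polling. */
    static int preinit(struct vo *vo, const char *arg)
    {
        vo->wakeup_period = 0.02;   /* poll roughly every 20 ms */
        return 0;
    }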
2012-12-19 12:58:52 +01:00
wm4
53ee9aa6ae options, vo_x11: remove -zoom option, make it default
The -zoom option enabled scaling with vo_x11. Remove the -zoom option,
and make its behavior the default. Since vo_x11 has to use libswscale
for colorspace conversion anyway, and since libswscale doesn't do any
actual extra scaling when vo_x11 runs in windowed mode, there should be
no speed difference with this change.

The code removed from vf_scale attempted to scale the video to d_width/
d_height, which matters for anamorphic video and the --xy option only.
vo_x11 can handle these natively. The only case for which the removed
vf_scale code could matter is encoding with vo_lavc, but since that
didn't set VOFLAG_SWSCALE, nothing actually changes.
2012-11-16 21:21:14 +01:00
wm4
4873b32c59 Rename directories, move files (step 2 of 2)
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.

The two commits are separate, because git is bad at tracking renames
and content changes at the same time.

Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
2012-11-12 20:08:18 +01:00
wm4
d4bdd0473d Rename directories, move files (step 1 of 2) (does not compile)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.

Renames the following directories:
    libaf -> audio/filter
    libao2 -> audio/out
    libvo -> video/out
    libmpdemux -> demux

Split libmpcodecs:
    vf* -> video/filter
    vd*, dec_video.* -> video/decode
    mp_image*, img_format*, ... -> video/
    ad*, dec_audio.* -> audio/decode

libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.

Move most top-level .c/.h files to core. (talloc.c/.h is left at the
top level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.

sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).

Maybe the organization of core is not ideal: it mixes the playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core can be moved somewhere else, while core keeps
all helper and common code.
2012-11-12 20:06:14 +01:00