/*
 * Copyright (C) Aaron Holtzman - Aug 1999
 *
 * Strongly modified, most parts rewritten: A'rpi/ESP-team - 2000-2001
 * (C) MPlayer developers
 *
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with mpv. If not, see <http://www.gnu.org/licenses/>.
 */
#ifndef MPLAYER_VIDEO_OUT_H
#define MPLAYER_VIDEO_OUT_H
#include <inttypes.h>
#include <stdbool.h>
#include "video/img_format.h"
#include "common/common.h"
#include "options/options.h"
// VO needs to redraw
#define VO_EVENT_EXPOSE 1
// VO needs to update state to a new window size
#define VO_EVENT_RESIZE 2
// The ICC profile needs to be reloaded
#define VO_EVENT_ICC_PROFILE_CHANGED 4
// Some other window state changed (position, window state, fps)
#define VO_EVENT_WIN_STATE 8
// The ambient light conditions changed and need to be reloaded
#define VO_EVENT_AMBIENT_LIGHTING_CHANGED 16
// Special mechanism for making resizing with Cocoa react faster
#define VO_EVENT_LIVE_RESIZING 32
// Set of events the player core may be interested in.
#define VO_EVENTS_USER (VO_EVENT_RESIZE | VO_EVENT_WIN_STATE)
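
/* Example (illustrative sketch, not literal mpv code): a window backend
 * reports a resize by setting the event bit, and the player core later
 * collects the accumulated bits. vo_event() and vo_query_and_reset_events()
 * are declared further below in this header; new_w/new_h are placeholders.
 *
 *     // In the backend, after the window size changed:
 *     vo->dwidth = new_w;
 *     vo->dheight = new_h;
 *     vo_event(vo, VO_EVENT_RESIZE);
 *
 *     // In the player core:
 *     int events = vo_query_and_reset_events(vo, VO_EVENTS_USER);
 *     if (events & VO_EVENT_RESIZE)
 *         ; // recompute OSD layout, update window geometry properties, etc.
 */
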
enum mp_voctrl {
    /* signal a device reset seek */
    VOCTRL_RESET = 1,
    /* Handle input and redraw events, called by vo_check_events() */
    VOCTRL_CHECK_EVENTS,
    /* signal a device pause */
    VOCTRL_PAUSE,
    /* start/resume playback */
    VOCTRL_RESUME,

    VOCTRL_GET_PANSCAN,
    VOCTRL_SET_PANSCAN,
    VOCTRL_SET_EQUALIZER,        // struct voctrl_set_equalizer_args*
    VOCTRL_GET_EQUALIZER,        // struct voctrl_get_equalizer_args*
    /* for hardware decoding */
    VOCTRL_GET_HWDEC_INFO,       // struct mp_hwdec_info**
    VOCTRL_LOAD_HWDEC_API,       // private to vo_opengl

    // Redraw the image previously passed to draw_image() (basically, repeat
    // the previous draw_image call). If this is handled, the OSD should also
    // be updated and redrawn. Optional; emulated if not available.
    VOCTRL_REDRAW_FRAME,

    VOCTRL_FULLSCREEN,
    VOCTRL_ONTOP,
    VOCTRL_BORDER,
    VOCTRL_ALL_WORKSPACES,

    VOCTRL_UPDATE_WINDOW_TITLE,  // char*
    VOCTRL_SET_CURSOR_VISIBILITY, // bool*

    VOCTRL_KILL_SCREENSAVER,
    VOCTRL_RESTORE_SCREENSAVER,

    VOCTRL_SET_DEINTERLACE,
    VOCTRL_GET_DEINTERLACE,

    // Return or set window size (not-fullscreen mode only - if fullscreened,
    // these must access the not-fullscreened window size only).
    VOCTRL_GET_UNFS_WINDOW_SIZE, // int[2] (w/h)
    VOCTRL_SET_UNFS_WINDOW_SIZE, // int[2] (w/h)

    VOCTRL_GET_WIN_STATE,        // int* (VO_WIN_STATE_* flags)

    // char *** (NULL terminated array compatible with CONF_TYPE_STRING_LIST)
    // names for displays the window is on
    VOCTRL_GET_DISPLAY_NAMES,

    // Retrieve window contents. (Normal screenshots use vo_get_current_frame().)
    VOCTRL_SCREENSHOT_WIN,       // struct mp_image**

    VOCTRL_SET_COMMAND_LINE,     // char**

    VOCTRL_GET_ICC_PROFILE,      // bstr*
    VOCTRL_GET_AMBIENT_LUX,      // int*
    VOCTRL_GET_DISPLAY_FPS,      // double*
    VOCTRL_GET_RECENT_FLIP_TIME, // int64_t* (using mp_time_us())

    VOCTRL_GET_PREF_DEINT,       // int*
};
// VOCTRL_SET_EQUALIZER
struct voctrl_set_equalizer_args {
    const char *name;
    int value;
};
// VOCTRL_GET_EQUALIZER
struct voctrl_get_equalizer_args {
    const char *name;
    int *valueptr;
};
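
/* Example (hypothetical, for illustration only): a control() implementation
 * dispatching on the request ID and interpreting the data pointer as
 * documented above. set_gamma() and show_cursor() stand in for whatever the
 * real backend would do; returning VO_NOTIMPL tells the caller the request
 * is not supported.
 *
 *     static int control(struct vo *vo, uint32_t request, void *data)
 *     {
 *         switch (request) {
 *         case VOCTRL_SET_EQUALIZER: {
 *             struct voctrl_set_equalizer_args *args = data;
 *             return set_gamma(vo, args->name, args->value) ? VO_TRUE : VO_ERROR;
 *         }
 *         case VOCTRL_GET_EQUALIZER: {
 *             struct voctrl_get_equalizer_args *args = data;
 *             return get_gamma(vo, args->name, args->valueptr) ? VO_TRUE : VO_ERROR;
 *         }
 *         case VOCTRL_SET_CURSOR_VISIBILITY:
 *             show_cursor(vo, *(bool *)data);
 *             return VO_TRUE;
 *         }
 *         return VO_NOTIMPL;
 *     }
 */
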
// VOCTRL_GET_WIN_STATE
#define VO_WIN_STATE_MINIMIZED 1
#define VO_TRUE true
#define VO_FALSE false
#define VO_ERROR -1
#define VO_NOTAVAIL -2
#define VO_NOTIMPL -3
#define VOFLAG_HIDDEN 0x10 //< Use to create a hidden window
#define VOFLAG_GLES 0x20 // Hint to prefer GLES2 if possible
#define VOFLAG_GL_DEBUG 0x40 // Hint to request debug OpenGL context
#define VOFLAG_ALPHA 0x80 // Hint to request alpha framebuffer
// VO does handle mp_image_params.rotate in 90 degree steps
#define VO_CAP_ROTATE90 1
// VO does framedrop itself (vo_vdpau). Untimed/encoding VOs never drop.
#define VO_CAP_FRAMEDROP 2
#define VO_MAX_FUTURE_FRAMES 10
struct vo;
struct osd_state;
struct mp_image;
struct mp_image_params;
struct vo_extra {
    struct input_ctx *input_ctx;
    struct osd_state *osd;
    struct encode_lavc_context *encode_lavc_ctx;
    struct mpv_opengl_cb_context *opengl_cb_context;
};
struct frame_timing {
    // If > 0, realtime when frame should be shown, in mp_time_us() units.
    int64_t pts;

    // Realtime of estimated previous and next vsync events.
    int64_t next_vsync;
    int64_t prev_vsync;

    // The current frame to be drawn. NULL means redraw previous frame
    // (e.g. repeated frames).
    // (Equivalent to the mp_image parameter of draw_image_timed, until the
    // parameter is removed.)
    struct mp_image *frame;

    // List of future images, starting with the next one. This does not
    // care about repeated frames - it simply contains the next real frames.
    // vo_set_queue_params() sets how many frames this should include, though
    // the actual number can be lower.
    // future_frames[0] is the next frame.
    // Note that this has frames only when a new real frame is pushed. Redraw
    // calls or repeated frames do not include this.
    // Ownership of the frames belongs to the caller.
    int num_future_frames;
    struct mp_image **future_frames;
};
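
/* Example (sketch, not taken from a real VO): how a draw_image_timed()
 * implementation might consume the timing data. render_frame() and
 * redraw_last_frame() are placeholders for the driver's rendering path.
 *
 *     static void draw_image_timed(struct vo *vo, struct mp_image *mpi,
 *                                  struct frame_timing *t)
 *     {
 *         // mpi mirrors t->frame (see note above) and is ignored here.
 *         if (t->frame) {
 *             // New frame: render it, possibly using t->prev_vsync /
 *             // t->next_vsync to schedule the upload or presentation.
 *             render_frame(vo, t->frame, t->pts, t->next_vsync);
 *         } else {
 *             // NULL frame: repeat/redraw whatever was drawn last time.
 *             redraw_last_frame(vo);
 *         }
 *     }
 */
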
struct vo_driver {
    // Encoding functionality, which can be invoked via --o only.
    bool encode;

    // VO_CAP_* bits
    int caps;

    // Disable video timing, push frames as quickly as possible, never redraw.
    bool untimed;

    const char *name;
    const char *description;

    /*
     * returns: zero on successful initialization, non-zero on error.
     */
    int (*preinit)(struct vo *vo);

    /*
     * Whether the given image format is supported and config() will succeed.
     * format: one of IMGFMT_*
     * returns: 0 on not supported, otherwise 1
     */
    int (*query_format)(struct vo *vo, int format);

    /*
     * Initialize or reconfigure the display driver.
     * params: video parameters, like pixel format and frame size
     * flags: combination of VOFLAG_ values
     * returns: < 0 on error, >= 0 on success
     */
    int (*reconfig)(struct vo *vo, struct mp_image_params *params, int flags);

    /*
     * Control interface
     */
    int (*control)(struct vo *vo, uint32_t request, void *data);
    /*
     * Render the given frame to the VO's backbuffer. This operation will be
     * followed by a draw_osd and a flip_page[_timed] call.
     * mpi belongs to the VO; the VO must free it eventually.
     *
     * This should also draw the OSD.
     */
    void (*draw_image)(struct vo *vo, struct mp_image *mpi);
    /* Like draw_image, but called before every vsync with timing
     * information.
     */
    void (*draw_image_timed)(struct vo *vo, struct mp_image *mpi,
                             struct frame_timing *t);

    /*
     * Blit/Flip buffer to the screen. Must be called after each frame!
     */
    void (*flip_page)(struct vo *vo);

    /*
     * Timed version of flip_page (optional).
     * pts_us is the frame presentation time, linked to mp_time_us().
     * pts_us is 0 if the frame should be presented immediately.
     * duration is the estimated time in us until the next frame is shown.
     * duration is -1 if it is unknown or unset (also: disables framedrop).
     * If the VO does manual framedropping, VO_CAP_FRAMEDROP should be set.
     * Returns 1 on display, or 0 if the frame was dropped.
     */
    int (*flip_page_timed)(struct vo *vo, int64_t pts_us, int duration);
    /* These optional callbacks can be provided if the GUI framework used by
     * the VO requires entering a message loop for receiving events, does not
     * provide event_fd, and does not call vo_wakeup() from a separate thread
     * when there are new events.
     *
     * wait_events() will wait for new events, until the timeout expires, or the
     * function is interrupted. wakeup() is used to possibly interrupt the
     * event loop (wakeup() itself must be thread-safe, and not call any other
     * VO functions; it's the only vo_driver function with this requirement).
     * wakeup() should behave like a binary semaphore; if wait_events() is not
     * being called while wakeup() is, the next wait_events() call should exit
     * immediately.
     */
    void (*wakeup)(struct vo *vo);
    int (*wait_events)(struct vo *vo, int64_t until_time_us);

    /*
     * Closes driver. Should restore the original state of the system.
     */
    void (*uninit)(struct vo *vo);

    // Size of private struct for automatic allocation (0 doesn't allocate)
    int priv_size;

    // If not NULL, it's copied into the newly allocated private struct.
    const void *priv_defaults;
    // List of options to parse into priv struct (requires priv_size to be set)
    const struct m_option *options;
};
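
/* Example (minimal, hypothetical skeleton of a vo_driver; not a real mpv VO):
 * shows how the callbacks above fit together. The "example" names and the
 * priv struct are invented for illustration; error handling and the actual
 * rendering are reduced to placeholders.
 *
 *     struct priv {
 *         int brightness;
 *     };
 *
 *     static int preinit(struct vo *vo)
 *     {
 *         return 0; // create windows/contexts here; non-zero means failure
 *     }
 *
 *     static int query_format(struct vo *vo, int format)
 *     {
 *         return format == IMGFMT_420P; // accept plain 4:2:0 only
 *     }
 *
 *     static int reconfig(struct vo *vo, struct mp_image_params *params, int flags)
 *     {
 *         return 0; // resize/recreate surfaces for the new parameters
 *     }
 *
 *     static int control(struct vo *vo, uint32_t request, void *data)
 *     {
 *         return VO_NOTIMPL;
 *     }
 *
 *     static void draw_image(struct vo *vo, struct mp_image *mpi)
 *     {
 *         // Render mpi (and the OSD) to the backbuffer. mpi now belongs to
 *         // the VO, so release it when done (mp_image is talloc-managed).
 *         talloc_free(mpi);
 *     }
 *
 *     static void flip_page(struct vo *vo)
 *     {
 *         // present the backbuffer
 *     }
 *
 *     static void uninit(struct vo *vo)
 *     {
 *     }
 *
 *     const struct vo_driver video_out_example = {
 *         .name = "example",
 *         .description = "illustrative dummy output driver",
 *         .preinit = preinit,
 *         .query_format = query_format,
 *         .reconfig = reconfig,
 *         .control = control,
 *         .draw_image = draw_image,
 *         .flip_page = flip_page,
 *         .uninit = uninit,
 *         .priv_size = sizeof(struct priv),
 *     };
 */
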
struct vo {
    const struct vo_driver *driver;
    struct mp_log *log; // Using e.g. "[vo/vdpau]" as prefix
    void *priv;
    struct mpv_global *global;
    struct vo_x11_state *x11;
    struct vo_w32_state *w32;
    struct vo_cocoa_state *cocoa;
    struct vo_wayland_state *wayland;
    struct input_ctx *input_ctx;
    struct osd_state *osd;
    struct encode_lavc_context *encode_lavc_ctx;
    struct vo_internal *in;
    struct mp_vo_opts *opts;
    struct vo_extra extra;
    // --- The following fields are generally only changed during initialization.

    int event_fd;   // check_events() should be called when this has input
    bool probing;

    // --- The following fields are only changed with vo_reconfig(), and can
    //     be accessed unsynchronized (read-only).

    int config_ok;  // Last config call was successful?
    struct mp_image_params *params; // Configured parameters (as in vo_reconfig)

    // --- The following fields can be accessed only by the VO thread, or from
    //     anywhere _if_ the VO thread is suspended (use vo->dispatch).

    bool want_redraw;   // redraw as soon as possible
    // current window state
    int dwidth;
    int dheight;
    float monitor_par;
};
struct mpv_global;
struct vo *init_best_video_out(struct mpv_global *global, struct vo_extra *ex);
int vo_reconfig(struct vo *vo, struct mp_image_params *p, int flags);
int vo_control(struct vo *vo, uint32_t request, void *data);
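
/* Example (caller-side sketch, heavily simplified compared to the real player
 * code): the core fills a vo_extra with the shared contexts it owns, creates
 * the VO, and configures it for a video format. "global", "ictx", "osd" and
 * "params" are assumed to have been set up elsewhere.
 *
 *     struct vo_extra ex = {
 *         .input_ctx = ictx,
 *         .osd = osd,
 *     };
 *     struct vo *vo = init_best_video_out(global, &ex);
 *     if (!vo)
 *         return -1; // no usable VO found
 *     if (vo_reconfig(vo, params, 0) < 0)
 *         vo_destroy(vo); // VO can't display this format
 */
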
bool vo_is_ready_for_frame(struct vo *vo, int64_t next_pts);
void vo_queue_frame(struct vo *vo, struct mp_image **images,
                    int64_t pts_us, int64_t duration);
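
/* Example (sketch of the core-side handoff; simplified): before pushing a
 * frame, the playloop checks whether the VO can accept it for the given
 * presentation time. "frames", "pts" and "dur" are assumed to come from the
 * player's video queue; frames[0] is the image to display, and the VO takes
 * over the images once queued.
 *
 *     if (vo_is_ready_for_frame(vo, pts)) {
 *         vo_queue_frame(vo, frames, pts, dur);
 *         vo_wait_frame(vo); // optionally block until the VO is done with it
 *     }
 */
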
void vo_wait_frame(struct vo *vo);
bool vo_still_displaying(struct vo *vo);
bool vo_has_frame(struct vo *vo);
void vo_redraw(struct vo *vo);
bool vo_want_redraw(struct vo *vo);
void vo_seek_reset(struct vo *vo);
void vo_destroy(struct vo *vo);
void vo_set_paused(struct vo *vo, bool paused);
int64_t vo_get_drop_count(struct vo *vo);
void vo_increment_drop_count(struct vo *vo, int64_t n);
void vo_query_formats(struct vo *vo, uint8_t *list);
void vo_event(struct vo *vo, int event);
int vo_query_and_reset_events(struct vo *vo, int events);
struct mp_image *vo_get_current_frame(struct vo *vo);
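
/* Example (illustration): taking a screenshot of the currently displayed
 * frame from the core side. Assumes the returned image is a new reference
 * owned by the caller (talloc-managed); save_png() is a placeholder.
 *
 *     struct mp_image *shot = vo_get_current_frame(vo);
 *     if (shot) {
 *         save_png(shot, "screenshot.png");
 *         talloc_free(shot);
 *     }
 */
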
void vo_set_queue_params(struct vo *vo, int64_t offset_us, bool vsync_timed,
                         int num_future_frames);
int vo_get_num_future_frames(struct vo *vo);
int64_t vo_get_vsync_interval(struct vo *vo);
double vo_get_display_fps(struct vo *vo);
void vo_wakeup(struct vo *vo);
const char *vo_get_window_title(struct vo *vo);
struct mp_keymap {
    int from;
    int to;
};
int lookup_keymap_table(const struct mp_keymap *map, int key);
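
/* Example (illustration): backends use mp_keymap tables to translate native
 * key codes into mpv key codes. XK_Left and MP_KEY_LEFT stand in for whatever
 * the toolkit and mpv's input headers define; the table is assumed to be
 * terminated with a {0, 0} entry so that unknown keys map to 0.
 *
 *     static const struct mp_keymap keymap[] = {
 *         {XK_Left, MP_KEY_LEFT},
 *         {XK_Right, MP_KEY_RIGHT},
 *         {0, 0},
 *     };
 *
 *     int mpkey = lookup_keymap_table(keymap, sym);
 *     if (mpkey)
 *         mp_input_put_key(vo->input_ctx, mpkey);
 */
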
struct mp_osd_res;
void vo_get_src_dst_rects(struct vo *vo, struct mp_rect *out_src,
                          struct mp_rect *out_dst, struct mp_osd_res *out_osd);
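
/* Example (illustration): a typical resize/reconfig path in a VO calls this
 * to get the cropped source rectangle, the letterboxed destination rectangle,
 * and the OSD dimensions for the current window size. apply_viewport() is a
 * placeholder for driver-specific code.
 *
 *     struct mp_rect src, dst;
 *     struct mp_osd_res osd;
 *     vo_get_src_dst_rects(vo, &src, &dst, &osd);
 *     apply_viewport(vo, &src, &dst); // e.g. set viewport/scissor and scaling
 */
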
#endif /* MPLAYER_VIDEO_OUT_H */