lavu/frame: deprecate AVFrame.pkt_{pos,size}

These fields are supposed to store information about the packet the
frame was decoded from, specifically the byte offset it was stored at
and its size.

However,
- the fields are highly ad hoc - there is no strong reason why these
  particular packet properties (and not any others) should have dedicated
  fields in AVFrame; unlike e.g. the timestamps, there is no fundamental
  link between coded packet offset/size and decoded frames
- they only make sense for frames produced by decoding demuxed packets,
  and even then the encoded data is not always stored in the file as a
  contiguous sequence of bytes, which pos requires in order to be
  well-defined
- pkt_pos was added without much explanation, apparently to allow
  passthrough of this information through lavfi in order to handle byte
  seeking in ffplay. That is now implemented using arbitrary user data
  passthrough in AVFrame.opaque_ref (sketched below).
- several filters expose pkt_pos as a variable available to user-supplied
  expressions, but there seems to be no established motivation for using it.
- pkt_size was added for use in ffprobe, but that too is now handled
  without using this field. Additionally, the values of this field
  produced by libavcodec are flawed, as described in the previous
  ffprobe conversion commit.

In summary - these fields are ill-defined and insufficiently motivated,
so deprecate them.
Author: Anton Khirnov, 2023-03-10 10:48:34 +01:00
commit 27f8c9b27b (parent 2fb3ee1787)
28 changed files with 291 additions and 60 deletions
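
Before the per-file diffs, a minimal sketch of the replacement mechanism the
deprecation notes point to: opaque passthrough via AV_CODEC_FLAG_COPY_OPAQUE
and AVFrame.opaque_ref, broadly the approach ffplay now uses for byte seeking.
The PktInfo struct and the decode_one() helper are illustrative names, not
part of this commit, and error handling is abbreviated.

/* Sketch only: carry a packet's byte offset and size to the decoded
 * frame without the deprecated AVFrame.pkt_pos/pkt_size fields. */
#include <inttypes.h>
#include <libavcodec/avcodec.h>
#include <libavutil/buffer.h>

typedef struct PktInfo {
    int64_t pos;   /* byte offset the packet was demuxed from (pkt->pos) */
    int     size;  /* packet size in bytes (pkt->size) */
} PktInfo;

/* dec must have been opened with dec->flags |= AV_CODEC_FLAG_COPY_OPAQUE,
 * which tells the decoder to propagate pkt->opaque_ref to the matching
 * output frame. */
static int decode_one(AVCodecContext *dec, AVPacket *pkt, AVFrame *frame)
{
    AVBufferRef *buf = av_buffer_alloc(sizeof(PktInfo));
    if (!buf)
        return AVERROR(ENOMEM);
    ((PktInfo *)buf->data)->pos  = pkt->pos;
    ((PktInfo *)buf->data)->size = pkt->size;
    pkt->opaque_ref = buf;              /* now owned by the packet */

    int ret = avcodec_send_packet(dec, pkt);
    if (ret < 0)
        return ret;
    ret = avcodec_receive_frame(dec, frame);
    if (ret < 0)
        return ret;                     /* may be AVERROR(EAGAIN) */

    if (frame->opaque_ref) {
        const PktInfo *info = (const PktInfo *)frame->opaque_ref->data;
        /* info->pos and info->size stand in for the deprecated
         * frame->pkt_pos and frame->pkt_size. */
        av_log(dec, AV_LOG_DEBUG, "frame from pkt pos:%"PRId64" size:%d\n",
               info->pos, info->size);
    }
    return 0;
}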


@ -299,7 +299,8 @@ timestamp expressed in seconds, NAN if the input timestamp is unknown
sequential number of the input frame, starting from 0
@item pos
the position in the file of the input frame, NAN if unknown
the position in the file of the input frame, NAN if unknown; deprecated, do
not use
@item w
@item h
@ -3005,10 +3006,6 @@ depends on the filter input pad, and is usually 1/@var{sample_rate}.
@item pts_time
The presentation timestamp of the input frame in seconds.
@item pos
position of the frame in the input stream, -1 if this information in
unavailable and/or meaningless (for example in case of synthetic audio)
@item fmt
The sample format.
@ -7306,7 +7303,7 @@ number of samples consumed by the filter
@item nb_samples
number of samples in the current frame
@item pos
original frame position in the file
original frame position in the file; deprecated, do not use
@item pts
frame PTS
@item sample_rate
@ -10428,7 +10425,8 @@ pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
The number of the input frame, starting from 0.
@item pos
the position in the file of the input frame, NAN if unknown
the position in the file of the input frame, NAN if unknown; deprecated,
do not use
@item t
The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
@ -12772,7 +12770,7 @@ frame count of the input frame starting from 0
@item pos
byte position of the corresponding packet in the input file, NAN if
unspecified
unspecified; deprecated, do not use
@item r
frame rate of the input video, NAN if the input frame rate is unknown
@ -18148,7 +18146,8 @@ format. For example for the pixel format "yuv422p" @var{hsub} is 2 and
the number of input frame, starting from 0
@item pos
the position in the file of the input frame, NAN if unknown
the position in the file of the input frame, NAN if unknown; deprecated,
do not use
@item t
The timestamp, expressed in seconds. It's NAN if the input timestamp is unknown.
@ -18157,7 +18156,7 @@ The timestamp, expressed in seconds. It's NAN if the input timestamp is unknown.
This filter also supports the @ref{framesync} options.
Note that the @var{n}, @var{pos}, @var{t} variables are available only
Note that the @var{n}, @var{t} variables are available only
when evaluation is done @emph{per frame}, and will evaluate to NAN
when @option{eval} is set to @samp{init}.
@ -18312,6 +18311,7 @@ The ordinal index of the main input frame, starting from 0.
@item pos
The byte offset position in the file of the main input frame, NAN if unknown.
Deprecated, do not use.
@item t
The timestamp of the main input frame, expressed in seconds, NAN if unknown.
@ -20196,6 +20196,7 @@ seconds. Only available with @code{eval=frame}.
The position (byte offset) of the frame in the input stream, or NaN if
this information is unavailable and/or meaningless (for example in case of synthetic video).
Only available with @code{eval=frame}.
Deprecated, do not use.
@end table
@subsection Examples
@ -20528,6 +20529,7 @@ seconds. Only available with @code{eval=frame}.
The position (byte offset) of the frame in the input stream, or NaN if
this information is unavailable and/or meaningless (for example in case of synthetic video).
Only available with @code{eval=frame}.
Deprecated, do not use.
@end table
@section scale2ref
@ -21151,10 +21153,6 @@ time base units. The time base unit depends on the filter input pad.
The Presentation TimeStamp of the input frame, expressed as a number of
seconds.
@item pos
The position of the frame in the input stream, or -1 if this information is
unavailable and/or meaningless (for example in case of synthetic video).
@item fmt
The pixel format name.
@ -22235,7 +22233,8 @@ The number of the input frame, starting from 0.
The timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
@item pos
the position in the file of the input frame, NAN if unknown
the position in the file of the input frame, NAN if unknown; deprecated,
do not use
@end table
@subsection Commands
@ -28910,7 +28909,7 @@ This is 1 if the filtered frame is a key-frame, 0 otherwise.
@item pos
the position in the file of the filtered frame, -1 if the information
is not available (e.g. for synthetic video)
is not available (e.g. for synthetic video); deprecated, do not use
@item scene @emph{(video only)}
value between 0 and 1 to indicate a new scene; a low value reflects a low
@ -29100,7 +29099,7 @@ constants:
@table @option
@item POS
Original position in the file of the frame, or undefined if undefined
for the current frame.
for the current frame. Deprecated, do not use.
@item PTS
The presentation timestamp in input.
@ -29248,7 +29247,7 @@ the time in seconds of the current frame
@item POS
original position in the file of the frame, or undefined if undefined
for the current frame
for the current frame; deprecated, do not use
@item PREV_INPTS
The previous input PTS.


@ -1074,7 +1074,11 @@ static int set_output_frame(AVCodecContext *avctx, AVFrame *frame,
frame->pts = pkt->pts;
frame->pkt_dts = pkt->dts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_size = pkt->size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
*got_frame = 1;


@ -549,9 +549,13 @@ static inline CopyRet copy_frame(AVCodecContext *avctx,
frame->pts = pkt_pts;
frame->pkt_pos = -1;
frame->duration = 0;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_pos = -1;
frame->pkt_size = -1;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
if (!priv->need_second_field) {
*got_frame = 1;


@ -623,9 +623,13 @@ static int cuvid_output_frame(AVCodecContext *avctx, AVFrame *frame)
/* CUVIDs opaque reordering breaks the internal pkt logic.
* So set pkt_pts and clear all the other pkt_ fields.
*/
frame->pkt_pos = -1;
frame->duration = 0;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_pos = -1;
frame->pkt_size = -1;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
frame->interlaced_frame = !parsed_frame.is_deinterlacing && !parsed_frame.dispinfo.progressive_frame;


@ -139,8 +139,10 @@ static int extract_packet_props(AVCodecInternal *avci, const AVPacket *pkt)
av_packet_unref(avci->last_pkt_props);
if (pkt) {
ret = av_packet_copy_props(avci->last_pkt_props, pkt);
#if FF_API_FRAME_PKT
if (!ret)
avci->last_pkt_props->stream_index = pkt->size; // Needed for ff_decode_frame_props().
#endif
}
return ret;
}
@ -287,8 +289,12 @@ static inline int decode_simple_internal(AVCodecContext *avctx, AVFrame *frame,
if (!(codec->caps_internal & FF_CODEC_CAP_SETS_PKT_DTS))
frame->pkt_dts = pkt->dts;
if (avctx->codec->type == AVMEDIA_TYPE_VIDEO) {
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
if(!avctx->has_b_frames)
frame->pkt_pos = pkt->pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
//FIXME these should be under if(!avctx->has_b_frames)
/* get_buffer is supposed to set frame parameters */
if (!(avctx->codec->capabilities & AV_CODEC_CAP_DR1)) {
@ -460,8 +466,10 @@ FF_ENABLE_DEPRECATION_WARNINGS
pkt->pts = AV_NOPTS_VALUE;
pkt->dts = AV_NOPTS_VALUE;
if (!(codec->caps_internal & FF_CODEC_CAP_SETS_FRAME_PROPS)) {
#if FF_API_FRAME_PKT
// See extract_packet_props() comment.
avci->last_pkt_props->stream_index = avci->last_pkt_props->stream_index - consumed;
#endif
avci->last_pkt_props->pts = AV_NOPTS_VALUE;
avci->last_pkt_props->dts = AV_NOPTS_VALUE;
}
@ -1313,9 +1321,13 @@ int ff_decode_frame_props_from_pkt(const AVCodecContext *avctx,
};
frame->pts = pkt->pts;
frame->pkt_pos = pkt->pos;
frame->duration = pkt->duration;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_pos = pkt->pos;
frame->pkt_size = pkt->size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
for (int i = 0; i < FF_ARRAY_ELEMS(sd); i++) {
size_t size;
@ -1356,7 +1368,11 @@ int ff_decode_frame_props(AVCodecContext *avctx, AVFrame *frame)
int ret = ff_decode_frame_props_from_pkt(avctx, frame, pkt);
if (ret < 0)
return ret;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_size = pkt->stream_index;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
#if FF_API_REORDERED_OPAQUE
FF_DISABLE_DEPRECATION_WARNINGS


@ -79,8 +79,12 @@ static void uavs3d_output_callback(uavs3d_io_frm_t *dec_frame) {
frm->pts = dec_frame->pts;
frm->pkt_dts = dec_frame->dts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frm->pkt_pos = dec_frame->pkt_pos;
frm->pkt_size = dec_frame->pkt_size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
#if FF_API_FRAME_PICTURE_NUMBER
FF_DISABLE_DEPRECATION_WARNINGS
frm->coded_picture_number = dec_frame->dtr;
@ -175,8 +179,12 @@ static int libuavs3d_decode_frame(AVCodecContext *avctx, AVFrame *frm,
uavs3d_io_frm_t *frm_dec = &h->dec_frame;
buf_end = buf + buf_size;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frm_dec->pkt_pos = avpkt->pos;
frm_dec->pkt_size = avpkt->size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
while (!finish) {
int bs_len;


@ -343,7 +343,11 @@ static int create_subcc_packet(AVFormatContext *avctx, AVFrame *frame,
memcpy(lavfi->subcc_packet.data, sd->data, sd->size);
lavfi->subcc_packet.stream_index = stream_idx;
lavfi->subcc_packet.pts = frame->pts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
lavfi->subcc_packet.pos = frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
return 0;
}
@ -450,7 +454,11 @@ static int lavfi_read_packet(AVFormatContext *avctx, AVPacket *pkt)
pkt->stream_index = stream_idx;
pkt->pts = frame->pts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
pkt->pos = frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
if (st->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
av_frame_free(&frame);


@ -213,12 +213,11 @@ FF_ENABLE_DEPRECATION_WARNINGS
av_channel_layout_describe(&buf->ch_layout, chlayout_str, sizeof(chlayout_str));
av_log(ctx, AV_LOG_INFO,
"n:%"PRId64" pts:%s pts_time:%s pos:%"PRId64" "
"n:%"PRId64" pts:%s pts_time:%s "
"fmt:%s channels:%d chlayout:%s rate:%d nb_samples:%d "
"checksum:%08"PRIX32" ",
inlink->frame_count_out,
av_ts2str(buf->pts), av_ts2timestr(buf->pts, &inlink->time_base),
buf->pkt_pos,
av_get_sample_fmt_name(buf->format), buf->ch_layout.nb_channels, chlayout_str,
buf->sample_rate, buf->nb_samples,
checksum);


@ -48,7 +48,9 @@ static const char *const var_names[] = {
"nb_channels", ///< number of channels
"nb_consumed_samples", ///< number of samples consumed by the filter
"nb_samples", ///< number of samples in the current frame
#if FF_API_FRAME_PKT
"pos", ///< position in the file of the frame
#endif
"pts", ///< frame presentation timestamp
"sample_rate", ///< sample rate
"startpts", ///< PTS at start of stream
@ -288,7 +290,9 @@ static int config_output(AVFilterLink *outlink)
vol->var_values[VAR_N] =
vol->var_values[VAR_NB_CONSUMED_SAMPLES] =
vol->var_values[VAR_NB_SAMPLES] =
#if FF_API_FRAME_PKT
vol->var_values[VAR_POS] =
#endif
vol->var_values[VAR_PTS] =
vol->var_values[VAR_STARTPTS] =
vol->var_values[VAR_STARTT] =
@ -330,7 +334,6 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *buf)
AVFilterLink *outlink = inlink->dst->outputs[0];
int nb_samples = buf->nb_samples;
AVFrame *out_buf;
int64_t pos;
AVFrameSideData *sd = av_frame_get_side_data(buf, AV_FRAME_DATA_REPLAYGAIN);
int ret;
@ -380,8 +383,15 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *buf)
vol->var_values[VAR_T ] = TS2T(buf->pts, inlink->time_base);
vol->var_values[VAR_N ] = inlink->frame_count_out;
pos = buf->pkt_pos;
vol->var_values[VAR_POS] = pos == -1 ? NAN : pos;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
{
int64_t pos;
pos = buf->pkt_pos;
vol->var_values[VAR_POS] = pos == -1 ? NAN : pos;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
if (vol->eval_mode == EVAL_MODE_FRAME)
set_volume(ctx);


@ -272,7 +272,11 @@ static int request_frame(AVFilterLink *outlink)
memcpy(samplesref->data[0], flite->wave_samples,
nb_samples * flite->wave->num_channels * 2);
samplesref->pts = flite->pts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
samplesref->pkt_pos = -1;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
samplesref->sample_rate = flite->wave->sample_rate;
flite->pts += nb_samples;
flite->wave_samples += nb_samples * flite->wave->num_channels;


@ -487,7 +487,9 @@ static const char *const var_names[] = {
enum {
VAR_T,
VAR_N,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_W,
VAR_H,
VAR_VARS_NB
@ -1464,7 +1466,11 @@ int ff_inlink_evaluate_timeline_at_frame(AVFilterLink *link, const AVFrame *fram
{
AVFilterContext *dstctx = link->dst;
int64_t pts = frame->pts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
int64_t pos = frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
if (!dstctx->enable_str)
return 1;
@ -1473,7 +1479,9 @@ int ff_inlink_evaluate_timeline_at_frame(AVFilterLink *link, const AVFrame *fram
dstctx->var_values[VAR_T] = pts == AV_NOPTS_VALUE ? NAN : pts * av_q2d(link->time_base);
dstctx->var_values[VAR_W] = link->w;
dstctx->var_values[VAR_H] = link->h;
#if FF_API_FRAME_PKT
dstctx->var_values[VAR_POS] = pos == -1 ? NAN : pos;
#endif
return fabs(av_expr_eval(dstctx->enable, dstctx->var_values, NULL)) >= 0.5;
}


@ -134,7 +134,9 @@ enum var_name {
VAR_PREV_SELECTED_N,
VAR_KEY,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_SCENE,
@ -339,7 +341,11 @@ static void select_frame(AVFilterContext *ctx, AVFrame *frame)
select->var_values[VAR_N ] = inlink->frame_count_out;
select->var_values[VAR_PTS] = TS2D(frame->pts);
select->var_values[VAR_T ] = TS2D(frame->pts) * av_q2d(inlink->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
select->var_values[VAR_POS] = frame->pkt_pos == -1 ? NAN : frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
select->var_values[VAR_KEY] = frame->key_frame;
select->var_values[VAR_CONCATDEC_SELECT] = get_concatdec_select(frame, av_rescale_q(frame->pts, inlink->time_base, AV_TIME_BASE_Q));


@ -43,7 +43,9 @@
static const char *const var_names[] = {
"N", /* frame number */
"T", /* frame time in seconds */
#if FF_API_FRAME_PKT
"POS", /* original position in the file of the frame */
#endif
"PTS", /* frame pts */
"TS", /* interval start time in seconds */
"TE", /* interval end time in seconds */
@ -56,7 +58,9 @@ static const char *const var_names[] = {
enum var_name {
VAR_N,
VAR_T,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_PTS,
VAR_TS,
VAR_TE,
@ -531,7 +535,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *ref)
double current = TS2T(ref->pts, inlink->time_base);
var_values[VAR_N] = inlink->frame_count_in;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
var_values[VAR_POS] = ref->pkt_pos == -1 ? NAN : ref->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
var_values[VAR_PTS] = TS2D(ref->pts);
var_values[VAR_T] = current;
var_values[VAR_TS] = start;


@ -45,7 +45,9 @@ static const char *const var_names[] = {
"N", ///< frame / sample number (starting at zero)
"NB_CONSUMED_SAMPLES", ///< number of samples consumed by the filter (only audio)
"NB_SAMPLES", ///< number of samples in the current frame (only audio)
#if FF_API_FRAME_PKT
"POS", ///< original position in the file of the frame
#endif
"PREV_INPTS", ///< previous input PTS
"PREV_INT", ///< previous input time in seconds
"PREV_OUTPTS", ///< previous output PTS
@ -70,7 +72,9 @@ enum var_name {
VAR_N,
VAR_NB_CONSUMED_SAMPLES,
VAR_NB_SAMPLES,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_PREV_INPTS,
VAR_PREV_INT,
VAR_PREV_OUTPTS,
@ -161,7 +165,11 @@ static double eval_pts(SetPTSContext *setpts, AVFilterLink *inlink, AVFrame *fra
}
setpts->var_values[VAR_PTS ] = TS2D(pts);
setpts->var_values[VAR_T ] = TS2T(pts, inlink->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
setpts->var_values[VAR_POS ] = !frame || frame->pkt_pos == -1 ? NAN : frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
setpts->var_values[VAR_RTCTIME ] = av_gettime();
if (frame) {
@ -187,11 +195,10 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
frame->pts = D2TS(d);
av_log(inlink->dst, AV_LOG_TRACE,
"N:%"PRId64" PTS:%s T:%f POS:%s",
"N:%"PRId64" PTS:%s T:%f",
(int64_t)setpts->var_values[VAR_N],
d2istr(setpts->var_values[VAR_PTS]),
setpts->var_values[VAR_T],
d2istr(setpts->var_values[VAR_POS]));
setpts->var_values[VAR_T]);
switch (inlink->type) {
case AVMEDIA_TYPE_VIDEO:
av_log(inlink->dst, AV_LOG_TRACE, " INTERLACED:%"PRId64,
@ -242,10 +249,9 @@ static int activate(AVFilterContext *ctx)
if (ff_inlink_acknowledge_status(inlink, &status, &pts)) {
double d = eval_pts(setpts, inlink, NULL, pts);
av_log(ctx, AV_LOG_TRACE, "N:EOF PTS:%s T:%f POS:%s -> PTS:%s T:%f\n",
av_log(ctx, AV_LOG_TRACE, "N:EOF PTS:%s T:%f -> PTS:%s T:%f\n",
d2istr(setpts->var_values[VAR_PTS]),
setpts->var_values[VAR_T],
d2istr(setpts->var_values[VAR_POS]),
d2istr(d), TS2T(d, inlink->time_base));
ff_outlink_set_status(outlink, status, D2TS(d));
return 0;


@ -50,7 +50,9 @@ static const char *const var_names[] = {
"x",
"y",
"n", ///< number of frame
#if FF_API_FRAME_PKT
"pos", ///< position in the file
#endif
"t", ///< timestamp expressed in seconds
NULL
};
@ -68,7 +70,9 @@ enum var_name {
VAR_X,
VAR_Y,
VAR_N,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_T,
VAR_VARS_NB
};
@ -145,7 +149,9 @@ static int config_input(AVFilterLink *link)
s->var_values[VAR_OUT_H] = s->var_values[VAR_OH] = NAN;
s->var_values[VAR_N] = 0;
s->var_values[VAR_T] = NAN;
#if FF_API_FRAME_PKT
s->var_values[VAR_POS] = NAN;
#endif
av_image_fill_max_pixsteps(s->max_step, NULL, pix_desc);
@ -257,8 +263,12 @@ static int filter_frame(AVFilterLink *link, AVFrame *frame)
s->var_values[VAR_N] = link->frame_count_out;
s->var_values[VAR_T] = frame->pts == AV_NOPTS_VALUE ?
NAN : frame->pts * av_q2d(link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
s->var_values[VAR_POS] = frame->pkt_pos == -1 ?
NAN : frame->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
s->var_values[VAR_X] = av_expr_eval(s->x_pexpr, s->var_values, NULL);
s->var_values[VAR_Y] = av_expr_eval(s->y_pexpr, s->var_values, NULL);
/* It is necessary if x is expressed from y */
@ -280,8 +290,8 @@ static int filter_frame(AVFilterLink *link, AVFrame *frame)
s->y &= ~((1 << s->vsub) - 1);
}
av_log(ctx, AV_LOG_TRACE, "n:%d t:%f pos:%f x:%d y:%d x+w:%d y+h:%d\n",
(int)s->var_values[VAR_N], s->var_values[VAR_T], s->var_values[VAR_POS],
av_log(ctx, AV_LOG_TRACE, "n:%d t:%f x:%d y:%d x+w:%d y+h:%d\n",
(int)s->var_values[VAR_N], s->var_values[VAR_T],
s->x, s->y, s->x+s->w, s->y+s->h);
if (desc->flags & AV_PIX_FMT_FLAG_HWACCEL) {


@ -90,11 +90,15 @@ static const char *const var_names[] = {
"x",
"y",
"pict_type",
#if FF_API_FRAME_PKT
"pkt_pos",
#endif
#if FF_API_PKT_DURATION
"pkt_duration",
#endif
#if FF_API_FRAME_PKT
"pkt_size",
#endif
"duration",
NULL
};
@ -133,11 +137,15 @@ enum var_name {
VAR_X,
VAR_Y,
VAR_PICT_TYPE,
#if FF_API_FRAME_PKT
VAR_PKT_POS,
#endif
#if FF_API_PKT_DURATION
VAR_PKT_DURATION,
#endif
#if FF_API_FRAME_PKT
VAR_PKT_SIZE,
#endif
VAR_DURATION,
VAR_VARS_NB
};
@ -1654,7 +1662,12 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
NAN : frame->pts * av_q2d(inlink->time_base);
s->var_values[VAR_PICT_TYPE] = frame->pict_type;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
s->var_values[VAR_PKT_POS] = frame->pkt_pos;
s->var_values[VAR_PKT_SIZE] = frame->pkt_size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
#if FF_API_PKT_DURATION
FF_DISABLE_DEPRECATION_WARNINGS
s->var_values[VAR_PKT_DURATION] = frame->pkt_duration * av_q2d(inlink->time_base);
@ -1665,7 +1678,6 @@ FF_DISABLE_DEPRECATION_WARNINGS
FF_ENABLE_DEPRECATION_WARNINGS
#endif
s->var_values[VAR_DURATION] = frame->duration * av_q2d(inlink->time_base);
s->var_values[VAR_PKT_SIZE] = frame->pkt_size;
s->metadata = frame->metadata;


@ -221,7 +221,6 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
AVFilterLink *outlink = inlink->dst->outputs[0];
EQContext *eq = ctx->priv;
AVFrame *out;
int64_t pos = in->pkt_pos;
const AVPixFmtDescriptor *desc;
int i;
@ -235,7 +234,14 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
desc = av_pix_fmt_desc_get(inlink->format);
eq->var_values[VAR_N] = inlink->frame_count_out;
eq->var_values[VAR_POS] = pos == -1 ? NAN : pos;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
{
int64_t pos = in->pkt_pos;
eq->var_values[VAR_POS] = pos == -1 ? NAN : pos;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
eq->var_values[VAR_T] = TS2T(in->pts, inlink->time_base);
if (eq->eval_mode == EVAL_MODE_FRAME) {


@ -30,7 +30,9 @@
static const char *const var_names[] = {
"n", // frame count
#if FF_API_FRAME_PKT
"pos", // frame position
#endif
"r", // frame rate
"t", // timestamp expressed in seconds
NULL
@ -38,7 +40,9 @@ static const char *const var_names[] = {
enum var_name {
VAR_N,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_R,
VAR_T,
VAR_NB


@ -55,7 +55,9 @@ static const char *const var_names[] = {
"x",
"y",
"n", ///< number of frame
#if FF_API_FRAME_PKT
"pos", ///< position in the file
#endif
"t", ///< timestamp expressed in seconds
NULL
};
@ -290,7 +292,9 @@ static int config_input_overlay(AVFilterLink *inlink)
s->var_values[VAR_Y] = NAN;
s->var_values[VAR_N] = 0;
s->var_values[VAR_T] = NAN;
#if FF_API_FRAME_PKT
s->var_values[VAR_POS] = NAN;
#endif
if ((ret = set_expr(&s->x_pexpr, s->x_expr, "x", ctx)) < 0 ||
(ret = set_expr(&s->y_pexpr, s->y_expr, "y", ctx)) < 0)
@ -1007,12 +1011,18 @@ static int do_blend(FFFrameSync *fs)
return ff_filter_frame(ctx->outputs[0], mainpic);
if (s->eval_mode == EVAL_MODE_FRAME) {
int64_t pos = mainpic->pkt_pos;
s->var_values[VAR_N] = inlink->frame_count_out;
s->var_values[VAR_T] = mainpic->pts == AV_NOPTS_VALUE ?
NAN : mainpic->pts * av_q2d(inlink->time_base);
s->var_values[VAR_POS] = pos == -1 ? NAN : pos;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
{
int64_t pos = mainpic->pkt_pos;
s->var_values[VAR_POS] = pos == -1 ? NAN : pos;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
s->var_values[VAR_OVERLAY_W] = s->var_values[VAR_OW] = second->width;
s->var_values[VAR_OVERLAY_H] = s->var_values[VAR_OH] = second->height;
@ -1020,8 +1030,8 @@ static int do_blend(FFFrameSync *fs)
s->var_values[VAR_MAIN_H ] = s->var_values[VAR_MH] = mainpic->height;
eval_expr(ctx);
av_log(ctx, AV_LOG_DEBUG, "n:%f t:%f pos:%f x:%f xi:%d y:%f yi:%d\n",
s->var_values[VAR_N], s->var_values[VAR_T], s->var_values[VAR_POS],
av_log(ctx, AV_LOG_DEBUG, "n:%f t:%f x:%f xi:%d y:%f yi:%d\n",
s->var_values[VAR_N], s->var_values[VAR_T],
s->var_values[VAR_X], s->x,
s->var_values[VAR_Y], s->y);
}


@ -34,7 +34,9 @@ enum var_name {
VAR_X,
VAR_Y,
VAR_N,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_T,
VAR_VARS_NB
};


@ -68,7 +68,9 @@ enum var_name {
VAR_X,
VAR_Y,
VAR_N,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_T,
VAR_VARS_NB
};
@ -87,7 +89,9 @@ static const char *const var_names[] = {
"x",
"y",
"n", ///< number of frame
#if FF_API_FRAME_PKT
"pos", ///< position in the file
#endif
"t", ///< timestamp expressed in seconds
NULL
};
@ -238,8 +242,6 @@ static int overlay_cuda_blend(FFFrameSync *fs)
AVFrame *input_main, *input_overlay;
int pos = 0;
ctx->cu_ctx = cuda_ctx;
// read main and overlay frames from inputs
@ -268,11 +270,19 @@ static int overlay_cuda_blend(FFFrameSync *fs)
}
if (ctx->eval_mode == EVAL_MODE_FRAME) {
pos = input_main->pkt_pos;
ctx->var_values[VAR_N] = inlink->frame_count_out;
ctx->var_values[VAR_T] = input_main->pts == AV_NOPTS_VALUE ?
NAN : input_main->pts * av_q2d(inlink->time_base);
ctx->var_values[VAR_POS] = pos == -1 ? NAN : pos;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
{
int64_t pos = input_main->pkt_pos;
ctx->var_values[VAR_POS] = pos == -1 ? NAN : pos;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
ctx->var_values[VAR_OVERLAY_W] = ctx->var_values[VAR_OW] = input_overlay->width;
ctx->var_values[VAR_OVERLAY_H] = ctx->var_values[VAR_OH] = input_overlay->height;
ctx->var_values[VAR_MAIN_W ] = ctx->var_values[VAR_MW] = input_main->width;
@ -280,8 +290,8 @@ static int overlay_cuda_blend(FFFrameSync *fs)
eval_expr(avctx);
av_log(avctx, AV_LOG_DEBUG, "n:%f t:%f pos:%f x:%f xi:%d y:%f yi:%d\n",
ctx->var_values[VAR_N], ctx->var_values[VAR_T], ctx->var_values[VAR_POS],
av_log(avctx, AV_LOG_DEBUG, "n:%f t:%f x:%f xi:%d y:%f yi:%d\n",
ctx->var_values[VAR_N], ctx->var_values[VAR_T],
ctx->var_values[VAR_X], ctx->x_position,
ctx->var_values[VAR_Y], ctx->y_position);
}
@ -355,7 +365,9 @@ static int config_input_overlay(AVFilterLink *inlink)
s->var_values[VAR_Y] = NAN;
s->var_values[VAR_N] = 0;
s->var_values[VAR_T] = NAN;
#if FF_API_FRAME_PKT
s->var_values[VAR_POS] = NAN;
#endif
if ((ret = set_expr(&s->x_pexpr, s->x_expr, "x", ctx)) < 0 ||
(ret = set_expr(&s->y_pexpr, s->y_expr, "y", ctx)) < 0)


@ -56,7 +56,9 @@ static const char *const var_names[] = {
"ovsub",
"n",
"t",
#if FF_API_FRAME_PKT
"pos",
#endif
"main_w",
"main_h",
"main_a",
@ -84,7 +86,9 @@ enum var_name {
VAR_OVSUB,
VAR_N,
VAR_T,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_S2R_MAIN_W,
VAR_S2R_MAIN_H,
VAR_S2R_MAIN_A,
@ -205,7 +209,9 @@ static int check_exprs(AVFilterContext *ctx)
if (scale->eval_mode == EVAL_MODE_INIT &&
(vars_w[VAR_N] || vars_h[VAR_N] ||
vars_w[VAR_T] || vars_h[VAR_T] ||
#if FF_API_FRAME_PKT
vars_w[VAR_POS] || vars_h[VAR_POS] ||
#endif
vars_w[VAR_S2R_MAIN_N] || vars_h[VAR_S2R_MAIN_N] ||
vars_w[VAR_S2R_MAIN_T] || vars_h[VAR_S2R_MAIN_T] ||
vars_w[VAR_S2R_MAIN_POS] || vars_h[VAR_S2R_MAIN_POS]) ) {
@ -738,8 +744,16 @@ static int scale_frame(AVFilterLink *link, AVFrame *in, AVFrame **frame_out)
if (scale->eval_mode == EVAL_MODE_FRAME &&
!frame_changed &&
ctx->filter != &ff_vf_scale2ref &&
!(vars_w[VAR_N] || vars_w[VAR_T] || vars_w[VAR_POS]) &&
!(vars_h[VAR_N] || vars_h[VAR_T] || vars_h[VAR_POS]) &&
!(vars_w[VAR_N] || vars_w[VAR_T]
#if FF_API_FRAME_PKT
|| vars_w[VAR_POS]
#endif
) &&
!(vars_h[VAR_N] || vars_h[VAR_T]
#if FF_API_FRAME_PKT
|| vars_h[VAR_POS]
#endif
) &&
scale->w && scale->h)
goto scale;
@ -761,11 +775,19 @@ static int scale_frame(AVFilterLink *link, AVFrame *in, AVFrame **frame_out)
if (ctx->filter == &ff_vf_scale2ref) {
scale->var_values[VAR_S2R_MAIN_N] = link->frame_count_out;
scale->var_values[VAR_S2R_MAIN_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
scale->var_values[VAR_S2R_MAIN_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
} else {
scale->var_values[VAR_N] = link->frame_count_out;
scale->var_values[VAR_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
scale->var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
link->dst->inputs[0]->format = in->format;
@ -915,7 +937,11 @@ static int filter_frame_ref(AVFilterLink *link, AVFrame *in)
if (scale->eval_mode == EVAL_MODE_FRAME) {
scale->var_values[VAR_N] = link->frame_count_out;
scale->var_values[VAR_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
scale->var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
return ff_filter_frame(outlink, in);


@ -84,7 +84,9 @@ static const char *const var_names[] = {
"dar",
"n",
"t",
#if FF_API_FRAME_PKT
"pos",
#endif
"main_w",
"main_h",
"main_a",
@ -92,7 +94,9 @@ static const char *const var_names[] = {
"main_dar", "mdar",
"main_n",
"main_t",
#if FF_API_FRAME_PKT
"main_pos",
#endif
NULL
};
@ -106,7 +110,9 @@ enum var_name {
VAR_DAR,
VAR_N,
VAR_T,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_S2R_MAIN_W,
VAR_S2R_MAIN_H,
VAR_S2R_MAIN_A,
@ -114,7 +120,9 @@ enum var_name {
VAR_S2R_MAIN_DAR, VAR_S2R_MDAR,
VAR_S2R_MAIN_N,
VAR_S2R_MAIN_T,
#if FF_API_FRAME_PKT
VAR_S2R_MAIN_POS,
#endif
VARS_NB
};
@ -204,8 +212,11 @@ static int check_exprs(AVFilterContext* ctx)
vars_w[VAR_S2R_MAIN_DAR] || vars_h[VAR_S2R_MAIN_DAR] ||
vars_w[VAR_S2R_MDAR] || vars_h[VAR_S2R_MDAR] ||
vars_w[VAR_S2R_MAIN_N] || vars_h[VAR_S2R_MAIN_N] ||
vars_w[VAR_S2R_MAIN_T] || vars_h[VAR_S2R_MAIN_T] ||
vars_w[VAR_S2R_MAIN_POS] || vars_h[VAR_S2R_MAIN_POS])) {
vars_w[VAR_S2R_MAIN_T] || vars_h[VAR_S2R_MAIN_T]
#if FF_API_FRAME_PKT
|| vars_w[VAR_S2R_MAIN_POS] || vars_h[VAR_S2R_MAIN_POS]
#endif
)) {
av_log(ctx, AV_LOG_ERROR, "Expressions with scale2ref_npp variables are not valid in scale_npp filter.\n");
return AVERROR(EINVAL);
}
@ -213,11 +224,16 @@ static int check_exprs(AVFilterContext* ctx)
if (scale->eval_mode == EVAL_MODE_INIT &&
(vars_w[VAR_N] || vars_h[VAR_N] ||
vars_w[VAR_T] || vars_h[VAR_T] ||
#if FF_API_FRAME_PKT
vars_w[VAR_POS] || vars_h[VAR_POS] ||
#endif
vars_w[VAR_S2R_MAIN_N] || vars_h[VAR_S2R_MAIN_N] ||
vars_w[VAR_S2R_MAIN_T] || vars_h[VAR_S2R_MAIN_T] ||
vars_w[VAR_S2R_MAIN_POS] || vars_h[VAR_S2R_MAIN_POS]) ) {
av_log(ctx, AV_LOG_ERROR, "Expressions with frame variables 'n', 't', 'pos' are not valid in init eval_mode.\n");
vars_w[VAR_S2R_MAIN_T] || vars_h[VAR_S2R_MAIN_T]
#if FF_API_FRAME_PKT
|| vars_w[VAR_S2R_MAIN_POS] || vars_h[VAR_S2R_MAIN_POS]
#endif
) ) {
av_log(ctx, AV_LOG_ERROR, "Expressions with frame variables 'n', 't', are not valid in init eval_mode.\n");
return AVERROR(EINVAL);
}
@ -790,9 +806,16 @@ static int nppscale_scale(AVFilterLink *link, AVFrame *out, AVFrame *in)
av_expr_count_vars(s->h_pexpr, vars_h, VARS_NB);
if (s->eval_mode == EVAL_MODE_FRAME && !frame_changed && ctx->filter != &ff_vf_scale2ref_npp &&
!(vars_w[VAR_N] || vars_w[VAR_T] || vars_w[VAR_POS]) &&
!(vars_h[VAR_N] || vars_h[VAR_T] || vars_h[VAR_POS]) &&
s->w && s->h)
!(vars_w[VAR_N] || vars_w[VAR_T]
#if FF_API_FRAME_PKT
|| vars_w[VAR_POS]
#endif
) &&
!(vars_h[VAR_N] || vars_h[VAR_T]
#if FF_API_FRAME_PKT
|| vars_h[VAR_POS]
#endif
) && s->w && s->h)
goto scale;
if (s->eval_mode == EVAL_MODE_INIT) {
@ -813,11 +836,19 @@ static int nppscale_scale(AVFilterLink *link, AVFrame *out, AVFrame *in)
if (ctx->filter == &ff_vf_scale2ref_npp) {
s->var_values[VAR_S2R_MAIN_N] = link->frame_count_out;
s->var_values[VAR_S2R_MAIN_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
s->var_values[VAR_S2R_MAIN_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
} else {
s->var_values[VAR_N] = link->frame_count_out;
s->var_values[VAR_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
s->var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
link->format = in->format;
@ -932,7 +963,11 @@ static int nppscale_filter_frame_ref(AVFilterLink *link, AVFrame *in)
if (scale->eval_mode == EVAL_MODE_FRAME) {
scale->var_values[VAR_N] = link->frame_count_out;
scale->var_values[VAR_T] = TS2T(in->pts, link->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
scale->var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
}
return ff_filter_frame(outlink, in);


@ -714,12 +714,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *frame)
av_log(ctx, AV_LOG_INFO,
"n:%4"PRId64" pts:%7s pts_time:%-7s duration:%7"PRId64
" duration_time:%-7s pos:%9"PRId64" "
" duration_time:%-7s "
"fmt:%s sar:%d/%d s:%dx%d i:%c iskey:%d type:%c ",
inlink->frame_count_out,
av_ts2str(frame->pts), av_ts2timestr(frame->pts, &inlink->time_base),
frame->duration, av_ts2timestr(frame->duration, &inlink->time_base),
frame->pkt_pos,
desc->name,
frame->sample_aspect_ratio.num, frame->sample_aspect_ratio.den,
frame->width, frame->height,


@ -64,8 +64,16 @@ static int query_formats(AVFilterContext *ctx)
return ff_set_common_formats(ctx, ff_formats_pixdesc_filter(0, reject_flags));
}
static const char *const var_names[] = { "w", "h", "a", "n", "t", "pos", "sar", "dar", NULL };
enum { VAR_W, VAR_H, VAR_A, VAR_N, VAR_T, VAR_POS, VAR_SAR, VAR_DAR, VAR_VARS_NB };
static const char *const var_names[] = { "w", "h", "a", "n", "t",
#if FF_API_FRAME_PKT
"pos",
#endif
"sar", "dar", NULL };
enum { VAR_W, VAR_H, VAR_A, VAR_N, VAR_T,
#if FF_API_FRAME_PKT
VAR_POS,
#endif
VAR_SAR, VAR_DAR, VAR_VARS_NB };
static int filter_frame(AVFilterLink *inlink, AVFrame *in)
{
@ -90,7 +98,11 @@ static int filter_frame(AVFilterLink *inlink, AVFrame *in)
var_values[VAR_DAR] = var_values[VAR_A] * var_values[VAR_SAR];
var_values[VAR_N] = inlink->frame_count_out;
var_values[VAR_T] = in->pts == AV_NOPTS_VALUE ? NAN : in->pts * av_q2d(inlink->time_base);
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
ret = av_expr_parse_and_eval(&dw, s->w,
var_names, &var_values[0],


@ -48,8 +48,12 @@ FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_duration = 0;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
frame->pkt_pos = -1;
frame->pkt_size = -1;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
frame->time_base = (AVRational){ 0, 1 };
frame->key_frame = 1;
frame->sample_aspect_ratio = (AVRational){ 0, 1 };
@ -279,8 +283,12 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy)
dst->sample_rate = src->sample_rate;
dst->opaque = src->opaque;
dst->pkt_dts = src->pkt_dts;
#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
dst->pkt_pos = src->pkt_pos;
dst->pkt_size = src->pkt_size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
#if FF_API_PKT_DURATION
FF_DISABLE_DEPRECATION_WARNINGS
dst->pkt_duration = src->pkt_duration;


@ -622,12 +622,17 @@ typedef struct AVFrame {
*/
int64_t best_effort_timestamp;
#if FF_API_FRAME_PKT
/**
* reordered pos from the last AVPacket that has been input into the decoder
* - encoding: unused
* - decoding: Read by user.
* @deprecated use AV_CODEC_FLAG_COPY_OPAQUE to pass through arbitrary user
* data from packets to frames
*/
attribute_deprecated
int64_t pkt_pos;
#endif
#if FF_API_PKT_DURATION
/**
@ -673,14 +678,19 @@ typedef struct AVFrame {
int channels;
#endif
#if FF_API_FRAME_PKT
/**
* size of the corresponding packet containing the compressed
* frame.
* It is set to a negative value if unknown.
* - encoding: unused
* - decoding: set by libavcodec, read by user.
* @deprecated use AV_CODEC_FLAG_COPY_OPAQUE to pass through arbitrary user
* data from packets to frames
*/
attribute_deprecated
int pkt_size;
#endif
/**
* For hwaccel-format frames, this should be a reference to the


@ -114,6 +114,7 @@
#define FF_API_REORDERED_OPAQUE (LIBAVUTIL_VERSION_MAJOR < 59)
#define FF_API_FRAME_PICTURE_NUMBER (LIBAVUTIL_VERSION_MAJOR < 59)
#define FF_API_HDR_VIVID_THREE_SPLINE (LIBAVUTIL_VERSION_MAJOR < 59)
#define FF_API_FRAME_PKT (LIBAVUTIL_VERSION_MAJOR < 59)
/**
* @}
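
As context for the #if FF_API_FRAME_PKT blocks added throughout the diff:
FF_API_FRAME_PKT, defined in the last hunk above, expands to
(LIBAVUTIL_VERSION_MAJOR < 59), so every guarded block compiles out at the
next major bump, while the FF_DISABLE_DEPRECATION_WARNINGS /
FF_ENABLE_DEPRECATION_WARNINGS wrappers (FFmpeg-internal macros) silence the
-Wdeprecated-declarations warnings that the attribute_deprecated fields would
otherwise trigger inside the library itself. A representative instance,
mirroring the ff_decode_frame_props_from_pkt() hunk above:

#if FF_API_FRAME_PKT
FF_DISABLE_DEPRECATION_WARNINGS
    /* keep populating the deprecated fields while they still exist */
    frame->pkt_pos  = pkt->pos;
    frame->pkt_size = pkt->size;
FF_ENABLE_DEPRECATION_WARNINGS
#endif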