RIFE - Real-Time Intermediate Flow Estimation for Video Frame Interpolation

YouTube | BiliBili | Colab | Tutorial

Pinned Software: RIFE-App | FlowFrames | SVFI (Chinese)

16X interpolation results from two input images:

[demo GIFs]

Introduction

This project is the implementation of RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation. Currently, our model runs at 30+ FPS for 2X 720p interpolation on a 2080Ti GPU. It supports arbitrary-timestep interpolation between a pair of images.

Software

Flowframes | SVFI (Chinese) | Waifu2x-Extension-GUI | Autodesk Flame | SVP

RIFE-App (paid) | Steam-VFI (paid)

We are not responsible for, and do not participate in, the development of the above software. In accordance with the open-source license, we respect the commercial use made by other developers.

VapourSynth-RIFE | RIFE-ncnn-vulkan | VapourSynth-RIFE-ncnn-Vulkan

If you are a developer, you are welcome to follow Practical-RIFE, which aims to make RIFE more practical for users by adding various features and designing new, faster models.

CLI Usage

Installation

git clone git@github.com:hzwer/arXiv2021-RIFE.git
cd arXiv2021-RIFE
pip3 install -r requirements.txt
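
Download a pretrained model and unpack it so that the weight files end up in train_log/*.pkl; the inference and benchmark scripts load the weights from that folder. A minimal sketch, assuming the archive is named RIFE_trained_model_v3.6.zip (the release the Colab demo uses):

mkdir -p train_log
unzip RIFE_trained_model_v3.6.zip -d train_log   # make sure the *.pkl files sit directly in train_log/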

Run

Video Frame Interpolation

You can use our demo video or your own video.

python3 inference_video.py --exp=1 --video=video.mp4 

(generates video_2X_xxfps.mp4)

python3 inference_video.py --exp=2 --video=video.mp4

(for 4X interpolation)

python3 inference_video.py --exp=1 --video=video.mp4 --scale=0.5

(If your video has a very high resolution, such as 4K, we recommend setting --scale=0.5 (default 1.0). If you get disordered patterns in your output, try --scale=2.0. This parameter controls the processing resolution of the optical flow model.)

python3 inference_video.py --exp=2 --img=input/

(to read the video from PNG frames, e.g. input/0.png ... input/612.png; make sure the PNG filenames are numbers)
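
If you start from a video file, you can extract numbered PNG frames with ffmpeg first; a minimal sketch (the input/ directory name is just an example, and frame numbering starts at 1):

mkdir -p input
ffmpeg -i video.mp4 input/%d.png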

python3 inference_video.py --exp=2 --video=video.mp4 --fps=60

(adds a slow-motion effect; the audio will be removed)

python3 inference_video.py --video=video.mp4 --montage --png

(if you want to montage the original video and save the output as PNG frames)

Image Interpolation

python3 inference_img.py --img img0.png img1.png --exp=4

(produces 2^4 = 16X interpolation results) Afterwards, you can use the PNGs to generate an mp4:

ffmpeg -r 10 -f image2 -i output/img%d.png -s 448x256 -c:v libx264 -pix_fmt yuv420p -q:v 0 -q:a 0 output/slomo.mp4

You can also use pngs to generate gif:

ffmpeg -r 10 -f image2 -i output/img%d.png -s 448x256 -vf "split[s0][s1];[s0]palettegen=stats_mode=single[p];[s1][p]paletteuse=new=1" output/slomo.gif
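
RIFE also supports a single arbitrary timestep between the two input images (see the Introduction). As a sketch, recent versions of inference_img.py expose this through a --ratio argument in the 0-1 range (check your checkout's argument list, since the flag name may differ):

python3 inference_img.py --img img0.png img1.png --ratio=0.4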

Run in Docker

Place the pretrained models in train_log/*.pkl (as above).

Building the container:

docker build -t rife -f docker/Dockerfile .

Running the container:

docker run --rm -it -v $PWD:/host rife:latest inference_video --exp=1 --video=untitled.mp4 --output=untitled_rife.mp4
docker run --rm -it -v $PWD:/host rife:latest inference_img --img img0.png img1.png --exp=4

Using GPU acceleration (requires proper GPU drivers for Docker):

docker run --rm -it --gpus all -v /dev/dri:/dev/dri -v $PWD:/host rife:latest inference_video --exp=1 --video=untitled.mp4 --output=untitled_rife.mp4

Evaluation

Download the RIFE model or the RIFE_m model reported in our paper.

UCF101: Download the UCF101 dataset to ./UCF101/ucf101_interp_ours/

Vimeo90K: Download the Vimeo90K dataset to ./vimeo_interp_test

MiddleBury: Download the MiddleBury OTHER dataset to ./other-data and ./other-gt-interp

HD: Download the HD dataset to ./HD_dataset. We also provide a Google Drive download link.
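
For reference, the benchmark scripts then expect a layout roughly like this (assumed; adjust the paths in the scripts if yours differ):

arXiv2021-RIFE/
├── train_log/                   # pretrained *.pkl weights
├── UCF101/ucf101_interp_ours/
├── vimeo_interp_test/
├── other-data/                  # MiddleBury OTHER inputs
├── other-gt-interp/             # MiddleBury OTHER ground truth
└── HD_dataset/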

# RIFE
python3 benchmark/UCF101.py
# "PSNR: 35.282 SSIM: 0.9688"
python3 benchmark/Vimeo90K.py
# "PSNR: 35.615 SSIM: 0.9779"
python3 benchmark/MiddleBury_Other.py
# "IE: 1.956"
python3 benchmark/HD.py
# "PSNR: 32.14"

# RIFE_m
python3 benchmark/HD_multi_4X.py
# "PSNR: 22.96(544*1280), 31.87(720p), 34.25(1080p)"

Training and Reproduction

Download the Vimeo90K dataset.

We use 16 CPUs, 4 GPUs, and 20 GB of memory for training:

python3 -m torch.distributed.launch --nproc_per_node=4 train.py --world_size=4

Revision History

First of all, we apologize for the trouble that the multiple submission versions have caused our followers. We will no longer modify the weights or the method of the baseline model. We have also done our best to verify the test results of all other methods. You are welcome to cite our results.

Major Revisions

2021.3.18 arXiv: Revised the main experimental data, especially runtime-related issues.

2021.8.12 arXiv: Removed the pretrained-model dependency and proposed a privileged distillation scheme for frame interpolation.

2021.11.17 arXiv: Added support for arbitrary-time frame interpolation (a.k.a. RIFEm) and added more experiments.

Citation

@article{huang2021rife,
  title={RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation},
  author={Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang},
  journal={arXiv preprint arXiv:2011.06294},
  year={2021}
}

Reference

Optical Flow: ARFlow | pytorch-liteflownet | RAFT | pytorch-PWCNet

Video Interpolation: DVF | TOflow | SepConv | DAIN | CAIN | MEMC-Net | SoftSplat | BMBC | EDSC | EQVI

Sponsor

Many thanks to Grisk.

Thank you for your support! PayPal sponsor: https://www.paypal.com/paypalme/hzwer
