In this paper, we present continuous parametric optical flow, a parametric representation of dense, continuous motion over an arbitrary time interval. In contrast to existing discrete-time representations (i.e., flow between consecutive frames), this new representation transforms frame-to-frame pixel correspondences into dense continuous flow. In particular, we present a temporal-parametric model that employs B-splines to fit point trajectories using a limited number of frames. To further improve the stability and robustness of the trajectories, we add an encoder with a neural ordinary differential equation (ODE) to represent features associated with specific times. We also contribute a synthetic dataset and introduce two evaluation perspectives to measure the accuracy and robustness of continuous flow estimation. Benefiting from the combination of explicit parametric modeling and implicit feature optimization, our model focuses on motion continuity and outperforms flow-based and point-tracking approaches in fitting long-term and variable-length sequences.
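The core idea of the temporal-parametric model is to represent a point trajectory as a uniform cubic B-spline whose control points are fit from a limited number of observed frames, after which the trajectory can be queried at any continuous time. The sketch below illustrates that fitting-and-querying step with plain NumPy least squares; the control-point count, normalization of time to [0, 1], and the least-squares fit are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bspline_basis(u):
    """Uniform cubic B-spline blending weights for local parameter u in [0, 1]."""
    return np.array([
        (1 - u) ** 3,
        3 * u**3 - 6 * u**2 + 4,
        -3 * u**3 + 3 * u**2 + 3 * u + 1,
        u**3,
    ]) / 6.0

def design_matrix(ts, n_ctrl):
    """Each row maps the control points to the trajectory position at time t."""
    n_seg = n_ctrl - 3                        # number of cubic segments
    A = np.zeros((len(ts), n_ctrl))
    for r, t in enumerate(ts):
        s = min(int(t * n_seg), n_seg - 1)    # which segment t falls in
        u = t * n_seg - s                     # local parameter within the segment
        A[r, s:s + 4] = bspline_basis(u)
    return A

# Fit control points to a 2-D point trajectory observed at 12 frames
# (least squares), then evaluate the continuous trajectory densely.
ts_obs = np.linspace(0.0, 1.0, 12)            # observed frame times, normalized
traj = np.stack([np.cos(2 * ts_obs), np.sin(2 * ts_obs)], axis=1)
A = design_matrix(ts_obs, n_ctrl=7)
ctrl, *_ = np.linalg.lstsq(A, traj, rcond=None)   # (7, 2) control points
ts_query = np.linspace(0.0, 1.0, 50)          # arbitrary continuous query times
dense_traj = design_matrix(ts_query, n_ctrl=7) @ ctrl   # (50, 2) dense trajectory
```

A handful of control points is enough to reproduce a smooth trajectory closely, which is why the parametric form stays compact even for long sequences.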
Python 3.8.10 with a basic conda environment. Install the requirements as follows:
pip install -r requirements.txt
Dataset Preparation
Put the *.pkl file into folder ./datasets/tap_vid_davis, and the *.pkl files into folder ./datasets/tap_vid_kinetics.
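A quick way to check that the downloaded files are in place is to load one and inspect its contents. The snippet below writes and reads a dummy pickle with the layout commonly used by TAP-Vid releases; the key names and array shapes are assumptions here, so verify them against the actual *.pkl files.

```python
import os
import pickle
import tempfile

import numpy as np

# Dummy sample in the layout TAP-Vid-style pickles commonly use.
# NOTE: the keys and shapes below are assumptions for illustration only;
# check the released *.pkl files for the real structure.
sample = {
    "video": np.zeros((24, 256, 256, 3), dtype=np.uint8),  # (frames, H, W, RGB)
    "points": np.zeros((8, 24, 2), dtype=np.float32),      # (tracks, frames, xy)
    "occluded": np.zeros((8, 24), dtype=bool),             # per-frame occlusion flags
}
path = os.path.join(tempfile.mkdtemp(), "demo.pkl")
with open(path, "wb") as f:
    pickle.dump(sample, f)

# Loading a real file from ./datasets/tap_vid_davis would look the same.
with open(path, "rb") as f:
    data = pickle.load(f)
print(sorted(data.keys()))   # → ['occluded', 'points', 'video']
```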
Metric Selection
Inference
python eval_real_scene.py --dataset_mode davis --method ade_rmse
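The --method ade_rmse flag selects the evaluation metrics. A common formulation of these two metrics is sketched below; the exact definitions used by eval_real_scene.py may differ (e.g., in occlusion handling), so treat this as an illustration rather than the script's implementation.

```python
import numpy as np

def ade(pred, gt):
    """Average displacement error: mean Euclidean distance over points and frames."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def rmse(pred, gt):
    """Root-mean-square error over all trajectory coordinates."""
    return np.sqrt(np.mean((pred - gt) ** 2))

# Toy check: predictions offset from ground truth by a constant (3, 4) pixels.
gt = np.zeros((5, 10, 2))           # (num_points, num_frames, xy)
pred = gt + np.array([3.0, 4.0])
print(ade(pred, gt))                # → 5.0 (the 3-4-5 displacement)
print(rmse(pred, gt))               # → ≈ 3.536, i.e. sqrt((9 + 16) / 2)
```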
Download the weight cp_flow_30.pth and put it into folder ./checkpoints.
We recommend parallel training with multiple GPUs. dataset_dir and save_path denote the path where the training dataset is stored and the directory where the model weights are saved, respectively. A training example is shown below:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr="YOUR_IP" --master_port="YOUR_PORT" kubric_sparse_train_all.py --dataset_dir "YOUR_DATASET_PATH" --save_path "YOUR_SAVE_PATH"
We also provide the sampler code for variable-length training. Please refer to kubric_dataset.py.
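To make the idea of variable-length sampling concrete, here is a minimal sketch that cuts a sequence into clips of random length. The clip-length range and the (start, length) interface are hypothetical; the repository's actual sampler lives in kubric_dataset.py and may work differently.

```python
import random

def variable_length_clips(num_frames, min_len=8, max_len=24, seed=0):
    """Yield (start, length) windows of random length covering a sequence.

    A minimal sketch of variable-length sampling for training; all
    parameters here are illustrative assumptions.
    """
    rng = random.Random(seed)
    start = 0
    while start < num_frames:
        length = min(rng.randint(min_len, max_len), num_frames - start)
        yield start, length
        start += length

clips = list(variable_length_clips(100))
print(clips)   # contiguous windows whose lengths sum to 100
```

Training on clips of varying length in this spirit is what lets the model fit sequences whose duration is not fixed in advance.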
Due to the large scale of the data transfer, we will upload the full training dataset simulated by Kubric later.
Thanks to the following works for their inspiration: