This repository contains the source code for our paper *SplatFlow: Learning Multi-frame Optical Flow via Splatting* (IJCV 2024).
Our code has been successfully tested in the following environment:

```Shell
conda create -n splatflow python=3.8
conda activate splatflow
pip install torch==1.8.2 torchvision==0.9.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111
pip install einops==0.4.1
pip install cupy-cuda111
pip install pillow==9.5.0
pip install opencv-python==4.1.2.30
```
## Quick start
To run inference on KITTI data with the model ([weights](https://pan.baidu.com/s/1v3WiEzkAXPtchVxEDu-vRw&pwd=sm11) after K-finetune), run

```Shell
bash script/demo.sh
```
## Datasets

To train / test SplatFlow, you will need to download the required datasets. You can create symbolic links in the `data` folder pointing to wherever the datasets are stored:
```Shell
data/
│
├─ FlyingThings3D/
│  ├─ frames_cleanpass/
│  ├─ frames_finalpass/
│  └─ optical_flow/
│
├─ KITTI/
│  ├─ training/
│  └─ testing/
│
└─ demo/
   ├─ image/
   └─ pred/
```
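For example, if the downloaded datasets live under a shared storage path (here `/mnt/datasets`, a placeholder location), the links could be created as follows:

```Shell
# Create the data/ directory and link each dataset into it.
# /mnt/datasets/... are hypothetical paths; point them at your actual downloads.
mkdir -p data
ln -sfn /mnt/datasets/FlyingThings3D data/FlyingThings3D
ln -sfn /mnt/datasets/KITTI data/KITTI
```

`ln -sfn` replaces any existing link, so the commands are safe to re-run after moving the datasets.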
## Training and evaluation

Train SplatFlow on Things:

```Shell
bash script/train_things.sh
```

Test SplatFlow on Things:

```Shell
bash script/test_things.sh
```

Test SplatFlow on KITTI:

```Shell
bash script/test_kitti.sh
```
## Acknowledgements

We would like to thank RAFT, GMA, and SoftSplat for publicly releasing their code and data.
## Citation

If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:
```bibtex
@article{wang2024splatflow,
  title={SplatFlow: Learning Multi-frame Optical Flow via Splatting},
  author={Wang, Bo and Zhang, Yifan and Li, Jian and Yu, Yang and Sun, Zhenping and Liu, Li and Hu, Dewen},
  journal={International Journal of Computer Vision},
  pages={1--23},
  year={2024},
  publisher={Springer}
}
```