# Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion (MiVOS)

Ho Kei Cheng, Yu-Wing Tai, Chi-Keung Tang
CVPR 2021
[arXiv] [Paper PDF] [Project Page] [Demo] [Papers with Code] [Supplementary Material]
Credit (left to right): DAVIS 2017, Academy of Historical Fencing, Modern History TV
We manage the project using three different repositories, corresponding to the modules in the paper title. This is the main repo; see also Mask-Propagation and Scribble-to-Mask.
| | MiVOS | Mask-Propagation | Scribble-to-Mask |
| --- | --- | --- | --- |
| DAVIS/YouTube semi-supervised evaluation | :x: | :heavy_check_mark: | :x: |
| DAVIS interactive evaluation | :heavy_check_mark: | :x: | :x: |
| User interaction GUI tool | :heavy_check_mark: | :x: | :x: |
| Dense Correspondences | :x: | :heavy_check_mark: | :x: |
| Train propagation module | :x: | :heavy_check_mark: | :x: |
| Train S2M (interaction) module | :x: | :x: | :heavy_check_mark: |
| Train fusion module | :heavy_check_mark: | :x: | :x: |
| Generate more synthetic data | :heavy_check_mark: | :x: | :x: |
We used these packages/versions in the development of this project. Higher versions of the same packages will likely also work. This is not an exhaustive list -- other common Python packages (e.g. Pillow) are expected and not listed.
- PyTorch `1.7.1`
- torchvision `0.8.2`
- OpenCV `4.2.0`
- networkx `2.4` for DAVIS

Refer to the official PyTorch guide for installing PyTorch/torchvision. The rest can be installed by:
```bash
pip install PyQt5 davisinteractive progressbar2 opencv-python networkx gitpython gdown Cython
```
Run

```bash
python download_model.py
```

to get all the required models.

Start the interactive GUI with

```bash
python interactive_gui.py --video <path to video>
```

or

```bash
python interactive_gui.py --images <path to a folder of images>
```

An example video has been prepared for you at `example/example.mp4`. Specify the number of objects with `--num_objects <number_of_objects>`. See all the argument options with `python interactive_gui.py --help`.
See `eval_interactive_davis.py`. If you have downloaded the datasets and pretrained models using our script, you only need to specify the output path:

```bash
python eval_interactive_davis.py --output [somewhere]
```
Go to this repo: Mask-Propagation.
All results are generated using the unmodified official DAVIS interactive bot without saving masks (`--save_mask` not specified) and with an RTX 2080 Ti. We follow the official protocol.
Precomputed result, with the json summary: [Google Drive] [OneDrive]
These results can be reproduced with `eval_interactive_davis.py`.
| Model | AUC-J&F | J&F @ 60s |
| --- | --- | --- |
| Baseline | 86.0 | 86.6 |
| (+) Top-k | 87.2 | 87.8 |
| (+) BL30K pretraining | 87.4 | 88.0 |
| (+) Learnable fusion | 87.6 | 88.2 |
| (+) Difference-aware fusion (full model) | 87.9 | 88.5 |
| Full model, without BL30K for propagation/fusion | 87.4 | 88.0 |
| Full model, STCN backbone | 88.4 | 88.8 |
`python download_model.py` should get you all the models that you need. (`pip install gdown` required.)
Datasets should be arranged in the following layout. You can use `download_datasets.py` (same as the one in Mask-Propagation) to get the DAVIS dataset, and manually download and extract fusion_data ([OneDrive]) and BL30K.
```
├── BL30K
├── DAVIS
│   └── 2017
│       ├── test-dev
│       │   ├── Annotations
│       │   └── ...
│       └── trainval
│           ├── Annotations
│           └── ...
├── fusion_data
└── MiVOS
```
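To catch path mistakes before a long training run, the layout above can be sanity-checked with a few lines of Python. This is a minimal sketch (the `check_layout` helper and the choice of paths to test are our own, not part of the repo's scripts):

```python
import os

# A few representative paths from the layout above.
EXPECTED = [
    "BL30K",
    "DAVIS/2017/trainval/Annotations",
    "DAVIS/2017/test-dev/Annotations",
    "fusion_data",
]

def check_layout(root):
    """Return the expected paths that are missing under `root`."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]

# Example usage, assuming the current directory is the data root:
for p in check_layout("."):
    print(f"missing: {p}")
```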
BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a similar format as DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. There are 3-5 objects per video, and each object follows a random smooth trajectory -- we tried to optimize the trajectories greedily to minimize object intersection (not guaranteed), and occlusions are still possible (they happen often in practice). See `generation/blender/generate_yaml.py` for details.
We noted that using about half of the data is sufficient to reach full performance (although we still used all of it), while using less than one-sixth (5K videos) is insufficient.
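Because BL30K follows the DAVIS/YouTubeVOS layout (one folder of sequentially numbered frames per video), it can be traversed with a generic loader. A minimal sketch, assuming a root directory that contains one subfolder of `.jpg`/`.png` frames per video (the `list_videos` helper is illustrative, not part of the repo):

```python
import os

def list_videos(frame_root):
    """Map each video name to its sorted list of frame paths."""
    videos = {}
    for vid in sorted(os.listdir(frame_root)):
        vdir = os.path.join(frame_root, vid)
        if not os.path.isdir(vdir):
            continue
        videos[vid] = sorted(
            os.path.join(vdir, f)
            for f in os.listdir(vdir)
            if f.lower().endswith((".jpg", ".png"))
        )
    return videos

# For BL30K, each video should yield 160 frames at 768x512.
```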
You can either use the automatic script `download_bl30k.py` or download it manually below. Note that each segment is about 115GB in size -- 700GB in total. You will need ~1TB of free disk space to run the script (including extraction buffer).
Google Drive is much faster in my experience. Your mileage might vary.
Manual download: [Google Drive] [OneDrive]
Note: Google might block the Google Drive link. You can 1) make a shortcut of the folder to your own Google Drive, and 2) use `rclone` to copy from your own Google Drive (this would not count towards your storage limit).
[UST Mirror] (Reliability not guaranteed, speed throttled, do not use if others are available): <ckcpu1.cse.ust.hk:8080/MiVOS/BL30K_{a-f}.tar> (Replace {a-f} with the part that you need).
MD5 Checksum:

```
35312550b9a75467b60e3b2be2ceac81 BL30K_a.tar
269e2f9ad34766b5f73fa117166c1731 BL30K_b.tar
a3f7c2a62028d0cda555f484200127b9 BL30K_c.tar
e659ed7c4e51f4c06326855f4aba8109 BL30K_d.tar
d704e86c5a6a9e920e5e84996c2e0858 BL30K_e.tar
bf73914d2888ad642bc01be60523caf6 BL30K_f.tar
```
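Given the download sizes involved, it is worth verifying the checksums above before extraction. A minimal sketch using only the standard library (the `md5sum`/`verify` helpers are our own; the digests are copied from the list above):

```python
import hashlib
import os

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, reading in 1MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Published checksums from the list above.
CHECKSUMS = {
    "BL30K_a.tar": "35312550b9a75467b60e3b2be2ceac81",
    "BL30K_b.tar": "269e2f9ad34766b5f73fa117166c1731",
    "BL30K_c.tar": "a3f7c2a62028d0cda555f484200127b9",
    "BL30K_d.tar": "e659ed7c4e51f4c06326855f4aba8109",
    "BL30K_e.tar": "d704e86c5a6a9e920e5e84996c2e0858",
    "BL30K_f.tar": "bf73914d2888ad642bc01be60523caf6",
}

def verify(path):
    """Return True if the file's MD5 matches the published checksum."""
    return md5sum(path) == CHECKSUMS[os.path.basename(path)]
```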
We use the propagation module to run through some data and obtain real outputs to train the fusion module. See the script `generate_fusion.py`.
Or you can download pre-generated fusion data: [Google Drive] [OneDrive]
These commands are to train the fusion module only.
```bash
CUDA_VISIBLE_DEVICES=[a,b] OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port [cccc] --nproc_per_node=2 train.py --id [defg] --stage [h]
```
We implemented training with Distributed Data Parallel (DDP) with two 11GB GPUs. Replace `a,b` with the GPU ids, `cccc` with an unused port number, `defg` with a unique experiment identifier, and `h` with the training stage (0/1).
The model is trained progressively with different stages (0: BL30K; 1: DAVIS). After each stage finishes, we start the next stage by loading the trained weight. A pretrained propagation model is required to train the fusion module.
One concrete example is:

Pre-training on the BL30K dataset:

```bash
CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 0 --id retrain_s0
```

Main training:

```bash
CUDA_VISIBLE_DEVICES=0,1 OMP_NUM_THREADS=4 python -m torch.distributed.launch --master_port 7550 --nproc_per_node=2 train.py --load_prop saves/propagation_model.pth --stage 1 --id retrain_s012 --load_network [path_to_trained_s0.pth]
```
- f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation
- ivs-demo: https://github.com/seoungwugoh/ivs-demo
- deeplab: https://github.com/VainF/DeepLabV3Plus-Pytorch
- STM: https://github.com/seoungwugoh/STM
- BlenderProc: https://github.com/DLR-RM/BlenderProc
Please cite our paper if you find this repo useful!
```bibtex
@inproceedings{cheng2021mivos,
  title={Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion},
  author={Cheng, Ho Kei and Tai, Yu-Wing and Tang, Chi-Keung},
  booktitle={CVPR},
  year={2021}
}
```
And if you want to cite the datasets:
Contact: hkchengrex@gmail.com