Self-Supervised Video Object Segmentation by Motion-Aware Mask Propagation (MAMP)

This repository contains the source code (PyTorch) for our paper:

Self-Supervised Video Object Segmentation by Motion-Aware Mask Propagation

Requirements

The code has been trained and tested with PyTorch 1.9 (1.9.0a0+gitc91c4a0), Python 3.9, and CUDA 11.2.

Other dependencies can be installed by running:

pip install -r requirements.txt
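
As an optional sanity check, the short Python snippet below (a minimal sketch using only standard PyTorch attributes) prints the detected Python, PyTorch, and CUDA versions so you can compare them with the tested versions above:

# Optional environment check against the tested versions
# (PyTorch 1.9, Python 3.9, CUDA 11.2).
import sys
import torch

print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda, "| available:", torch.cuda.is_available())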

Required Data

To evaluate/train MAMP, you will need to download the required datasets.

You can create symbolic links in the datasets folder that point to wherever the datasets were downloaded, following the layout below (a short sketch for creating the links is shown after the layout):

├── datasets
    ├── DEMO
        ├── valid_demo
            ├── Annotations
            ├── JPEGImages       
    ├── DAVIS
        ├── JPEGImages
        ├── Annotations
        ├── ImageSets
    ├── YOUTUBE
        ├── train
        ├── valid
        ├── all (the data is from train_all_frames)
            ├── videos
                ├── consecutive frames
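
For example, the links above can be created with a Python sketch like the one below; the /path/to/downloads/... targets are placeholders, so replace them with wherever you actually downloaded the DAVIS, YouTube-VOS, and demo data:

# Minimal sketch: link pre-downloaded datasets into ./datasets.
# The /path/to/downloads/... targets are placeholders, not real paths.
from pathlib import Path

links = {
    "datasets/DEMO": "/path/to/downloads/DEMO",
    "datasets/DAVIS": "/path/to/downloads/DAVIS",
    "datasets/YOUTUBE": "/path/to/downloads/YOUTUBE",
}

for link, target in links.items():
    link_path = Path(link)
    link_path.parent.mkdir(parents=True, exist_ok=True)
    if not link_path.exists():
        link_path.symlink_to(Path(target), target_is_directory=True)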

Demo

Train

Test and evaluation

Citation

If you find the paper, code, or pre-trained models useful, please cite our paper:

@InProceedings{Miao2022mamp,
  author        = {Bo Miao and Mohammed Bennamoun and Yongsheng Gao and Ajmal Mian},
  title         = {Self-Supervised Video Object Segmentation by Motion-Aware Mask Propagation},
  booktitle     = {IEEE International Conference on Multimedia and Expo (ICME)},
  year          = {2022},
  organization  = {IEEE}
}


Results

Comparison with other methods on DAVIS-2017 (figure in the repository)
Results on DAVIS-2017 and YouTube-VOS (figures in the repository)

Licenses

This project is released under the BSD 3-Clause License. This repo also contains third-party code; it is your responsibility to ensure that you comply with the license here and with the conditions of any dependent licenses.
