Authors: Yujia Sun, Geng Chen, Tao Zhou, Yi Zhang, and Nian Liu.
The training and testing experiments are conducted in PyTorch on a single NVIDIA Tesla P40 GPU with 24 GB of memory.
Configuring your environment (Prerequisites):
1. Create a virtual environment in the terminal: conda create -n C2FNet python=3.6
2. Install the necessary packages: pip install -r requirements.txt
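Before moving on, a quick sanity check can confirm that the key packages are importable in the new environment. The package names below are assumptions based on a typical PyTorch project; the authoritative list is requirements.txt:

```python
import importlib.util

def missing_packages(names):
    """Return the package names that cannot be imported in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    # Assumed dependencies; adjust to match requirements.txt.
    missing = missing_packages(["torch", "torchvision", "numpy"])
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All assumed packages found.")
```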
Downloading necessary data:
1. Download the testing dataset and move it into ./data/TestDataset/; it can be found at this download link (Google Drive).
2. Download the training dataset and move it into ./data/TrainDataset/; it can be found at this download link (Google Drive).
3. Download the pretrained weights and move them into ./checkpoints/C2FNet40/C2FNet-39.pth; they can be found at this download link (Google Drive).
4. Download the Res2Net weights and move them into ./models/res2net50_v1b_26w_4s-3cf99910.pth; they can be found at this download link (Google Drive).
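After downloading, the expected layout can be sanity-checked with a short script; the paths below mirror the ones listed in the download instructions:

```python
import os

# Paths taken from the download instructions above.
EXPECTED_PATHS = [
    "./data/TestDataset",
    "./data/TrainDataset",
    "./checkpoints/C2FNet40/C2FNet-39.pth",
    "./models/res2net50_v1b_26w_4s-3cf99910.pth",
]

def missing_paths(paths):
    """Return the paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

if __name__ == "__main__":
    for p in missing_paths(EXPECTED_PATHS):
        print("missing:", p)
```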
Training Configuration: assign your custom paths via --train_save and --train_path in MyTrain.py, then run it to start training.
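The two training options can be passed on the command line. A minimal argparse sketch, assuming MyTrain.py follows the usual pattern (the defaults here are illustrative, not the repository's actual values):

```python
import argparse

# Flag names come from the README; defaults are illustrative assumptions.
parser = argparse.ArgumentParser(description="C2FNet training options (sketch)")
parser.add_argument("--train_path", type=str, default="./data/TrainDataset",
                    help="directory containing the training images and masks")
parser.add_argument("--train_save", type=str, default="C2FNet40",
                    help="name of the checkpoint directory to save to")
opt = parser.parse_args([])  # parse defaults; drop the [] to read sys.argv
print(opt.train_path, opt.train_save)
```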
Testing Configuration: run MyTest.py to generate the final prediction maps, replacing the trained-model path with your own via --pth_path. One-key evaluation is written in MATLAB code (revised from this link); please follow the instructions in ./eval/main.m and simply run it to generate the evaluation results.
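For reference, the testing step above might be launched as follows. This is a sketch: only MyTest.py, the --pth_path flag, and the checkpoint location come from the README; the invocation itself is an assumption:

```python
import subprocess

# Checkpoint path matches the pretrained-weights location given above.
cmd = ["python", "MyTest.py",
       "--pth_path", "./checkpoints/C2FNet40/C2FNet-39.pth"]

# Uncomment to run from the repository root:
# subprocess.run(cmd, check=True)
```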
If you want to speed up the evaluation on GPU, you can use the efficient tool pysodmetrics (link): pip install pysodmetrics. Assign your custom paths, i.e., method, mask_root, and pred_root in eval.py, then just run eval.py to evaluate the trained model.
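As an illustration of what the evaluation computes, mean absolute error (MAE) is one of the standard camouflaged/salient object detection metrics: it averages the per-pixel difference between a predicted map and its ground-truth mask. A minimal pure-Python sketch (pysodmetrics provides this and several other metrics, with its own API):

```python
def mae(pred, gt):
    """Mean absolute error between a prediction map and a ground-truth mask.

    Both inputs are 2-D lists of floats in [0, 1] with the same shape.
    """
    diffs = [abs(p - g)
             for pred_row, gt_row in zip(pred, gt)
             for p, g in zip(pred_row, gt_row)]
    return sum(diffs) / len(diffs)

# Toy 2x2 example: a confident, mostly correct prediction.
pred = [[0.9, 0.1], [0.8, 0.2]]
gt   = [[1.0, 0.0], [1.0, 0.0]]
print(mae(pred, gt))  # ≈ 0.15
```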
The pre-computed prediction maps can be found at the download link.
Please cite our papers if you find this work useful:
@inproceedings{sun2021c2fnet,
title={Context-aware Cross-level Fusion Network for Camouflaged Object Detection},
author={Sun, Yujia and Chen, Geng and Zhou, Tao and Zhang, Yi and Liu, Nian},
booktitle={IJCAI},
pages={1025--1031},
year={2021}
}
@article{chen2022camouflaged,
title={Camouflaged Object Detection via Context-aware Cross-level Fusion},
author={Chen, Geng and Liu, Si-Jie and Sun, Yu-Jia and Ji, Ge-Peng and Wu, Ya-Feng and Zhou, Tao},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2022},
publisher={IEEE}
}