```bibtex
@inproceedings{OVCOS_ECCV2024,
  title={Open-Vocabulary Camouflaged Object Segmentation},
  author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
  booktitle={ECCV},
  year={2024},
}
```
> [!NOTE]
> Details of the proposed OVCamo dataset can be found in the document for our dataset.
Set the dataset paths in `env/splitted_ovcamo.yaml`:

- `OVCamo_TR_IMAGE_DIR`: Image directory of the training set.
- `OVCamo_TR_MASK_DIR`: Mask directory of the training set.
- `OVCamo_TR_DEPTH_DIR`: Depth map directory of the training set. The depth maps of the training set, which are generated by us, can be downloaded from https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/depth-train-ovcoser.zip.
- `OVCamo_TE_IMAGE_DIR`: Image directory of the testing set.
- `OVCamo_TE_MASK_DIR`: Mask directory of the testing set.
- `OVCamo_CLASS_JSON_PATH`: Path of the JSON file `class_info.json` storing class information of the proposed OVCamo.
- `OVCamo_SAMPLE_JSON_PATH`: Path of the JSON file `sample_info.json` storing sample information of the proposed OVCamo.

Install the dependencies: `pip install -r requirements.txt`.
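The configuration keys above might look like the following in `env/splitted_ovcamo.yaml` (a hypothetical sketch; all paths here are placeholders, not the repo's actual layout):

```yaml
# Hypothetical example of env/splitted_ovcamo.yaml; replace each
# path with the actual location on your machine.
OVCamo_TR_IMAGE_DIR: /data/ovcamo/train/image
OVCamo_TR_MASK_DIR: /data/ovcamo/train/mask
OVCamo_TR_DEPTH_DIR: /data/ovcamo/train/depth
OVCamo_TE_IMAGE_DIR: /data/ovcamo/test/image
OVCamo_TE_MASK_DIR: /data/ovcamo/test/mask
OVCamo_CLASS_JSON_PATH: /data/ovcamo/class_info.json
OVCamo_SAMPLE_JSON_PATH: /data/ovcamo/sample_info.json
```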
`torch` and `torchvision` are listed in the comments of `requirements.txt`.

Training: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser`

Evaluation: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <path of the local .pth file>`, e.g. `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth`.

Predictions will be saved to `<path>/ovcoser-ovcamo-te` and can be evaluated with `python .\evaluate.py --pre <path>/ovcoser-ovcamo-te`.
OVCamo by Youwei Pang, Xiaoqi Zhao, Jiaming Zuo, Lihe Zhang, Huchuan Lu is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International