SysCV / sam-hq

Segment Anything in High Quality [NeurIPS 2023]
https://arxiv.org/abs/2306.01567
Apache License 2.0

Request for evaluation code #113

Open jameslahm opened 11 months ago

jameslahm commented 11 months ago

Thank you for your great work! Would you mind sharing the evaluation code on COCO, YTVIS, HQ-YTVIS, and DAVIS? Thank you!

ymq2017 commented 11 months ago

Hi, we provide COCO evaluation code here. You can put it in the folder sam-hq/eval_coco and test on a single GPU or multiple GPUs.

We adapted the evaluation code from Prompt-Segment-Anything. You can refer to their GitHub page for downloading the pretrained checkpoints (sam-hq/eval_coco/ckpt) and for preparing the environment and data (sam-hq/eval_coco/data).

For example, using 1 or 8 GPUs, you should get a baseline result of AP 48.5.

# 1 GPU
python tools/test.py projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l-baseline.py --eval segm
# 8 GPUs
bash tools/dist_test.sh projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l-baseline.py 8 --eval segm

Changing the config to hq-sam, you will get our result of AP 49.5.

bash tools/dist_test.sh projects/configs/focalnet_dino/focalnet-l-dino_sam-vit-l.py 8 --eval segm

This result is shown in Table 10 of our paper.
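For intuition about what `--eval segm` measures, here is a minimal stdlib-only sketch of the IoU-based matching behind mask AP. This is not the repository's evaluation code (which relies on pycocotools and averages over many IoU thresholds and recall levels); it only illustrates how predicted masks are matched to ground truth at one threshold.

```python
# Illustrative sketch: IoU matching behind segmentation AP.
# Masks are flat 0/1 lists; real evaluators use RLE-encoded masks.

def mask_iou(a, b):
    """Intersection-over-union of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

def precision_at_iou(preds, gts, thr=0.5):
    """Greedily match each prediction to an unused ground-truth mask;
    a match counts as a true positive when IoU >= thr. Returns the
    fraction of predictions that are true positives."""
    matched, tp = set(), 0
    for p in preds:
        best, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            iou = mask_iou(p, g)
            if iou > best:
                best, best_j = iou, j
        if best_j is not None and best >= thr:
            matched.add(best_j)
            tp += 1
    return tp / len(preds) if preds else 0.0
```

Full COCO AP then averages precision over IoU thresholds from 0.5 to 0.95 and over recall levels, which is what the reported 48.5 / 49.5 numbers summarize.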

jameslahm commented 11 months ago

@ymq2017 Thank you! Would you mind sharing the evaluation code on YTVIS, HQ-YTVIS, and DAVIS? Thanks a lot!

tg-Flipped commented 10 months ago

Hi authors, thanks for your great work. Could you provide the pre-trained checkpoint of FocalNet-DINO that you used? I think I downloaded the right checkpoint, but I get a state-dict mismatch error when loading it.

ymq2017 commented 10 months ago

Hi, we use this script for downloading the FocalNet-DINO checkpoint.

# FocalNet-L+DINO
cd ckpt
python -m wget https://projects4jw.blob.core.windows.net/focalnet/release/detection/focalnet_large_fl4_o365_finetuned_on_coco.pth -o focalnet_l_dino.pth
cd ..
python tools/convert_ckpt.py ckpt/focalnet_l_dino.pth ckpt/focalnet_l_dino.pth
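State-dict mismatches like the one above usually come from checkpoint keys carrying a different prefix than the model expects, which is the kind of renaming a conversion step such as tools/convert_ckpt.py performs. The sketch below is a hypothetical illustration of that pattern, not the actual script's mapping rules:

```python
# Hypothetical sketch of checkpoint key remapping; the concrete
# old_prefix/new_prefix pair used by the real convert script is an
# assumption here, not taken from the repository.

def remap_keys(state_dict, old_prefix, new_prefix):
    """Return a new dict in which every key starting with old_prefix
    has that prefix replaced by new_prefix; other keys are unchanged."""
    out = {}
    for k, v in state_dict.items():
        if k.startswith(old_prefix):
            k = new_prefix + k[len(old_prefix):]
        out[k] = v
    return out
```

If loading still fails after conversion, comparing `state_dict.keys()` against the model's expected keys is the quickest way to see which prefixes differ.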

Vickeyhw commented 10 months ago

@ymq2017 How much GPU memory is needed for evaluation? I tried to evaluate using 'projects/configs/hdetr/swin-t-hdetr_sam-vit-b.py', but ran out of memory on a 10 GB 2080 Ti.
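One common way to curb peak memory in prompt-based evaluation, independent of this repo's configs, is to feed box prompts through the mask decoder in small chunks rather than all at once. The sketch below only shows the chunking pattern with a placeholder `forward` callable; whether the eval scripts here expose such a knob is not confirmed:

```python
# Generic chunked-inference pattern: peak memory scales with chunk_size
# instead of the total number of prompts. `forward` stands in for a
# per-batch model call (e.g. a mask decoder forward pass).

def run_in_chunks(prompts, forward, chunk_size=8):
    """Apply `forward` to prompts in batches of chunk_size and
    concatenate the per-batch results."""
    outputs = []
    for i in range(0, len(prompts), chunk_size):
        outputs.extend(forward(prompts[i:i + chunk_size]))
    return outputs
```

Lowering the per-image prompt batch this way (or switching to a smaller backbone/image size) is the usual remedy when a 10 GB card runs out of memory.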

awmooo commented 2 months ago

This checkpoint is no longer available; the download script fails with `urllib.error.HTTPError: HTTP Error 409: Public access is not permitted on this storage account.` The same issue is reported in https://github.com/RockeyCoss/Prompt-Segment-Anything/issues/10. Could you provide the checkpoint file via another link? Thanks.
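Until an official re-upload exists, a download helper can at least fall back across candidate URLs instead of dying on the first HTTP 409. This is a hedged sketch: the mirror URLs are placeholders, and no working mirror of this checkpoint is being claimed here.

```python
import urllib.error

def fetch_first_available(urls, fetch):
    """Try each URL in order with `fetch` (e.g. urllib.request.urlopen);
    return (url, result) for the first that succeeds, or re-raise the
    last HTTPError if every URL fails."""
    last_err = None
    for url in urls:
        try:
            return url, fetch(url)
        except urllib.error.HTTPError as e:
            last_err = e
    raise last_err
```

With the Azure blob URL first and any community mirror second, the helper returns whichever source is still publicly readable.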