SHI-Labs / OneFormer

OneFormer: One Transformer to Rule Universal Image Segmentation, arXiv 2022 / CVPR 2023
https://praeclarumjj3.github.io/oneformer
MIT License

Calculate and Compare Panoptic Quality With Paper Results #43

Closed 007IsWorking closed 1 year ago

007IsWorking commented 1 year ago

Hi! I want to compare OneFormer's panoptic quality results with the numbers reported in the paper for the COCO dataset. My understanding is that I first need to convert OneFormer's output to the COCO panoptic .json format and then use panopticapi to evaluate Panoptic Quality. I am not sure whether this is the correct approach; I tried it but could not get it to work. Could you please tell me the exact steps and point me to the relevant links for this task? I am using OneFormer's Colab. Thank you in anticipation.

praeclarumjj3 commented 1 year ago

Hi @007IsWorking, thanks for your interest in our work.

We already provide instructions for the evaluation process. Please run the following command on Colab to evaluate the panoptic task:

python train_net.py --dist-url 'tcp://127.0.0.1:50164' \
    --num-gpus 1 \
    --config-file <path-to-config> \
    --eval-only MODEL.IS_TRAIN False MODEL.WEIGHTS <path-to-checkpoint> \
    MODEL.TEST.TASK panoptic
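
If you specifically want to follow the panopticapi route you described, you can also run its evaluation directly on exported predictions. A minimal sketch, assuming you have already written your predictions to a COCO-style panoptic JSON plus a folder of PNG segment maps (all paths and file names below are placeholders for your own files):

# Minimal sketch: evaluate exported panoptic predictions with panopticapi.
# All paths below are placeholders; point them at your ground-truth
# annotations and at the predictions you export from OneFormer.
from panopticapi.evaluation import pq_compute

results = pq_compute(
    gt_json_file="annotations/panoptic_val2017.json",  # COCO ground-truth panoptic JSON
    pred_json_file="predictions/panoptic_preds.json",  # your exported predictions (placeholder)
    gt_folder="annotations/panoptic_val2017",          # ground-truth PNG segment maps
    pred_folder="predictions/panoptic_preds",          # predicted PNG segment maps (placeholder)
)
print(results["All"]["pq"], results["Things"]["pq"], results["Stuff"]["pq"])

The PQ / SQ / RQ numbers it prints for All, Things, and Stuff are the ones to compare against the paper's tables.
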
rbavery commented 1 year ago

Do you have recommendations for using this codebase to finetune OneFormer on a custom dataset, versus the Hugging Face model?

praeclarumjj3 commented 1 year ago

Hi @rbavery. You can use whichever codebase you are more comfortable with. We have never trained a model with the HF transformers codebase. Still, the performance should be similar to our original codebase, as we tested every component while integrating OneFormer into the transformers library. You may also look at the guide for training on custom datasets with Detectron2.
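
If you go with Detectron2, the first step from that guide is registering your dataset so the training config can refer to it by name. A minimal sketch, assuming a loader function that returns records in Detectron2's standard dict format (the dataset name, class names, and loader below are placeholders):

# Minimal sketch of registering a custom dataset with Detectron2.
# "my_dataset_train", the class names, and get_my_dataset_dicts are
# placeholders for your own data.
from detectron2.data import DatasetCatalog, MetadataCatalog

def get_my_dataset_dicts():
    # Return a list of dicts in Detectron2's standard dataset format,
    # e.g. {"file_name": ..., "height": ..., "width": ..., "annotations": [...]}.
    return []

DatasetCatalog.register("my_dataset_train", get_my_dataset_dicts)
MetadataCatalog.get("my_dataset_train").set(
    thing_classes=["class_a", "class_b"],  # placeholder category names
)

After registering, you would point DATASETS.TRAIN / DATASETS.TEST in your OneFormer config at the registered names and train with train_net.py as usual.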

praeclarumjj3 commented 1 year ago

Closing this; feel free to re-open if you have more questions.