Closed 007IsWorking closed 1 year ago
Hi @007IsWorking, thanks for your interest in our work.
We already provide instructions for the evaluation process. Please run the following command to evaluate the panoptic task on Colab:
python train_net.py --dist-url 'tcp://127.0.0.1:50164' \
--num-gpus 1 \
--config-file <path-to-config> \
--eval-only MODEL.IS_TRAIN False MODEL.WEIGHTS <path-to-checkpoint> \
MODEL.TEST.TASK panoptic
Do you have recommendations for using this codebase to fine-tune OneFormer on a custom dataset, versus the Hugging Face model?
Hi @rbavery, you can use whichever you find more comfortable. We have never trained a model with the HF transformers codebase; still, the performance should be similar to our original codebase, as we tested every component while integrating OneFormer into the transformers library. You may also look at Detectron2's guide for training on custom datasets.
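For training on a custom dataset with Detectron2, the usual first step is to write a loader that returns records in Detectron2's standard dataset-dict format. The sketch below is a minimal, hedged example: the file paths, dataset name, and category labels are hypothetical placeholders, not part of OneFormer's code.

```python
import os

# Minimal sketch of a Detectron2-style dataset loader. All paths and
# category ids here are hypothetical placeholders for illustration.
def get_custom_dicts(image_dir="datasets/custom/images"):
    """Return a list of records in Detectron2's standard dataset-dict format."""
    records = []
    # In a real loader you would iterate over your own annotation files here.
    record = {
        "file_name": os.path.join(image_dir, "0001.jpg"),  # hypothetical path
        "image_id": 0,
        "height": 480,
        "width": 640,
        # One entry per instance; "segmentation" is a list of polygons,
        # each polygon a flat [x1, y1, x2, y2, ...] list.
        "annotations": [
            {
                "bbox": [100.0, 120.0, 200.0, 260.0],
                "bbox_mode": 0,  # BoxMode.XYXY_ABS in detectron2.structures
                "segmentation": [
                    [100.0, 120.0, 200.0, 120.0, 200.0, 260.0, 100.0, 260.0]
                ],
                "category_id": 0,
            }
        ],
    }
    records.append(record)
    return records

# With Detectron2 installed, the loader would then be registered like this:
# from detectron2.data import DatasetCatalog, MetadataCatalog
# DatasetCatalog.register("custom_train", get_custom_dicts)
# MetadataCatalog.get("custom_train").set(thing_classes=["my_class"])
```

Once registered, the dataset name can be referenced from the training config's `DATASETS.TRAIN` field.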
Closing this; feel free to re-open if you have more questions.
Hi! I plan to compare OneFormer's panoptic quality results with those reported in the paper on the COCO dataset. My understanding is that I first need to convert OneFormer's output to the COCO panoptic .json format and then use panopticapi to evaluate Panoptic Quality. I am not sure whether this is correct; I tried but could not get it to work. Could you please point me to the exact steps and the relevant links for this task? I am using OneFormer's Colab notebook. Thanks in advance.
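The conversion step described above can be sketched as follows. In the COCO panoptic format, each image's segmentation is stored as an RGB PNG where a pixel's segment id is packed as id = R + 256*G + 256*256*B, plus a per-image entry in the predictions .json. This is a hedged sketch under those assumptions; panopticapi itself provides `id2rgb`/`rgb2id` in `panopticapi.utils`, and the `id_to_category` mapping below is a hypothetical placeholder you would build from the model's output.

```python
import numpy as np

def id2rgb(id_map):
    """Encode a 2-D array of segment ids into an (H, W, 3) uint8 image."""
    rgb = np.zeros(id_map.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = id_map % 256
    rgb[..., 1] = (id_map // 256) % 256
    rgb[..., 2] = (id_map // (256 * 256)) % 256
    return rgb

def rgb2id(rgb):
    """Inverse of id2rgb: recover segment ids from an RGB panoptic PNG."""
    rgb = rgb.astype(np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + 256 * 256 * rgb[..., 2]

def make_annotation(image_id, file_name, id_map, id_to_category):
    """Build one per-image entry for the predictions .json.

    id_to_category maps each segment id in id_map to its COCO category id
    (hypothetical: in practice it comes from the model's predictions).
    """
    segments_info = []
    for seg_id in np.unique(id_map):
        if seg_id == 0:  # 0 conventionally marks void/unlabeled pixels
            continue
        segments_info.append({
            "id": int(seg_id),
            "category_id": int(id_to_category[int(seg_id)]),
        })
    return {"image_id": image_id, "file_name": file_name,
            "segments_info": segments_info}
```

After writing one encoded PNG per image and collecting the annotation entries into a .json file, `pq_compute` from `panopticapi.evaluation` can compare the predictions folder and .json against the COCO panoptic ground truth to report PQ.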