SHI-Labs / OneFormer

OneFormer: One Transformer to Rule Universal Image Segmentation, arXiv 2022 / CVPR 2023
https://praeclarumjj3.github.io/oneformer
MIT License

Evaluating ade20k predictions against ground truth #62

Closed maryamkqamar closed 1 year ago

maryamkqamar commented 1 year ago

Hi, thank you for the code. I wonder whether there is a way to evaluate the saved predictions?

praeclarumjj3 commented 1 year ago

Hi @maryamkqamar, we already share the command to evaluate OneFormer. Have you tried running that?

maryamkqamar commented 1 year ago

Thanks for getting back so quickly. Yes, I have used that command and it works. What I meant to ask is: suppose I have some saved predictions (like the ones in the output folder, e.g. coco_instance_predictions.pth — I assume these are the model predictions from which the evaluation command computes the metric results). Which function can I call to compute the metrics for those against the ground truth? I want to do this for semantic, instance, and panoptic segmentation.

praeclarumjj3 commented 1 year ago

Hi @maryamkqamar, the evaluators already save the predictions to an output_dir (if that parameter is set) before calculating the metric scores. If you want to use those saved predictions for calculating the metric scores, you can write a custom evaluator that loads the results from a saved predictions file and then calculates the metric scores.

For example, if task=instance, the evaluator for the ADE20K dataset is InstanceSegEvaluator. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/train_net.py#L145

The results are saved to a JSON file. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/oneformer/evaluation/instance_evaluation.py#L75-L80
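To illustrate, a saved COCO-style results file is just a list of per-instance records (image_id, category_id, score, and a bbox or segmentation). The snippet below is a minimal sketch that writes a tiny sample file and reads it back; the file name and the sample record are illustrative, not taken from an actual OneFormer run.

```python
# Sketch: the shape of a COCO-format results JSON, written and read back
# so the snippet is self-contained. A real file from the evaluator would
# contain one such record per predicted instance.
import json
import os
import tempfile

sample = [
    {"image_id": 1, "category_id": 7, "score": 0.92,
     "bbox": [10.0, 20.0, 30.0, 40.0]},  # XYWH, as in COCO results
]

path = os.path.join(tempfile.gettempdir(), "coco_instances_results.json")
with open(path, "w") as f:
    json.dump(sample, f)

# A custom evaluator would load a file like this instead of re-running inference.
with open(path) as f:
    predictions = json.load(f)
```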

The results are passed as an argument to an evaluation function to calculate the metric scores. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/oneformer/evaluation/instance_evaluation.py#L93-L110

In short, you can define your custom evaluator classes accordingly for all three tasks by looking at the base evaluator classes.
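As a toy illustration of the metric-computation step such a custom evaluator would perform, here is a minimal pure-Python sketch of per-class IoU and mIoU for semantic segmentation over flattened label maps. It is a stand-in for the actual detectron2/OneFormer evaluator logic, and the tiny label maps at the bottom are made up for demonstration.

```python
import math

def semantic_iou(pred, gt, num_classes, ignore_label=255):
    """Per-class IoU between flattened prediction and ground-truth label maps."""
    inter = [0] * num_classes
    union = [0] * num_classes
    for p, g in zip(pred, gt):
        if g == ignore_label:       # skip pixels with no ground-truth label
            continue
        if p == g:
            inter[g] += 1
            union[g] += 1
        else:                       # mismatch counts toward both classes' unions
            union[p] += 1
            union[g] += 1
    return [i / u if u else math.nan for i, u in zip(inter, union)]

# Illustrative 3x3 label maps, flattened; 255 marks an ignored pixel.
pred = [0, 0, 1, 1, 1, 2, 2, 2, 2]
gt   = [0, 0, 1, 1, 2, 2, 2, 2, 255]

ious = semantic_iou(pred, gt, num_classes=3)
miou = sum(ious) / len(ious)
```

A real evaluator would build these label maps from the saved prediction files and the dataset's ground-truth annotations rather than from hard-coded lists.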

maryamkqamar commented 1 year ago

Ah thank you for the detailed answer, I will check those files.