maryamkqamar closed this issue 1 year ago
Hi @maryamkqamar, we already shared the command to evaluate OneFormer. Have you tried running it?
Thanks for getting back so quickly. Yes, I have used that command and it works. What I meant to ask is: say I have some saved predictions (like the ones in the output folder, e.g. coco_instance_predictions.pth, which I assume are the model predictions that the evaluation command scores). Which function can I call to compute just the metrics for those predictions against the ground truth? I want to do this for all of semantic, instance, and panoptic segmentation.
Hi @maryamkqamar, the evaluators already save the predictions to an output_dir (if that parameter is set) before calculating the metric scores. If you want to reuse those saved predictions, you can write a custom evaluator that loads the results from the saved predictions file and then calculates the metric scores.
For example, if task=instance, the evaluator is InstanceSegEvaluator for the ADE20K dataset. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/train_net.py#L145
The results are saved to a JSON file. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/oneformer/evaluation/instance_evaluation.py#L75-L80
The results are passed as an argument to an evaluation function to calculate the metric scores. https://github.com/SHI-Labs/OneFormer/blob/7145cdaeea50968239055bcb0618d32bd306590f/oneformer/evaluation/instance_evaluation.py#L93-L110
In short, you can define custom evaluator classes along these lines for all three tasks by looking at the base evaluator classes.
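The pattern described above (load saved predictions, skip inference, score against ground truth) can be sketched with a minimal, self-contained example. Note this is plain Python with a toy accuracy metric, not OneFormer's actual InstanceSegEvaluator, which builds on detectron2's COCO evaluation; the class and file names here are illustrative only:

```python
import json
import os
import tempfile


class SavedPredictionsEvaluator:
    """Illustrative custom evaluator: instead of running model inference,
    it loads predictions from a saved file (standing in for something
    like coco_instance_predictions.pth) and scores them."""

    def __init__(self, predictions_path):
        # Load the previously saved predictions from disk.
        with open(predictions_path) as f:
            self._predictions = json.load(f)

    def evaluate(self, ground_truth):
        # Toy metric: fraction of images whose predicted label matches
        # the ground-truth label (a stand-in for AP / mIoU / PQ).
        correct = sum(
            1 for p in self._predictions
            if ground_truth.get(p["image_id"]) == p["label"]
        )
        return {"accuracy": correct / len(self._predictions)}


# Usage: save some fake predictions to a file, then evaluate them.
preds = [{"image_id": "a", "label": 1}, {"image_id": "b", "label": 2}]
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "predictions.json")
    with open(path, "w") as f:
        json.dump(preds, f)
    scores = SavedPredictionsEvaluator(path).evaluate({"a": 1, "b": 0})

print(scores)  # {'accuracy': 0.5}
```

In the real OneFormer/detectron2 setting, the analogous step is populating the evaluator's internal predictions from the saved file and then invoking its metric-computation method, rather than accumulating predictions during inference.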
Ah thank you for the detailed answer, I will check those files.
Hi, thank you for the code. I was wondering, is there a way to evaluate the saved predictions?