Open qqlu opened 6 years ago
I have solved those two problems.
However, I find that if I evaluate the ground truth of the validation dataset against itself with this code, I only get 0.45 mAP (0.5:0.95), and I am sure there is no problem in my code.
Hi,
I am unable to generate text files in the format required for the predictions. I have also generated COCO-style predictions in JSON format. Could you shed some light on how you managed to do this?
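For what it's worth, the instance-level Cityscapes evaluation expects one plain-text file per image, where each line names a binary mask PNG (relative path), a label ID, and a confidence score. Below is a minimal sketch of converting COCO-style prediction records into that layout; the function name `write_cityscapes_txt` and the assumption that the mask PNGs are already written to disk and that `category_id` already holds Cityscapes label IDs are mine, not from the thread.

```python
import os
from collections import defaultdict

def write_cityscapes_txt(predictions, mask_paths, out_dir):
    """Group COCO-style predictions by image and write one txt file per image.

    Each output line has the form expected by the Cityscapes instance-level
    evaluation:  <relative mask path> <labelID> <confidence>

    predictions : list of dicts with "image_id", "category_id", "score"
                  (category_id is assumed to already be a Cityscapes labelID)
    mask_paths  : mask_paths[i] is the relative path of the binary mask PNG
                  that was already written for predictions[i]
    out_dir     : directory that receives one <image_id>.txt per image
    """
    os.makedirs(out_dir, exist_ok=True)
    per_image = defaultdict(list)
    for pred, mask_path in zip(predictions, mask_paths):
        per_image[pred["image_id"]].append(
            (mask_path, pred["category_id"], pred["score"]))
    for image_id, rows in per_image.items():
        txt_path = os.path.join(out_dir, f"{image_id}.txt")
        with open(txt_path, "w") as f:
            for mask_path, label_id, score in rows:
                f.write(f"{mask_path} {label_id} {score:.6f}\n")
    return sorted(per_image)
```

The per-image file names here simply reuse the image id; in practice you would name them after the Cityscapes image stem so the evaluator can match them to the ground truth.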
Hi, I work on instance segmentation, but I find it hard to run the evaluation with your code, even though my pipeline works fine with COCO-style evaluation.
Firstly, how can I get gtFine_instanceids.png, which is used at line 658 of evaluate_instance_segmentation.py? createLabels.py is only able to produce gtFine_labelids.png.
Secondly, what is args.gtInstancesFile at line 73 of evaluate_instance_segmentation.py?
Thanks very much.
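On the first question: the Cityscapes *_instanceIds.png files use a well-known encoding, where a pixel belonging to an individual object stores labelID * 1000 + a per-image instance index, and background/stuff pixels store the plain labelID. A minimal sketch of that encoding, assuming you already have a per-pixel label map and a per-pixel instance index (with a negative index marking non-instance pixels; both names are my own):

```python
import numpy as np

def encode_instance_ids(label_map, instance_index):
    """Combine a per-pixel labelID map and a per-pixel instance index into
    the Cityscapes instanceIds encoding.

    Instance pixels (instance_index >= 0): id = labelID * 1000 + index.
    Stuff/background pixels (instance_index < 0): id = labelID unchanged.
    """
    label_map = np.asarray(label_map, dtype=np.int32)
    instance_index = np.asarray(instance_index, dtype=np.int32)
    ids = np.where(instance_index >= 0,
                   label_map * 1000 + instance_index,
                   label_map)
    return ids
```

If I remember correctly, the official way to produce these files from the polygon JSON annotations is the preparation scripts shipped with cityscapesScripts (json2instanceImg.py), so it may be easier to run those than to re-implement the encoding.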