Closed ashnair1 closed 5 years ago
This was due to the absence of certain classes in my validation data and the extremely low number of occurrences of others. The most likely explanation is that, for these classes, both TP and FP were zero, so computing precision (TP / (TP + FP)) divided by zero and produced nan.
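The failure mode described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the actual test_net.py code: the per-class counts are made up, and the third class stands in for one that never appears in the validation set.

```python
import numpy as np

# Hypothetical per-class detection counts; class index 2 never
# appears in the validation data, so it accumulates neither
# true positives nor false positives.
tp = np.array([90.0, 12.0, 0.0])  # true positives per class
fp = np.array([10.0, 3.0, 0.0])   # false positives per class

with np.errstate(invalid="ignore"):
    precision = tp / (tp + fp)    # 0/0 -> nan for the absent class

print(precision)                  # [0.9 0.8 nan]

# A plain mean propagates the nan into the overall score, which is
# why most classes can look fine while the mAP prints as nan.
print(np.mean(precision))         # nan
print(np.nanmean(precision))      # 0.85 (ignores undefined classes)
```

One common workaround is to either drop classes with no ground-truth instances from the evaluation, or average with `np.nanmean` so undefined per-class scores do not poison the overall metric.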
I tried out the Mask RCNN framework and observed pretty good results. But when I calculate the mAP score via the test_net.py script, most of my classes output nan. Would you happen to know why this is the case? The detections the model produces on the test images look pretty good, hence my confusion about this error.
Edit: To verify whether this was an issue with the code, I tried another dataset of mine that has only one class excluding background. Evaluation works fine there, as can be seen below:
The issue is that I can't pin down the cause of the problem. If it were the dataset, it should have shown up in the detection results on the test images; and since training and testing the same code on another dataset works, the problem is probably not in the code either. I would appreciate any insight into this problem.
System information