I trained the model on my custom dataset (2 classes, including background) using the config file retinanet_R-50-FPN_1X.yaml, and got the following test results:
INFO json_dataset_evaluator.py: 222: ~~ Mean and per-category AP @ IoU=[0.50,0.95] ~~
INFO json_dataset_evaluator.py: 223: 30.4
INFO json_dataset_evaluator.py: 231: 30.4
INFO json_dataset_evaluator.py: 232: ~~ Summary metrics ~~
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.304
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.804
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.100
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.319
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.286
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.377
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.492
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.497
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.510
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.468
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Does anyone have an idea why the AP is -1 when area=large? It doesn't happen on every run. I'm trying to understand the theory behind it; if you know anything about this, please help.
Hey guys, have you beaten this problem (AP = -1)?
I ran into a similar but worse issue: AP, AP50, AP75, APs, APm, and APl are all -1.
Any ideas?
Thanks a lot!