facebookresearch / maskrcnn-benchmark

Fast, modular reference implementation of Instance Segmentation and Object Detection algorithms in PyTorch.
MIT License

about evaluation #498

Open alexwq100 opened 5 years ago

alexwq100 commented 5 years ago

❓ Questions and Help

Hi @fmassa, I have a question. For a detection task, I trained a model on abnormal images that contain many of the objects I want to localize. Now I want to test the trained model on normal images that contain none of the objects the model was trained for; in other words, any object detected in these normal images is a false positive. I made a COCO-style dataset in which all images are normal images, so the "annotations" list has no elements (see the attached image). The result reported by test_net.py is:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000

All the results are -1.000.

Is this right? For the AP, shouldn't it be zero?
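A minimal COCO-style annotation file for such negative-only images would look roughly like the sketch below; the file names, image sizes, and category are placeholders, and the key point is that the "annotations" list is left empty on purpose.

```python
import json

# Sketch of a COCO-style annotation file for "normal" (negative-only) images:
# "images" and "categories" are filled in as usual, but "annotations" is empty
# because there is nothing to label. All values here are placeholders.
dataset = {
    "images": [
        {"id": 1, "file_name": "normal_0001.jpg", "width": 1024, "height": 768},
        {"id": 2, "file_name": "normal_0002.jpg", "width": 1024, "height": 768},
    ],
    "annotations": [],  # no ground-truth boxes on purpose
    "categories": [
        {"id": 1, "name": "defect", "supercategory": "defect"},
    ],
}

with open("instances_normal.json", "w") as f:
    json.dump(dataset, f)
```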

LeviViana commented 5 years ago

These results come from the COCO API. You may find the answer to your question here: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L52

It is set up so that if there are no ground-truth boxes, the result defaults to -1.
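Paraphrasing the pycocotools logic (the array shape below is only for illustration): the precision/recall arrays are initialized to -1 and only filled in for settings that actually have ground-truth instances, and the summary averages only over valid entries, so with no annotations at all everything stays at -1:

```python
import numpy as np

# Paraphrased from pycocotools/cocoeval.py (accumulate + _summarize):
# the precision array starts at -1 and is only overwritten for
# IoU/category/area settings that have ground-truth instances.
precision = -np.ones((10, 101, 1, 4, 3))  # stays all -1 when there are no GT boxes

valid = precision[precision > -1]
mean_ap = -1 if valid.size == 0 else np.mean(valid)
print(mean_ap)  # -> -1, which is why the summary prints -1.000 everywhere
```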

This repo relies on some of the original COCO API tools, as you can see here: https://github.com/facebookresearch/maskrcnn-benchmark/blob/13b4f82efd953276b24ce01f0fd1cd08f94fbaf8/maskrcnn_benchmark/data/datasets/evaluation/coco/coco_eval.py#L319
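Since AP/AR are undefined without ground-truth boxes, one practical option is to count false positives directly from the model's detections on the negative-only images. Below is a minimal sketch, assuming the detections were exported in the standard COCO results format (a list of {"image_id", "category_id", "bbox", "score"} dicts); the file name "bbox.json" and the score threshold are placeholders.

```python
import json
from collections import defaultdict

# On negative-only images every detection above the chosen score threshold
# is a false positive, so we can simply count them per image and in total.
SCORE_THRESHOLD = 0.5  # placeholder; pick the threshold you would deploy with

with open("bbox.json") as f:  # placeholder path to COCO-format detections
    detections = json.load(f)

false_positives = [d for d in detections if d["score"] >= SCORE_THRESHOLD]

per_image = defaultdict(int)
for d in false_positives:
    per_image[d["image_id"]] += 1

print(f"{len(false_positives)} false positives above score {SCORE_THRESHOLD}")
print(f"{len(per_image)} images contain at least one false positive")
```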