princeton-vl / CornerNet-Lite


wrong with test #76

Open · wudi00 opened 5 years ago

wudi00 commented 5 years ago

I used `python evaluate.py CornerNet_Saccade --testiter 500000 --split testing` to test on the COCO dataset, but the result looks like this:

Accumulating evaluation results...
DONE (t=12.17s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Why? Did I do anything wrong with the steps?

SeeeeShiwei commented 5 years ago

I met this problem too.

SeeeeShiwei commented 5 years ago

Anyone solved it?

heilaw commented 5 years ago

COCO does not provide annotations for the test set. That's why the COCO evaluation API returns -1 when you evaluate the detector on the test set. You need to submit the result JSON file to the COCO evaluation server. The JSON file can be found in results/<config>/<iter>/testing.
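
For reference, here is a minimal sketch of what local COCO evaluation looks like on a split that does have annotations (e.g. validation); the annotation and result paths are placeholders, not the exact layout this repo uses:

```python
# Minimal sketch of local COCO evaluation with pycocotools.
# This only works for splits that ship with ground-truth annotations
# (e.g. a validation split); the test split has none, which is why
# every metric comes back as -1.000.
# File paths below are illustrative placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("results.json")             # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table shown above
```

For the test split, skip the local evaluation entirely and upload the results JSON to the COCO evaluation server instead.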

wudi00 commented 5 years ago

@heilaw OK, thanks. Now I want to train on my own dataset. I have converted the data to the COCO format; which files and parameters should I modify?
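
For context, a converted annotation file needs to follow the standard COCO detection format, which looks roughly like the sketch below; all ids, file names, sizes, and class names are made-up placeholders:

```python
# Minimal sketch of a COCO-style detection annotation file.
# Everything here is an illustrative placeholder, not a real dataset.
import json

coco_format = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100, 120, 50, 80],  # [x, y, width, height] in pixels
            "area": 50 * 80,
            "iscrowd": 0,
        }
    ],
    "categories": [
        {"id": 1, "name": "my_class", "supercategory": "none"}
    ],
}

with open("instances_train.json", "w") as f:
    json.dump(coco_format, f)
```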

float4189 commented 5 years ago

> COCO does not provide annotations for the test set. That's why the COCO evaluation API returns -1 when you evaluate the detector on the test set. You need to submit the result JSON file to the COCO evaluation server. The JSON file can be found in results/<config>/<iter>/testing.

I still don't understand very well. Can you say something more specific? Thank you very much.

float4189 commented 5 years ago

> I used `python evaluate.py CornerNet_Saccade --testiter 500000 --split testing` to test on the COCO dataset, but the result looks like this: [every AP/AR metric is -1.000] Why? Did I do anything wrong with the steps?

Can you tell me how to do it?

float4189 commented 5 years ago

> Anyone solved it?

Have you solved it?