Closed: anhminh3105 closed this issue 5 years ago
For the test set, the COCO dataset does not provide ground truth. You have to evaluate your results using CodaLab. Please read http://cocodataset.org/#guidelines
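As an aside, AP/AR values of -1 are what the COCO evaluator reports when it finds no ground-truth annotations matching the image IDs in the results file, which is exactly what happens when test2017 detections are scored against local annotations. A minimal stdlib-only sketch of that diagnosis, using toy data in place of the real `human_detections.json` and annotation files (the function name and sample records are illustrative, not from the repository):

```python
import json

def overlapping_image_ids(results, annotations):
    """Return the image ids present in both the detection results and the GT annotations."""
    det_ids = {d["image_id"] for d in results}
    gt_ids = {img["id"] for img in annotations["images"]}
    return det_ids & gt_ids

# Toy stand-ins for a detections JSON and a COCO annotation file.
detections = [{"image_id": 42, "category_id": 1, "bbox": [0, 0, 10, 10], "score": 0.9}]
gt = {"images": [{"id": 7}]}  # image 42 is absent: no overlap, so every metric comes out -1

print(overlapping_image_ids(detections, gt))  # empty set -> nothing for the evaluator to score
```

If the overlap is empty for your real files, the detections belong to a split (test2017/test-dev2017) whose annotations you do not have locally, and CodaLab is the only way to get numbers.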
Thanks for the guide.
I ran the test with the COCO val2017 dataset and reproduced exactly the same results as you did.
Next, I will try uploading the test2017 outputs to CodaLab to get the results.
Thank you.
I ran the `test.py` programme on the COCO test2017 dataset with `human_detections_text-dev2017.json` renamed to `human_detections.json`. However, the output results for all the Average Precision and Average Recall metrics are -1. Here is the snapshot:

I am unsure whether the problem is that `human_detections_text-dev2017.json` is not the right file to use. Could you please help? Thanks in advance.