Also, I get different results from evaluate_object_detection.py:
python examples/evaluate_object_detection.py
- Evaluating Object Detection results...
loading annotations into memory...
Done (t=0.16s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.02s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=7.72s).
Accumulating evaluation results...
DONE (t=1.42s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.014
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.014
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.014
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.004
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.013
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.010
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.016
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.010
Metric Value
AP 0.013785
AP50 0.014216
AP75 0.013975
APs 0.004125
APm 0.013374
APl 0.009618
AP: 0.014 | AP_50: 0.014 | AP_75: 0.014 | AP_s: 0.004 | AP_m: 0.013 | AP_l: 0.010
- Evaluation Done...
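The log above matches the standard pycocotools bbox evaluation flow. As a point of reference, here is a minimal sketch of that pipeline; the file paths are placeholders, not the actual paths used inside examples/evaluate_object_detection.py:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths; the real files are configured by the example script.
coco_gt = COCO("gt_annotations.json")                 # "loading annotations into memory..."
coco_dt = coco_gt.loadRes("detection_results.json")   # "Loading and preparing results..."

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # "Running per image evaluation..."
coco_eval.accumulate()  # "Accumulating evaluation results..."
coco_eval.summarize()   # prints the AP/AR table shown above

# coco_eval.stats holds the 12 summary values in order:
# AP, AP50, AP75, APs, APm, APl, AR@1, AR@10, AR@100, ARs, ARm, ARl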
Could you provide full results?
Thank you for your interest in the HOCap dataset!
The evaluation results may differ from the example listed in the README file because only a limited set of results is provided in the demo files.
Currently, the full results are not accessible as they exceed upload size limits. However, as long as your testing results are saved in the same format shown in the demo file, you can calculate the evaluation results using the example scripts.
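For reference, a minimal sketch of how COCO-style bbox detection results are usually saved so that loadRes can read them; the field values and the file name my_results.json below are hypothetical, so please confirm the exact keys against the demo file shipped with HOCap:

import json

# Hypothetical detections, one dict per predicted box:
# image_id, category_id, bbox as [x, y, width, height], and a confidence score.
results = [
    {"image_id": 0, "category_id": 1, "bbox": [100.0, 50.0, 80.0, 120.0], "score": 0.92},
    {"image_id": 0, "category_id": 3, "bbox": [210.5, 40.0, 60.0, 95.0], "score": 0.75},
]

with open("my_results.json", "w") as f:
    json.dump(results, f)

A file in this format can then be passed to the evaluation in place of the demo results.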
Hi, I tested evaluate_object_pose.py and got these results instead of the example ones. Could you check your results, scripts, or JSON results?