IRVLUTD / HO-Cap

A Python package that provides evaluation and visualization tools for the HOCap dataset
https://irvlutd.github.io/HOCap
GNU General Public License v3.0

No file evaluate_novel_object_pose.py #10

Closed (taeyeopl closed this issue 1 week ago)

taeyeopl commented 2 weeks ago

Hi, I tested evaluate_object_pose.py and got the results below instead of the ones shown in the example.

Could you check your results, scripts, or the provided JSON result files?

Object_ID  ADD-S_err (cm)  ADD_err (cm)  ADD-S_AUC (%)  ADD_AUC (%)
    G01_1        0.405351      0.641204      95.968094    93.623674
    G01_2        0.401109      0.694896      96.054785    93.199231
    G01_3        0.430676      1.075575      95.758337    89.407461
    G01_4        0.477267      1.670412      95.284479    83.752379
  Average        0.477267      1.670412      95.766424    89.995686
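
For reference (and not necessarily how evaluate_object_pose.py implements it), ADD averages the distances between corresponding model points under the predicted and ground-truth poses, while ADD-S uses the distance to the closest transformed point, which tolerates object symmetries. A minimal sketch, assuming model points and 4x4 pose matrices:

```python
import numpy as np
from scipy.spatial import cKDTree

def add_adds_errors(points, pose_pred, pose_gt):
    """Return (ADD, ADD-S) errors in the same units as `points`.

    points:    (N, 3) object model points.
    pose_pred: (4, 4) predicted object-to-camera transform.
    pose_gt:   (4, 4) ground-truth object-to-camera transform.
    """
    # Transform the model points with both poses.
    pts_pred = points @ pose_pred[:3, :3].T + pose_pred[:3, 3]
    pts_gt = points @ pose_gt[:3, :3].T + pose_gt[:3, 3]

    # ADD: mean distance between corresponding points.
    add = np.linalg.norm(pts_pred - pts_gt, axis=1).mean()

    # ADD-S: for each ground-truth point, distance to the nearest predicted point.
    dists, _ = cKDTree(pts_pred).query(pts_gt, k=1)
    adds = dists.mean()
    return add, adds
```

The AUC columns report the area under the accuracy-vs-threshold curve of these errors, commonly with thresholds swept up to 10 cm.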
taeyeopl commented 2 weeks ago

Also, I get different results from evaluate_object_detection.py:


python examples/evaluate_object_detection.py
- Evaluating Object Detection results...
loading annotations into memory...
Done (t=0.16s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.02s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=7.72s).
Accumulating evaluation results...
DONE (t=1.42s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.014
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.014
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.014
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.004
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.013
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.010
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.017
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.017
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.017
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.010
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.016
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.010
Metric    Value
    AP 0.013785
  AP50 0.014216
  AP75 0.013975
   APs 0.004125
   APm 0.013374
   APl 0.009618
AP: 0.014 | AP_50: 0.014 | AP_75: 0.014 | AP_s: 0.004 | AP_m: 0.013 | AP_l: 0.010
- Evaluation Done...
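
For context, the log above is a standard pycocotools bbox evaluation. A minimal sketch of how such numbers are produced (the file names are placeholders, not the actual HO-Cap paths):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths; substitute the ground-truth annotations and your detections.
coco_gt = COCO("gt_annotations.json")
coco_dt = coco_gt.loadRes("detection_results.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # "Running per image evaluation..."
coco_eval.accumulate()  # "Accumulating evaluation results..."
coco_eval.summarize()   # prints the AP/AR table shown above

# stats holds [AP, AP50, AP75, APs, APm, APl, AR1, AR10, AR100, ARs, ARm, ARl].
ap, ap50, ap75 = coco_eval.stats[:3]
```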
taeyeopl commented 2 weeks ago

Could you provide the full results?

gobanana520 commented 2 weeks ago

Thank you for your interest in the HOCap dataset!

The evaluation results may differ from the examples listed in the README because the demo files contain only a limited subset of results.

Currently, the full results cannot be shared because they exceed the upload size limit. However, as long as your test results are saved in the same format as the demo file, you can compute the evaluation metrics with the example scripts.
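
In case it helps: the exact schema is defined by the demo files rather than by the sketch below, but for the detection evaluation a standard COCO-style result JSON usually contains one entry per detection. A minimal, hypothetical example:

```python
import json

# Hypothetical detections following the common COCO result convention:
# boxes are [x, y, width, height] in pixels; ids must match the GT annotation file.
results = [
    {"image_id": 0, "category_id": 1, "bbox": [100.0, 150.0, 80.0, 60.0], "score": 0.95},
    {"image_id": 0, "category_id": 2, "bbox": [300.0, 200.0, 50.0, 70.0], "score": 0.87},
]

# Save in the same layout as the demo result file before running the example
# script (the output file name here is a placeholder).
with open("my_detection_results.json", "w") as f:
    json.dump(results, f)
```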