rasmushaugaard / surfemb

SurfEmb (CVPR 2022)
https://surfemb.github.io/
MIT License

scores in results and bop19_average_recall #5

Closed cats0212 closed 2 years ago

cats0212 commented 2 years ago

Question 1: I used ycbv-jwpvdij1.compact.ckpt (a trained model that you provided) to run inference on the YCB-V test set (`python -m surfemb.scripts.infer`), then ran `python -m surfemb.scripts.misc.format_results_for_eval`. The scores in the results are all negative, e.g. -0.339, -0.401. Is that normal? (Screenshot columns: A: scene_id, B: img_id, C: est_obj_id, D: score.)

cats0212 commented 2 years ago

Question 2: With your trained model I get the file ycbv-jwpvdij1-refine-pose-score_ycbv-test.csv. Running `python ./bop_toolkit/bop_toolkit_lib/eval_bop19.py` on it gives:

    {
      "bop19_average_recall": 0.5618441264451451,
      "bop19_average_recall_mspd": 0.7611933058452582,
      "bop19_average_recall_mssd": 0.4954887218045113,
      "bop19_average_recall_vsd": 0.42885035168566576,
      "bop19_average_time_per_image": 4.59347986459732
    }

But the average recall on YCB-V is 0.653 in your paper. Am I doing something wrong? Thank you.
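As a side note on how the headline number is composed: `bop19_average_recall` is the mean of the three per-metric recalls (VSD, MSSD, MSPD), which the figures quoted above are consistent with:

```python
# bop19_average_recall is the mean of the VSD, MSSD and MSPD recalls;
# checking against the eval_bop19.py output quoted above:
recalls = {
    "vsd": 0.42885035168566576,
    "mssd": 0.4954887218045113,
    "mspd": 0.7611933058452582,
}
avg = sum(recalls.values()) / len(recalls)
print(round(avg, 4))  # 0.5618, matching bop19_average_recall
```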

cats0212 commented 2 years ago

Question 3: Is ycbv-jwpvdij1.compact.ckpt trained purely on synthetic data, or also on real YCB-V data?

rasmushaugaard commented 2 years ago

1) The scores are log-likelihoods, so they're supposed to be negative.
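As a quick illustration of why that is (the probabilities below are made up for the example, not actual SurfEmb outputs): the log of any probability in (0, 1) is negative, and ranking by log-likelihood preserves the ranking by likelihood.

```python
import math

# Hypothetical pose likelihoods (probabilities in (0, 1)); just an
# illustration of why log-likelihood scores come out negative.
probs = [0.7, 0.4, 0.1]
scores = [math.log(p) for p in probs]

# log(p) < 0 for p < 1, so every score here is negative.
assert all(s < 0 for s in scores)

# A score closer to zero means a more likely pose estimate:
best = max(range(len(scores)), key=lambda i: scores[i])
print(best)  # index 0, the highest-likelihood pose
```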

2/3) The released models are those used for the BOP evaluation, trained only on the synthetic PBR images. I just re-ran inference with the ycbv-jwpvdij1.compact.ckpt model and got 0.645, similar to the 0.647 published on BOP.

Based on the logs from the other issues you've opened, it looks like you're using Python 3.7, which in itself shouldn't be an issue, but the conda environment I published uses Python 3.8. Have you tried that environment?
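A minimal sanity check for this, assuming only that the published environment is on Python 3.8 as stated above, is to print the active interpreter's version:

```python
import sys

# Print the active interpreter's version so it can be compared
# against the published conda environment (Python 3.8).
major, minor = sys.version_info[:2]
print(f"Python {major}.{minor}")
if (major, minor) != (3, 8):
    print("Note: this differs from the published environment (3.8)")
```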

cats0212 commented 2 years ago

OK, thank you, I will try it.