NVlabs / DREAM

DREAM: Deep Robot-to-Camera Extrinsics for Articulated Manipulators (ICRA 2020)

Why is the pre-trained model panda_dream_vgg_f not as good as in the paper? #19

Closed Jerrrrry-Zhu closed 2 years ago

Jerrrrry-Zhu commented 2 years ago

This is very fascinating work! I downloaded the pre-trained model (panda_dream_vgg_f.pth and panda_dream_vgg_f.yaml) and the panda-3cam_azure data from Google Drive. When I ran scripts/analyze_training.py, the test result had an AUC of 0.68804 for PCK on the panda-3cam_azure data, not the 0.751 reported in the paper. Can you tell me why? Thank you!
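For reference, the AUC-for-PCK metric being compared here is the area under the curve of keypoint accuracy versus pixel-error threshold. A minimal sketch of one common way to compute it is below; the threshold range and function name are illustrative assumptions, not DREAM's actual evaluation code:

```python
import numpy as np

def pck_auc(errors_px, max_threshold_px=12.0, num_thresholds=100):
    """Area under the PCK curve: for each pixel threshold t, compute the
    fraction of keypoints whose 2D error falls below t, then integrate
    over the threshold sweep and normalize to [0, 1].

    The 12-pixel maximum threshold is an assumption for illustration;
    DREAM's published numbers may use different settings.
    """
    errors_px = np.asarray(errors_px, dtype=float)
    thresholds = np.linspace(0.0, max_threshold_px, num_thresholds)
    pck = [(errors_px < t).mean() for t in thresholds]
    return np.trapz(pck, thresholds) / max_threshold_px
```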

tabula-rosa commented 2 years ago

Hello, and thank you for your interest in our work!

As we mention in our README under "Note on reproducing results", if you are interested in reproducibility, you'll want to use the "shrink-and-crop" image preprocessing option when doing network inference. This is enabled by passing the argument -p shrink-and-crop to the network_inference_dataset.py script. Please use this script rather than analyze_training.py, since analyze_training.py does not expose a command-line argument for the preprocessing option. Here is an example:

python scripts/network_inference_dataset.py -i trained_models/panda_dream_vgg_f.pth -d data/real/panda-3cam_azure/ -o temp/panda_dream_vgg_f_azure_snc -p shrink-and-crop

I receive an AUC of 0.74404 when using this combination of network and dataset, which is close to what we published in the paper.

The reason for the different image preprocessing option is that we switched the default to resize for the open-source release, so that keypoints at the edge of the image frame can still be detected. The networks themselves were originally trained using the shrink-and-crop option. When we evaluated this change, we didn't see much of a difference, but it appears that this is indeed a case where the difference is fairly sizable. Thank you for pointing this out.
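To make the distinction concrete, here is a minimal sketch of the two preprocessing strategies as described above, assuming "shrink-and-crop" means an aspect-preserving scale followed by a center crop; this is an illustration, not the repository's actual implementation:

```python
from PIL import Image

def resize_preprocess(img: Image.Image, net_w: int, net_h: int) -> Image.Image:
    # "resize": scale the full frame to the network input size. The aspect
    # ratio may change, but content at the image edges is preserved, which
    # is why keypoints near the frame border remain detectable.
    return img.resize((net_w, net_h))

def shrink_and_crop_preprocess(img: Image.Image, net_w: int, net_h: int) -> Image.Image:
    # "shrink-and-crop" (illustrative interpretation): scale so the image
    # covers the network input while keeping the aspect ratio, then
    # center-crop the overhang. Content near the edges can be discarded.
    scale = max(net_w / img.width, net_h / img.height)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    left = (resized.width - net_w) // 2
    top = (resized.height - net_h) // 2
    return resized.crop((left, top, left + net_w, top + net_h))
```

Since the networks were trained on shrink-and-crop inputs, evaluating with resize shifts the input distribution slightly, which is consistent with the AUC gap observed above.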

Hope that helps and please let me know if you have any additional questions!

Jerrrrry-Zhu commented 2 years ago

Thank you for your reply!