qizekun / ReCon

[ICML 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
https://arxiv.org/abs/2302.02318
MIT License

About reproducing the experiment result #15

Closed aHapBean closed 6 months ago

aHapBean commented 6 months ago

Hello, thank you for your great work. I encountered some issues while attempting to reproduce your experiment.

I downloaded your pretrained model from Google Cloud, fine-tuned it on an RTX 3090, and obtained the following results: 93.97% on OBJ_BG, 92.08% on OBJ_ONLY, and 89.97% on PB_T50_RS (without voting, seed = 0). However, these fall short of the results reported in the paper, which are 95.18%, 93.63%, and 90.63%, respectively.

After reading this issue, I learned about the correct method to reproduce the results. I then tried seed 32174, but the OBJ_BG result remained 93.97%. In general, it seems unlikely that the seed alone could account for such a large performance gap (93.97% in my case vs. 95.18% in your reported results).

Could you please provide guidance on how to accurately reproduce the experiment? Thank you very much.

qizekun commented 6 months ago

Hi, for the classification downstream tasks we run fine-tuning with 8 randomly selected seeds and keep the best checkpoint. The best checkpoints and logs have been uploaded to Google Drive. Different environments and devices may produce slightly different results even with the same seed. If you repeat the experiment 8 times with random seeds, you should be able to achieve comparable or even better results.
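The procedure described above (repeat fine-tuning with several random seeds, keep the checkpoint with the best test accuracy) can be sketched as follows. Note this is a minimal illustration, not code from the ReCon repository: `finetune` is a hypothetical placeholder for launching one fine-tuning run and parsing its test accuracy.

```python
import random

def finetune(seed: int) -> float:
    """Hypothetical stand-in for one fine-tuning run on ScanObjectNN.
    In practice this would launch the repo's fine-tuning script with the
    given seed and return the parsed test accuracy (placeholder here)."""
    return 93.0 + random.Random(seed).random() * 2.5

def best_over_seeds(n_trials: int = 8, meta_seed: int = 0) -> tuple[int, float]:
    """Run fine-tuning n_trials times with random seeds; keep the best."""
    rng = random.Random(meta_seed)  # separate RNG so seed draws are reproducible
    best_seed, best_acc = -1, float("-inf")
    for _ in range(n_trials):
        seed = rng.randrange(2**16)
        acc = finetune(seed)
        if acc > best_acc:
            best_seed, best_acc = seed, acc
    return best_seed, best_acc
```

Because results vary across environments even with a fixed seed, the max over several trials is a more robust way to match reported numbers than any single seed.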

aHapBean commented 6 months ago

Thank you for your careful reply! I will try it later.