qizekun / ReCon

[ICML 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
https://arxiv.org/abs/2302.02318
MIT License

About experiment results #6

Closed: kengkeng1 closed this issue 1 year ago

kengkeng1 commented 1 year ago

Hi, I downloaded your pretrained model from Google Drive and fine-tuned it on an NVIDIA 3090. I got 92.38% on the ModelNet40 SVM task and 94.49% / 92.6% / 89.62% on the three ScanObjectNN variants, with the random seed set to 0.

Could this be related to the server or PyTorch environment I'm using? Or do I need to remove the fixed random seed and run multiple times?

Thank you!

qizekun commented 1 year ago

For the classification downstream tasks, we randomly select 8 seeds and keep the best checkpoint. The best checkpoints and logs have been uploaded to Google Drive. In addition, you can use the voting strategy during classification testing to reproduce the performance reported in the paper.

For a quick test, you can run `bash scripts/test.sh <GPU> <exp_name> <path/to/best/fine-tuned/model>`; the best checkpoints are available on Google Drive.
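
For illustration, a concrete invocation might look like the following; the GPU id, experiment name, and checkpoint path are hypothetical placeholders, not values from this thread:

```bash
# Hypothetical example: evaluate a downloaded best checkpoint on GPU 0.
# Replace the experiment name and checkpoint path with your own.
bash scripts/test.sh 0 test_scan_hardest ./ckpts/scan_hardest_best.pth
```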

kengkeng1 commented 1 year ago

Thank you for explaining patiently. So the procedure is: after pretraining, fine-tune the 250-epoch, 275-epoch, and 300-epoch checkpoints and test each of them 8 times separately to get the w/o voting strategy result, then take the best of those fine-tuned models and apply voting to get the w/ voting strategy result?

qizekun commented 1 year ago

Yes, that's right. Since this is the only way to reproduce the baselines, we adopt the same approach.
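
For clarity, here is a minimal sketch of that protocol as a shell loop. It assumes a Point-MAE-style entry point (`main.py` with `--config`, `--finetune_model`, `--ckpts`, `--seed`, and `--exp_name` flags) and seeds 0 to 7; the flag names, config file, and checkpoint filenames are assumptions, so check the repo's `scripts/` directory for the actual interface:

```bash
# Hypothetical sketch of the protocol above: fine-tune the 250/275/300-epoch
# pretrained checkpoints with 8 seeds each (w/o voting), then vote on the best.
# Flag names and paths are assumed from Point-MAE-style codebases; verify locally.
for ckpt in ckpt-epoch-250.pth ckpt-epoch-275.pth ckpt-epoch-300.pth; do
    for seed in 0 1 2 3 4 5 6 7; do
        CUDA_VISIBLE_DEVICES=0 python main.py \
            --config cfgs/finetune_scan_hardest.yaml \
            --finetune_model \
            --ckpts pretrain/$ckpt \
            --seed $seed \
            --exp_name finetune_${ckpt%.pth}_seed${seed}
    done
done
# Then run the voting test on the single best fine-tuned model:
# bash scripts/test.sh <GPU> <exp_name> <path/to/best/fine-tuned/model>
```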

kengkeng1 commented 1 year ago

Thank you, I get it. And is the pretrained model also trained several times, or just once?

qizekun commented 1 year ago

No, only once.

kengkeng1 commented 1 year ago

Thank you for your response!

Amazingren commented 4 months ago

Ah, I see, so that's how it's done.