kengkeng1 closed this issue 1 year ago
For classification downstream tasks, we randomly select 8 seeds and take the best checkpoint. The best checkpoints and logs have been uploaded to Google Drive. In addition, you can use the voting strategy during classification testing to reproduce the performance reported in the paper.
For a quick test, you can run `bash scripts/test.sh <GPU> <exp_name> <path/to/best/fine-tuned/model>`.
The best checkpoints are available on Google Drive.
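The voting strategy mentioned above is, in essence, a majority vote over repeated test passes (each pass re-samples/augments the point cloud, so predictions can differ between passes). A minimal sketch of the vote itself, assuming integer class predictions; this is an illustration, not the repo's actual code:

```python
import numpy as np

def majority_vote(pred_runs):
    """pred_runs: (n_runs, n_samples) integer class predictions.
    Returns the per-sample majority label across the runs."""
    pred_runs = np.asarray(pred_runs)
    n_classes = pred_runs.max() + 1
    voted = np.empty(pred_runs.shape[1], dtype=int)
    for i in range(pred_runs.shape[1]):
        # count how often each class was predicted for sample i, keep the mode
        voted[i] = np.bincount(pred_runs[:, i], minlength=n_classes).argmax()
    return voted

# e.g. 3 test passes over 4 samples
runs = [[0, 1, 2, 2],
        [0, 1, 1, 2],
        [0, 2, 2, 2]]
print(majority_vote(runs))  # -> [0 1 2 2]
```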
Thank you for the patient explanation. So it means that once you have a pre-trained model, you fine-tune and test the 250-epoch, 275-epoch, and 300-epoch checkpoints 8 times each (once per seed) to get the w/o-voting result, and then take the best fine-tuned model and test it with voting over the same 8 runs to get the w/-voting result?
Yes, that's right. This is the only way to reproduce the baseline, so we adopt the same approach.
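The selection protocol described above can be sketched as a simple sweep: fine-tune each pre-training checkpoint (epochs 250/275/300) with each of the 8 seeds, and keep the run with the best accuracy. `finetune_and_test` below is a hypothetical stand-in for the repo's actual training/testing entry point, and the accuracy numbers are made up for illustration:

```python
def select_best(checkpoints, seeds, finetune_and_test):
    """Sweep checkpoints x seeds; return (best_accuracy, checkpoint, seed)."""
    best = None
    for ckpt in checkpoints:
        for seed in seeds:
            acc = finetune_and_test(ckpt, seed)
            if best is None or acc > best[0]:
                best = (acc, ckpt, seed)
    return best

# Toy stand-in for fine-tuning + testing: made-up accuracies that depend
# on the checkpoint and the seed.
fake = lambda ckpt, seed: {"e250": 92.0, "e275": 92.5, "e300": 92.3}[ckpt] + 0.01 * seed

print(select_best(["e250", "e275", "e300"], range(8), fake))
```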
Thank you, I get it. And is the pre-trained model also trained several times, or only once?
No, only once.
Thank you for your response!
I see, that makes sense now.
Hi, I downloaded your pre-trained model from Google Drive and fine-tuned it on an NVIDIA 3090. I got 92.38% on the ModelNet40 SVM task and 94.49% / 92.6% / 89.62% on the ScanObjectNN tasks, with the random seed set to 0.
Is this related to the server and PyTorch environment I'm using? Or should I remove the random-seed setting and run multiple times?
Thank you!