Closed auniquesun closed 1 year ago
Hi @auniquesun, by default we didn't integrate the fine-tuning code into our repo, since that would make the code over-complicated, and also because we don't change any of the fine-tuning code from the original repos. We just use our pre-training framework to obtain the pre-trained weights, then take those weights to the original repos, initialize the models with them, and fine-tune from there. Typically you may need to tune hyper-parameters such as the learning rate, since you are no longer training from scratch. Here are the repos we use for the fine-tuning tasks:
- https://github.com/guochengqian/PointNeXt
- https://github.com/lulutang0608/Point-BERT
- https://github.com/ma-xu/pointMLP-pytorch
- https://github.com/yanx27/Pointnet_Pointnet2_pytorch
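For anyone following the workflow above, here is a minimal sketch of initializing a downstream model with pre-trained weights before fine-tuning. The model class, checkpoint keys, and learning rate are placeholders for illustration, not the actual code from any of these repos:

```python
# Sketch: initialize a downstream model from pre-trained weights, then
# fine-tune with a reduced learning rate (as suggested above).
import torch
import torch.nn as nn

class DownstreamModel(nn.Module):
    """Hypothetical stand-in for a classifier from one of the fine-tuning repos."""
    def __init__(self, num_classes=15):
        super().__init__()
        self.encoder = nn.Linear(3, 64)             # pretend point-cloud encoder
        self.cls_head = nn.Linear(64, num_classes)  # task-specific head

    def forward(self, x):
        return self.cls_head(self.encoder(x))

model = DownstreamModel()

# Pretend this state dict came from the pre-training framework; in practice
# you would load it with torch.load(<checkpoint path>).
pretrained = {"encoder.weight": torch.randn(64, 3),
              "encoder.bias": torch.randn(64)}

# strict=False overwrites the encoder with pre-trained weights while the
# task-specific head (absent from the checkpoint) keeps its random init.
missing, unexpected = model.load_state_dict(pretrained, strict=False)
print(sorted(missing))  # head parameters not found in the checkpoint

# Fine-tune with a smaller lr than you would use when training from scratch.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```

The `strict=False` flag is what makes partial initialization work: PyTorch reports the head parameters as missing instead of raising an error.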
Thanks, I will try them out.
According to my understanding, the results in Tables 1 and 2 of the paper come from fine-tuning on ScanObjectNN and ModelNet40, while the current release only provides the code for pre-training and zero-shot evaluation on the downstream datasets.
So could you release the fine-tuning implementations for these two downstream datasets, including the definition of the ScanObjectNN dataset?
Best Regards.