facebookresearch / PointContrast

Code for paper <PointContrast: Unsupervised Pretraining for 3D Point Cloud Understanding>
MIT License

Do we need to use torch.no_grad() to freeze the pretrained weights for fine-tuning? #14

Open DiegoWangSys opened 3 years ago

DiegoWangSys commented 3 years ago

Dear Xie: Do we need to freeze the pretrained weights and only update the final linear classifier layer for fine-tuning?

Best !
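For context on the mechanics behind the question: in PyTorch, `torch.no_grad()` only disables graph construction during a forward pass; to actually freeze weights you set `requires_grad=False` on them (or exclude them from the optimizer). A minimal sketch of the linear-probing setup the question describes, using a toy MLP as a stand-in for PointContrast's actual sparse 3D conv backbone:

```python
import torch
import torch.nn as nn

# Toy stand-in modules; PointContrast's real backbone is a sparse conv net.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
classifier = nn.Linear(64, 10)

# Freeze the pretrained backbone: the optimizer never sees its parameters,
# and requires_grad=False stops gradients from being computed for them.
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()  # also fixes batch-norm / dropout behavior, if present

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

x = torch.randn(4, 32)
with torch.no_grad():  # no graph needed for the frozen part
    feats = backbone(x)
logits = classifier(feats)
loss = logits.sum()
loss.backward()

# Backbone gradients stay None; only the classifier head gets gradients.
print(all(p.grad is None for p in backbone.parameters()))  # True
print(classifier.weight.grad is not None)                  # True
```

Whether the repo's fine-tuning scripts freeze the backbone or update the whole network is a separate question that the maintainers would need to confirm.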