facebookresearch / ContrastiveSceneContexts

Code for CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts"
MIT License

Pre-trained models #10

Closed yaping222 closed 3 years ago

yaping222 commented 3 years ago

Hi, thank you very much for your great code and detailed explanation! I have some questions about the pre-trained models. I want to take the model obtained from unsupervised learning and fine-tune it on my own dataset for semantic segmentation. You provide both an 'Initialization' and a 'Pre-trained Model' for all your experiments. From my understanding, your limited-annotation training starts from a pre-trained network, which means the 'Initialization' should be that pre-trained model. If so, what are the 'Pre-trained Model' entries? Which model should I use for my fine-tuning? I'm looking forward to your reply.

Sekunde commented 3 years ago

There is some confusion in the naming convention, but you understand correctly. 'Pre-trained Model' refers to the saved model that achieves the reported numbers on the downstream tasks. 'Initialization' is the model obtained from unsupervised learning, i.e., the one you should fine-tune from.
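To make the distinction concrete, here is a minimal sketch of loading an 'Initialization' checkpoint for fine-tuning on a new dataset. It uses plain dicts to stand in for PyTorch state_dicts, and the parameter names (`backbone.*`, `proj_head.*`, `seg_head.*`) are hypothetical examples, not the repo's actual layer names. In practice the same effect comes from `model.load_state_dict(checkpoint, strict=False)` after dropping head weights.

```python
def filter_pretrained_weights(checkpoint, model_keys):
    """Keep only checkpoint entries whose keys also exist in the target model,
    e.g. drop the contrastive projection head before fine-tuning.
    (Hypothetical helper for illustration; key names are made up.)"""
    return {k: v for k, v in checkpoint.items() if k in model_keys}

# Fake 'Initialization' checkpoint from unsupervised pretraining:
# a shared backbone plus a pretraining-only projection head.
checkpoint = {
    "backbone.conv1.weight": [0.1, 0.2],
    "backbone.conv2.weight": [0.3, 0.4],
    "proj_head.fc.weight": [0.5],  # used only during contrastive pretraining
}

# Fine-tuning model: same backbone, but a new (randomly initialized)
# segmentation head for the downstream task.
model_keys = {
    "backbone.conv1.weight",
    "backbone.conv2.weight",
    "seg_head.fc.weight",
}

loaded = filter_pretrained_weights(checkpoint, model_keys)
# Only the backbone weights transfer; the segmentation head keeps its
# random initialization, analogous to load_state_dict(..., strict=False).
print(sorted(loaded))
```

The 'Pre-trained Model' checkpoints, by contrast, already contain fine-tuned downstream weights, so they are what you would load to reproduce the reported numbers rather than to start your own fine-tuning.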

s9xie commented 3 years ago

The readme is now updated with clearer naming. Closing the issue.

yaping222 commented 3 years ago

Hi Sekunde and s9xie , thank you very much for your explanation and update!