facebookresearch / DepthContrast

DepthContrast self-supervised learning for 3D

Pre-train with Adam optimizer #27

Closed yichen928 closed 3 years ago

yichen928 commented 3 years ago

Hello, thank you for this great work.

I see that you use an SGD+momentum optimizer for pre-training, while other optimizers are used for fine-tuning. Have you tried other optimizers such as Adam, AdamW, or LARS for pre-training? Would those choices lead to worse pre-training performance?

Thank you very much.

zaiweizhang commented 3 years ago

No, I have not tried Adam, AdamW, or LARS. We chose SGD + momentum because MoCo uses it, and our framework uses the MoCo loss formulation.
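
For anyone who wants to experiment, swapping the optimizer is usually a one-line change in the training script. The sketch below shows a MoCo-style SGD + momentum setup and, commented out, an AdamW alternative. This is a minimal illustration, not DepthContrast's actual code; the learning rate and weight decay values are assumptions, and any alternative optimizer would likely need its own tuning.

```python
import torch

# Stand-in module; in practice this would be the pre-training encoder.
model = torch.nn.Linear(128, 128)

# MoCo-style choice: SGD with momentum and weight decay.
# Hyperparameter values below are illustrative assumptions.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.03,            # assumed base learning rate
    momentum=0.9,
    weight_decay=1e-4,
)

# Hypothetical AdamW alternative (untested for this framework);
# the learning rate would likely need to be re-tuned:
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
```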