qizekun / ReCon

[ICML 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining
https://arxiv.org/abs/2302.02318
MIT License

How to pretrain Point-MAE† (not Point-MAE) in Table 1, and what is the architecture of Point-MAE†? #4

Closed TangYuan96 closed 1 year ago

TangYuan96 commented 1 year ago

First of all, thanks for sharing this outstanding work!

Could you help me understand how to pretrain Point-MAE† (not Point-MAE) in Table 1, and what the architecture of Point-MAE† is?

Looking forward to your reply, thanks!

qizekun commented 1 year ago

Thanks for your attention. Point-MAE† refers to using the ReCon block for pre-training but without the contrastive learning loss. To achieve this effect, you can simply set the 'img_encoder', 'text_encoder', and 'self_contrastive' fields to FALSE in the base.yaml file.
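For reference, a minimal sketch of the relevant fields, assuming they sit at the top level of base.yaml (the exact nesting and file path in the repo may differ):

```yaml
# Sketch of a Point-MAE†-style pre-training config: disabling the image/text
# cross-modal teachers and the self-contrastive loss leaves only the
# generative reconstruction objective of the ReCon block.
img_encoder: FALSE
text_encoder: FALSE
self_contrastive: FALSE
```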

TangYuan96 commented 1 year ago

Thanks for your careful reply!