Closed TangYuan96 closed 1 year ago
Thanks for your attention. Point-MAE† refers to using the ReCon block for pre-training, but without the contrastive learning loss. To achieve this, you can simply set the 'img_encoder', 'text_encoder', and 'self_contrastive' fields to FALSE in the base.yaml file.
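A minimal sketch of what that change might look like in base.yaml — only the three field names come from the reply above; their placement in the file and the surrounding layout are assumptions:

```yaml
# Hypothetical excerpt of base.yaml for the Point-MAE† setup.
# Only the three field names are from the reply; where they sit in the
# file is assumed. Disabling all three removes the contrastive learning
# loss while keeping the ReCon block for pre-training.
img_encoder: FALSE
text_encoder: FALSE
self_contrastive: FALSE
```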
Thanks for your careful reply!
First of all, thanks for sharing this outstanding work!
Could you explain how to pre-train the Point-MAE† (not Point-MAE) reported in Table 1, and what the architecture of Point-MAE† is?
Looking forward to your reply, thanks!