Julie-tang00 / Point-BERT

[CVPR 2022] Pre-Training 3D Point Cloud Transformers with Masked Point Modeling
MIT License

Question about the position embeddings #11

Closed Zhimin-C closed 2 years ago

Zhimin-C commented 2 years ago

Thanks for the great work! I have a question about the usage of position embeddings in the transformer encoder.

[Screenshot from 2021-12-25 22-42-58]

As shown in the picture, the position embeddings are added at every layer. I am curious why the position embeddings are not added only in the first layer. Thanks.

Zhimin Chen

yuxumin commented 2 years ago

Hi, Zhimin.

Thanks for your interest in our work. We find that adding the position embedding at each layer of the encoder helps the point transformer converge more stably.

Best!
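
For readers skimming this thread, here is a minimal sketch of the pattern being discussed, with hypothetical names and PyTorch's built-in `nn.TransformerEncoderLayer` standing in for the repo's own block implementation: the position embedding `pos` is re-added to the token features before every encoder block, instead of only before the first one.

```python
import torch
import torch.nn as nn

class PerLayerPosEncoder(nn.Module):
    """Sketch of a transformer encoder that injects the position
    embedding at every layer (names/dimensions are illustrative)."""
    def __init__(self, embed_dim=384, depth=12, num_heads=6):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=num_heads, batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, x, pos):
        # x:   (B, N, C) point-group tokens
        # pos: (B, N, C) position embedding for each token
        for block in self.blocks:
            # pos is added again before every block, rather than
            # once at the input as in a standard ViT-style encoder
            x = block(x + pos)
        return x

# usage example with dummy tensors
tokens = torch.randn(2, 64, 384)
pos = torch.randn(2, 64, 384)
out = PerLayerPosEncoder()(tokens, pos)  # -> shape (2, 64, 384)
```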