Thanks for the great job! I have a question about the usage of position embeddings in the transformer encoder.
As shown in the picture, the position embeddings are given to each layer. I am curious why the position embeddings are not added only in the first layer. Thanks.
Thanks for your interest in our work.
We find that adding the position embedding to each layer in the encoder helps the point transformer converge more stably.
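For illustration, here is a minimal PyTorch-style sketch of this pattern (not the actual code from this repository; the module names, dimensions, and block structure are assumptions). Instead of adding the position embedding once before the first block, `pos` is re-added to the tokens at the input of every encoder block.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """A standard pre-norm transformer block (hypothetical simplification)."""
    def __init__(self, dim, num_heads=6, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

class Encoder(nn.Module):
    """Adds the position embedding `pos` before every block,
    rather than only once before the first block."""
    def __init__(self, dim=384, depth=12):
        super().__init__()
        self.blocks = nn.ModuleList([EncoderBlock(dim) for _ in range(depth)])

    def forward(self, x, pos):
        for block in self.blocks:
            x = block(x + pos)  # re-inject the position embedding at every layer
        return x

# usage with dummy data (shapes are assumptions for illustration)
tokens = torch.randn(2, 64, 384)  # (batch, num point groups, feature dim)
pos = torch.randn(2, 64, 384)     # position embedding per group center
out = Encoder()(tokens, pos)
print(out.shape)                  # torch.Size([2, 64, 384])
```

One intuition, consistent with the more stable convergence noted above, is that re-injecting the positional signal at every layer keeps the geometric information from being diluted as features pass through deeper blocks.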