Pang-Yatian / Point-MAE

[ECCV2022] Masked Autoencoders for Point Cloud Self-supervised Learning
MIT License

How to change embedding module output size? #24

Closed: DanieleMenchetti closed this issue 2 years ago

DanieleMenchetti commented 2 years ago

Hi, thank you for your work on point clouds. Could I ask how to change the output size of the embedding module? I set encoder_dims and transf_dim to 512, but I'm getting a shape error in the Attention class (image below). Is there anything else I should edit?

[screenshot: shape error raised in the Attention class]

Looking for your reply, Daniele
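The screenshot itself is not preserved here. As a rough, hypothetical reproduction of the kind of failure described (the tensor sizes and head count below are assumptions based on this thread, not the repository's actual code):

```python
import torch

# Hypothetical repro (not the repository's code): reshaping 512-dim tokens
# into 6 attention heads fails because 512 is not a multiple of 6.
x = torch.randn(2, 64, 512)        # (batch, num_tokens, dim) with dim = 512
try:
    x.reshape(2, 64, 6, 512 // 6)  # 6 heads * 85 = 510 != 512
except RuntimeError as e:
    print(e)                       # shape '[2, 64, 6, 85]' is invalid for input of size 65536
```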

Pang-Yatian commented 2 years ago

Hi,

It is caused by the multi-head attention mechanism: 512 / 6 = 85.333..., while 512 // 6 = 85, so the per-head dimension cannot reconstruct the full embedding. You need to make sure dim is divisible by num_heads.
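For reference, here is a minimal sketch of a timm-style multi-head attention block showing where the divisibility requirement comes from; the qkv layout and the num_heads=6 default are assumptions for illustration, not necessarily the repository's exact code.

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    """Simplified multi-head self-attention; dim must be divisible by num_heads."""
    def __init__(self, dim, num_heads=6):
        super().__init__()
        # head_dim uses integer division; with dim=512 and num_heads=6,
        # 6 * 85 = 510 != 512, so the reshape in forward() fails.
        assert dim % num_heads == 0, "dim must be divisible by num_heads"
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, C = x.shape
        # (B, N, 3*C) -> (3, B, num_heads, N, head_dim); requires C == num_heads * head_dim
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(x)

# Example: dim=512 works with num_heads=8 (512 // 8 = 64), but not with num_heads=6.
x = torch.randn(2, 64, 512)
print(Attention(dim=512, num_heads=8)(x).shape)  # torch.Size([2, 64, 512])
```

So with dim=512 you could switch to num_heads=8, or keep num_heads=6 and choose a dim that is a multiple of 6, such as 384 or 576.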

DanieleMenchetti commented 2 years ago


Thank you so much!