lucidrains / point-transformer-pytorch

Implementation of the Point Transformer layer, in Pytorch
MIT License

The layer structure and mask #9

Open ayushais opened 3 years ago

ayushais commented 3 years ago

Hi,

Thanks for this contribution. In the implementation of attn_mlp, the first linear layer increases the dimension. Is this standard practice? I did not find any details about it in the paper. The paper also does not describe the use of a mask; is this again standard practice for attention layers?

Thanks!!
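
For reference, this is the part I mean (a rough paraphrase of the attn_mlp in this repo, with the hidden multiplier named attn_mlp_hidden_mult as I read the constructor; please correct me if the details differ):

```python
import torch.nn as nn

dim = 128                 # feature dimension of the layer
attn_mlp_hidden_mult = 4  # expansion factor I'm asking about

# first Linear expands dim -> dim * attn_mlp_hidden_mult,
# second Linear projects back down to dim
attn_mlp = nn.Sequential(
    nn.Linear(dim, dim * attn_mlp_hidden_mult),
    nn.ReLU(),
    nn.Linear(dim * attn_mlp_hidden_mult, dim),
)
```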

toannguyen1904 commented 2 years ago

I think the mask is used in cases similar to the Transformer in NLP, e.g. to mask out padded points so they are not attended to when point clouds in a batch have different sizes. If you don't have any special purpose for it, just set the mask to all ones.
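
For example, something along the lines of the usage shown in the README (argument names are my reading of the layer's API, so double-check against the current code):

```python
import torch
from point_transformer_pytorch import PointTransformerLayer

attn = PointTransformerLayer(
    dim = 128,
    pos_mlp_hidden_dim = 64,
    attn_mlp_hidden_mult = 4
)

feats = torch.randn(1, 16, 128)   # per-point features
pos = torch.randn(1, 16, 3)       # xyz coordinates
mask = torch.ones(1, 16).bool()   # all-ones mask: every point is attended to

out = attn(feats, pos, mask = mask)  # (1, 16, 128)
```

With a full (all-ones) mask the result is the same as passing no mask at all.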