qq456cvb / Point-Transformers


question about output feature dimension #6

Closed amiltonwong closed 3 years ago

amiltonwong commented 3 years ago

Hi, @qq456cvb ,

According to Fig. 3 in Hengshuang's Point Transformer model, the output feature dimension of each transformer block should be different, e.g. [32, 64, 128, 256, 512]. But your implementation uses a single one, e.g. 512. Any comment on this?

Thanks!

qq456cvb commented 3 years ago

Notice that in Figure 4(a), there are two fully connected layers (fc1, fc2 in the code) before/after the actual transformer. I think 32, 64, 128, 256, 512 are the dimensions before the first fully connected layer, not the dimensions of the actual transformer. The dimension of the actual transformer is not given in the paper.
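For readers following along, here is a minimal sketch of the pattern being described: a per-stage feature of varying width (32/64/128/256/512) is lifted by `fc1` to a fixed internal dimension, the attention operates at that fixed dimension, and `fc2` projects back. This is only an illustration of the shape handling, not the repo's actual implementation (which uses vector attention rather than `nn.MultiheadAttention`); the names `TransformerBlockSketch`, `d_model`, and `n_heads` are assumptions introduced here.

```python
import torch
import torch.nn as nn


class TransformerBlockSketch(nn.Module):
    """Sketch of the fc1 -> attention -> fc2 pattern discussed above.

    `in_channels` is the per-stage feature width (e.g. 32, 64, ..., 512),
    while `d_model` is the fixed internal dimension of the attention itself.
    Standard multi-head self-attention stands in for the paper's vector attention.
    """

    def __init__(self, in_channels: int, d_model: int = 512, n_heads: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(in_channels, d_model)   # project up to the transformer dim
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fc2 = nn.Linear(d_model, in_channels)   # project back to the stage dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, in_channels)
        h = self.fc1(x)
        attn_out, _ = self.attn(h, h, h)
        return self.fc2(attn_out) + x                # residual at the stage dimension


if __name__ == "__main__":
    x = torch.randn(2, 1024, 64)                     # a stage with 64-dim features
    block = TransformerBlockSketch(in_channels=64, d_model=512)
    print(block(x).shape)                            # torch.Size([2, 1024, 64])
```

Under this reading, only `fc1`/`fc2` change per stage, while the attention's internal width can stay constant, which is consistent with the observation that the paper does not specify it.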

amiltonwong commented 3 years ago

@qq456cvb Thanks a lot for your comments.