Closed amiltonwong closed 3 years ago
Notice that in Figure 4(a), there are two fully connected layers (fc1, fc2 in the code) before and after the actual transformer. I think 32, 64, 128, 256, 512 are the dimensions before the first fully connected layer, not the dimensions of the actual transformer. The transformer's internal dimension is not given in the paper.
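To make the structure concrete, here is a minimal numpy sketch of the layout described above: `fc1` projects the per-stage feature dimension (e.g. 32, 64, ..., 512) up to a fixed internal transformer dimension, and `fc2` projects back down. This is only an illustration; the weight names and the use of plain scaled dot-product attention are stand-ins (the paper itself uses vector attention, and its internal dimension is unspecified).

```python
import numpy as np

def transformer_block(x, w1, w2, wq, wk, wv):
    """Hypothetical sketch: fc1 -> attention -> fc2, with a residual.

    x: (n_points, in_dim); w1: (in_dim, t_dim); w2: (t_dim, in_dim).
    The stage-specific in_dim changes (32..512); t_dim stays fixed.
    """
    h = x @ w1  # fc1: project in_dim -> t_dim before the transformer
    # single-head scaled dot-product attention (stand-in for vector attention)
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    h = attn @ v
    return x + h @ w2  # fc2: project t_dim -> in_dim, residual connection
```

The point of the sketch is that the output dimension of each block is set by `fc2`, so the per-stage dimensions in the figure need not equal the transformer's internal width.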
@qq456cvb Thanks a lot for your comments.
Hi @qq456cvb,
According to Fig. 3 in Hengshuang's Point Transformer paper, the output feature dimension of each transformer block should be different, e.g. [32, 64, 128, 256, 512]. But your implementation uses a single dimension, e.g. 512. Any comment on this?
Thanks!