Pointcept / PointTransformerV3

[CVPR'24 Oral] Official repository of Point Transformer V3 (PTv3)
MIT License

changing batch size #100

Open bruceZHU08 opened 4 days ago

bruceZHU08 commented 4 days ago

Hi authors, when I set `cls_mode` to `True`, I found that the encoder output has a varying batch size. For example, the input `point.feat.shape` is `(20*1024, 32)`, but the output `point.feat.shape` is `(107, 512)`. I have no idea where the 107 comes from; I expected it to equal the batch size, which is 20. Moreover, this value differs across runs: sometimes it equals 20, but sometimes it equals some other value like 107.

Gofinge commented 4 days ago

Hey, check the post-processing logic here (https://github.com/Pointcept/Pointcept/blob/main/pointcept/models/default.py#L109)

For the latest backbones we moved global max pooling into the Default Classifier and chose to keep the `Point` structure at the encoder output to preserve more information, so the encoder output still contains a variable number of (downsampled) points rather than one feature per sample. In other words, `cls_mode` is actually an encoder-only mode. Sorry for the confusion; I will make the naming consistent in the next version at the end of the year.
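To illustrate the pooling step that now lives in the classifier head rather than the encoder: per-sample global max pooling collapses the stacked `(N, C)` point features into `(B, C)` using the cumulative `offset` tensor from the `Point` structure. This is only a minimal sketch of the idea, not Pointcept's exact implementation; the function name `global_max_pool` is hypothetical.

```python
import torch


def global_max_pool(feat: torch.Tensor, offset: torch.Tensor) -> torch.Tensor:
    """Pool stacked point features to one feature vector per sample.

    feat:   (N, C) features of all points in the batch, concatenated.
    offset: (B,) cumulative point counts per sample, e.g. [n1, n1+n2, ...],
            following Pointcept's `Point` offset convention (an assumption here).
    Returns a (B, C) tensor with a max-pooled feature per sample.
    """
    pooled = []
    start = 0
    for end in offset.tolist():
        # Max over all points belonging to one sample in the batch.
        pooled.append(feat[start:end].max(dim=0).values)
        start = end
    return torch.stack(pooled)
```

With this applied after the encoder, an output like `(107, 512)` (107 surviving downsampled points) would become `(20, 512)`, one row per sample.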