ma-xu / pointMLP-pytorch

[ICLR 2022 poster] Official PyTorch implementation of "Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework"
Apache License 2.0

Maxpooling not performed within each stage? #88

Closed thatgeeman closed 1 year ago

thatgeeman commented 1 year ago

Hello,

Congrats on this cool work! My question is why the max-pooling operation happens outside the for loop in the forward pass of the model, at the lines referenced here:

https://github.com/ma-xu/pointMLP-pytorch/blob/3e3d80cff5c23a631fe5ba4ca97db3f452893ed2/classification_ModelNet40/models/pointmlp.py#L336-L342
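For context, the referenced forward pass looks roughly like this (paraphrased from the permalink, so comments and exact shapes may differ slightly):

```python
# Paraphrase of the linked lines, not a verbatim copy.
for i in range(self.stages):
    xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))  # sample and group neighbors
    x = self.pre_blocks_list[i](x)   # residual MLPs (phi_pre)
    x = self.pos_blocks_list[i](x)   # residual MLPs (phi_pos)

x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)  # the only visible max-pooling, after all stages
```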

Specifically, it seems that the aggregation $A$ doesn't take place within each stage, so a stage effectively computes $\phi_{pos}(\phi_{pre}(f_{i,j}))$.

According to how it's defined in the paper, $\phi_{pos}(A(\phi_{pre}(f_{i,j})))$, I would have expected something like this instead:

```python
def forward(self, ...):
    ...
    for i in range(self.stages):
        xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))
        x = self.pre_blocks_list[i](x)
        x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)  # pooling inside the loop, after the pre block
        x = self.pos_blocks_list[i](x)
    ...
```

Thanks for clarifying this.

thatgeeman commented 1 year ago

Ah, just saw that it's done within the Pre block: https://github.com/ma-xu/pointMLP-pytorch/blob/3e3d80cff5c23a631fe5ba4ca97db3f452893ed2/classification_ModelNet40/models/pointmlp.py#L256
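For anyone else who lands here: the aggregation $A$ (an adaptive max pool over the $k$ grouped neighbors) sits inside the pre block's own forward, so the outer loop doesn't need a per-stage pool. A minimal self-contained sketch of that pattern (my own toy module with made-up names and dimensions, not the repo code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPreBlock(nn.Module):
    """Toy stand-in for the pre block: per-neighbor MLP (phi_pre) followed by the aggregation A."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, out_dim, kernel_size=1, bias=False),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # x: [b, g, k, d] grouped neighbor features (g groups of k neighbors each)
        b, g, k, d = x.size()
        x = x.permute(0, 1, 3, 2).reshape(b * g, d, k)  # fold groups into the batch dimension
        x = self.mlp(x)                                  # phi_pre applied to every neighbor
        x = F.adaptive_max_pool1d(x, 1)                  # A: max over the k neighbors of each group
        return x.view(b, g, -1).permute(0, 2, 1)         # [b, out_dim, g], ready for the pos block

feats = torch.randn(2, 16, 24, 6)       # b=2, g=16 groups, k=24 neighbors, d=6 channels
print(ToyPreBlock(6, 32)(feats).shape)  # torch.Size([2, 32, 16])
```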

Thanks!