Hello,

Congrats on this cool work! My question is about why the max-pool operation takes place outside of the for loop in the forward pass of the model, defined at the referenced lines here:

https://github.com/ma-xu/pointMLP-pytorch/blob/3e3d80cff5c23a631fe5ba4ca97db3f452893ed2/classification_ModelNet40/models/pointmlp.py#L336-L342

Specifically, it seems that the aggregation step doesn't take place, i.e. the code computes $\phi_{pos}(\phi_{pre}(f_{i,j}))$.

I would have expected the following instead, according to how it's defined in the paper, $\phi_{pos}(A(\phi_{pre}(f_{i,j})))$:
```python
def forward(self, ...):
    ...
    for i in range(self.stages):
        xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1))
        x = self.pre_blocks_list[i](x)
        x = F.adaptive_max_pool1d(x, 1).squeeze(dim=-1)  # pooling inside the loop, right after the pre block
        x = self.pos_blocks_list[i](x)
    ...
```
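To make the confusion concrete, here is a minimal standalone sketch of how I read the paper's aggregation $A$ (a max over the $K$ neighbors of each group). The shapes and variable names below are illustrative assumptions on my part, not the repository's actual code:

```python
import torch

# Illustrative shapes only (assumed, not taken from the repo):
B, G, K, D = 2, 64, 24, 128      # batch, groups, neighbors per group, channels
f = torch.randn(B, G, K, D)      # grouped features f_{i,j}

pre = f                          # stand-in for phi_pre applied to each neighbor feature
agg = pre.max(dim=2).values      # A(.): max-pool over the K neighbors -> [B, G, D]
# phi_pos would then act on these [B, G, D] per-group features,
# i.e. one aggregation per stage, inside the loop.
```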
Thanks for clarifying this.