In the original paper on Point Transformer v1, the Transition Up module maps the features of the downsampled point set P1 to its superset P2, and then fuses the features of P2 with those of P1 (the paper describes this as summation, while your code implements it as concatenation).
However, the code implementation does not actually perform this fusion at all, since one argument of `self.fp` is passed as `None`. Could this be a reason why PartSeg accuracy is lower than what is reported in the paper (see #29)?
```python
if points1 is not None:
    points1 = points1.permute(0, 2, 1)
    # new_points = torch.cat([points1, interpolated_points], dim=-1)
    new_points = points1 + interpolated_points  # this line is my addition
else:
    new_points = interpolated_points
```
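To make the difference concrete, here is a minimal, self-contained sketch of the two fusion variants, assuming the tensor layout from the snippet above: skip features `points1` in `(B, C, N)` and upsampled features `interpolated_points` in `(B, N, C)`. The function names are mine, not from the repository. Note that summation requires the two channel dimensions to match, which is presumably why the paper's Transition Up applies separate linear layers to each branch before adding:

```python
import torch


def fuse_sum(points1, interpolated_points):
    """Summation fusion, as described in the Point Transformer paper.

    points1: skip features, shape (B, C, N) or None
    interpolated_points: upsampled features, shape (B, N, C)
    Requires matching channel counts C.
    """
    if points1 is None:
        return interpolated_points
    return points1.permute(0, 2, 1) + interpolated_points


def fuse_cat(points1, interpolated_points):
    """Concatenation fusion, as in the commented-out line of the code.

    Output has 2*C channels, so the following layer must expect that width.
    """
    if points1 is None:
        return interpolated_points
    return torch.cat([points1.permute(0, 2, 1), interpolated_points], dim=-1)


if __name__ == "__main__":
    B, C, N = 2, 16, 8
    skip = torch.randn(B, C, N)          # features of the superset P2
    upsampled = torch.randn(B, N, C)     # interpolated features from P1

    print(fuse_sum(skip, upsampled).shape)   # (B, N, C)
    print(fuse_cat(skip, upsampled).shape)   # (B, N, 2*C)
```

Either variant performs the fusion; the issue raised above is that with `None` passed in, neither branch that combines the two feature sets is ever taken.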