mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

[Feature Request] Generate Conv3D results on another set of coordinates? #309

Closed Tortoise0Knight closed 2 months ago

Tortoise0Knight commented 4 months ago

Is there an existing issue for this?

Current Behavior

Does TorchSparse support generating Conv3D() results on another set of coordinates? That is, calling forward(input, coords) so that the output is a sparse tensor whose coordinates are coords, but whose features are computed from the corresponding features of input.
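
To make the request concrete, a hypothetical sketch of the call I have in mind (the coords argument to forward() does not exist in TorchSparse today; feats, coords_A and coords_B are assumed to be defined):

import torchsparse
import torchsparse.nn as spnn

x = torchsparse.SparseTensor(feats=feats, coords=coords_A)  # input features live on coords_A
conv = spnn.Conv3d(16, 32, kernel_size=3)                   # e.g. 16 -> 32 channels

# Desired behavior (NOT a real TorchSparse API, purely illustrative):
# y = conv(x, coords=coords_B)   # y.C == coords_B, y.F computed from x's features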

francotheengineer commented 2 months ago

I recommend doing this:

import torchsparse
import torchsparse.nn as spnn

x_A = torchsparse.SparseTensor(feats=feats, coords=coords_A)
x_B = torchsparse.SparseTensor(feats=feats, coords=coords_B)
my_conv_layer = spnn.Conv3d(...)   # i.e. torchsparse.nn.Conv3d
y = my_conv_layer(x_A)             # convolution evaluated on coords_A
y.C = x_B.C                        # overwrite the output coordinates

Tortoise0Knight commented 1 month ago

Thank you for answering, but maybe I haven't described my problem clearly enough. In my case, for each coordinate $C_B^i$ in x_B, I need to find the corresponding points in x_A that fall within the kernel window centered at $C_B^i$, compute with the kernel using their feats, and store the result in $F_B^i$. Is this possible? Your code seems to simply change the coordinates of a result that was computed on x_A.
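
To pin down the semantics, here is a naive reference of the operation I mean (purely illustrative, not TorchSparse code; assumes 3-D integer coordinates without a batch column, stride 1, and an odd kernel_size):

import itertools
import torch

def conv_at_coords(coords_A, feats_A, coords_B, weight, kernel_size=3):
    # weight: (kernel_size**3, C_in, C_out); returns features of shape (len(coords_B), C_out)
    lookup = {tuple(c.tolist()): i for i, c in enumerate(coords_A)}
    r = kernel_size // 2
    offsets = list(itertools.product(range(-r, r + 1), repeat=3))
    out = torch.zeros(len(coords_B), weight.shape[-1])
    for i, c in enumerate(coords_B):
        for k, off in enumerate(offsets):
            j = lookup.get(tuple((c + torch.tensor(off)).tolist()))
            if j is not None:                      # a neighbor of c exists in x_A
                out[i] += feats_A[j] @ weight[k]   # accumulate kernel-weighted feature
    return out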

francotheengineer commented 1 month ago

Is x_A.C == x_B.C ?

Tortoise0Knight commented 1 month ago

No, they are different coordinates. The purpose of this operation is to transfer the attributes of A onto another set of coordinates, B.

francotheengineer commented 1 month ago

@Tortoise0Knight Ok I think this might help?

import torchsparse
import torchsparse.nn as spnn

x_A = torchsparse.SparseTensor(feats=feats, coords=coords_A)
my_conv_layer = spnn.Conv3d(...)   # i.e. torchsparse.nn.Conv3d
y = my_conv_layer(x_A)

# Attach the features of the arbitrary tensor y to a tensor with coords_B.
x_B = torchsparse.SparseTensor(feats=y.F, coords=coords_B)

Tortoise0Knight commented 1 month ago

Sorry, I think your solution is still not what I want. For example, y.F may not have the same number of points as coords_B, in which case we cannot construct the result tensor. I think this needs support in the engine, not in user code. For example: https://nvidia.github.io/MinkowskiEngine/convolution.html#MinkowskiEngine.MinkowskiConvolution.forward

francotheengineer commented 1 month ago

@Tortoise0Knight I don't understand how you can apply features to a different number of points. Each point needs to have a feature. Is there another library function that does this?

lverret commented 1 month ago

I think this request refers to the ability to apply a convolution to points that are different from the points of the features. As far as I understand, torchsparse only supports applying convolutions centered at a set of points x_A using features at x_A (f_A) (red). But one might be interested in sparse convolutions centered at other points x_B but still using f_A (blue), creating new features for those points:

[figure "spcnn": convolution centered at the points of x_A using features f_A (red) vs. centered at other points x_B while still using f_A (blue)]

I don't know if other libraries do this.

Tortoise0Knight commented 1 month ago

Thank you for the clear explanation of what I mean. MinkowskiEngine can actually do this: when calling https://nvidia.github.io/MinkowskiEngine/convolution.html#MinkowskiEngine.MinkowskiConvolution.forward, you can pass coordinates explicitly. If they are not given, the coordinates of the input are used (the most common case).

The ability to apply a convolution on a different set of coordinates is very useful. It can serve as a neural-network replacement for traditional feature aggregation such as KNN, as I have tested with MinkowskiEngine.
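
For reference, a minimal MinkowskiEngine sketch of what I mean (toy random data; the coordinates argument is documented in the forward() link above):

import torch
import MinkowskiEngine as ME

coords_A = torch.randint(0, 32, (1000, 3))   # points carrying the input features
coords_B = torch.randint(0, 32, (500, 3))    # points where we want outputs
feats_A = torch.rand(1000, 16)

x_A = ME.SparseTensor(
    features=feats_A,
    coordinates=ME.utils.batched_coordinates([coords_A]),  # prepend batch index
)
conv = ME.MinkowskiConvolution(16, 32, kernel_size=3, dimension=3)

# Generate results on coords_B while reading input features from x_A
y_B = conv(x_A, coordinates=ME.utils.batched_coordinates([coords_B]))
print(y_B.C.shape, y_B.F.shape)  # outputs now live on coords_B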

francotheengineer commented 1 month ago

@Tortoise0Knight

Thanks for detailing this. I recommend using MinkowskiEngine layers inside your TorchSparse model. You will have to convert between MinkowskiEngine sparse tensors and TorchSparse sparse tensors, but it works. I've done this before and the speed is pretty good.
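
Roughly the kind of bridging I mean (a sketch only; the coordinate layouts differ between library versions, e.g. older TorchSparse releases keep the batch index in the last column, so adjust the column order accordingly):

import MinkowskiEngine as ME
import torchsparse

def torchsparse_to_me(x: torchsparse.SparseTensor) -> ME.SparseTensor:
    # Assumes x.C is already ordered (batch_idx, x, y, z) as MinkowskiEngine expects.
    return ME.SparseTensor(features=x.F, coordinates=x.C.int())

def me_to_torchsparse(x: ME.SparseTensor) -> torchsparse.SparseTensor:
    # Assumes the TorchSparse version in use accepts the same column order.
    return torchsparse.SparseTensor(feats=x.F, coords=x.C.int())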