Hi, hope the following demo code will be helpful:
import torch
# simulate traditional coord style [B, N, 3]
B = 2
N = 100
C = 3
coord = torch.randn(B, N, C)
# shift to PCR style [B * N, C]
coord = coord.reshape([-1, C])
# offset marks the cumulative end index of each cloud: [N, 2N, ..., B * N]
offset = ((torch.arange(B) + 1) * N).int()
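If the clouds in a batch have different numbers of points, the same kind of offset can be built as a cumulative sum of per-cloud point counts (a minimal sketch with a hypothetical counts tensor):
import torch
# hypothetical per-cloud point counts for a batch of 3 clouds
counts = torch.tensor([100, 80, 120])
# offset is the cumulative end index of each cloud: [100, 180, 300]
offset = torch.cumsum(counts, dim=0).int()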
Hello, thank you for your fast response. It turns out I was forgetting to add +1 to offset! I have a follow-up question, though it may be unrelated: right now I am getting this error in your block's self.fc1:
File "/mmfs1/data/algan/Documents/SABER/models/point_transformer.py", line 311, in forward
features = self.block((coord, feat, offset), reference_index)[1]
File "/mmfs1/data/algan/.conda/envs/pcr2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/mmfs1/data/algan/Documents/SABER/models/point_transformer.py", line 158, in forward
feat = self.fc1(feat)
File "/mmfs1/data/algan/.conda/envs/pcr2/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/mmfs1/data/algan/.conda/envs/pcr2/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "/mmfs1/data/algan/.conda/envs/pcr2/lib/python3.9/site-packages/torch/nn/functional.py", line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
Do you know what the issue may be? I searched around the internet but couldn't find a proper answer. (I tried reducing the batch size, but that didn't help.) Your repository works totally fine in this environment setup, by the way.
Did you reshape coord to [B * N, C]?
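A quick sanity check before calling the block may help narrow this down (a sketch, assuming the coord, feat, and offset variables from your forward pass):
# coord and feat should both be flattened to [B * N, ...]
assert coord.dim() == 2 and coord.shape[1] == 3
assert feat.shape[0] == coord.shape[0]
# offset should end at the total number of points and share the same device
assert int(offset[-1]) == coord.shape[0]
assert coord.device == feat.device == offset.device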
Okay, I was able to pin down where the issue was and solve it. It is not related to the shape of coord. Here is a simple snippet that doesn't work; the problem was with the knn_query function:
B, C, N = features.shape
coord = coord.reshape([B * N, 3])
# features is channel-first [B, C, N]; move channels last before flattening to [B * N, C]
features = features.permute(0, 2, 1).reshape([B * N, C])
offset = ((torch.arange(B) + 1) * N).int()
reference_index, _ = pointops.knn_query(self.neighbours, coord, offset)
features = self.block((coord, features, offset), reference_index)[1]
The problem was that offset wasn't on the CUDA device, so I changed it to
offset = ((torch.arange(B,device=features.device) + 1) * N).int()
Now everything works correctly. Thank you so much!
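A bounds check on the indices returned by knn_query also makes this kind of mismatch easier to catch early (a sketch reusing the variable names from the snippet above, and assuming the indices point into the flattened [B * N] point set):
reference_index, _ = pointops.knn_query(self.neighbours, coord, offset)
# indices should stay within [0, B * N); out-of-range values here would point back to a bad offset
assert reference_index.min().item() >= 0
assert reference_index.max().item() < coord.shape[0]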
Hi, first of all, thank you for your great work! I want to use PTv2's block for feature extraction in my project. However, I don't want to change my dataset code if possible, since I work with PyTorch Lightning. So let's assume I want to work with a batch size of B, and every point cloud has the same number of points N. As you explained, your input data has this shape:
coord.shape: [B * N, 3], offset: [N, 2N, ..., B * N]
whereas my input data looks like this:
coord.shape: [B, N, 3]
I assume we need offset since it is used in functions like knn_query, GridPool, etc. Is there a workaround to use these with traditional dataset itemization?
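For reference, here is a minimal sketch of such a workaround, combining the reshape demo and the offset device fix from above; it assumes every cloud has exactly N points, features shaped [B, N, C], and a placeholder neighbour count k:
import torch
import pointops

def to_pcr_inputs(coord, features, k=16):
    # coord: [B, N, 3], features: [B, N, C] -> flattened PCR-style tensors
    B, N, _ = coord.shape
    C = features.shape[-1]
    coord = coord.reshape(B * N, 3)
    features = features.reshape(B * N, C)
    # offset marks the cumulative end index of each cloud and must share coord's device
    offset = ((torch.arange(B, device=coord.device) + 1) * N).int()
    reference_index, _ = pointops.knn_query(k, coord, offset)
    return coord, features, offset, reference_index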