kamste opened this issue 1 year ago
I'm sorry for the late reply.
We haven't considered the CPU environment, but you can run on the CPU by modifying our code, for example
from
```python
a = a.cuda()
b = torch.tensor([1., 2.], device="cuda")
```
to
```python
a = a.cpu()
b = torch.tensor([1., 2.], device="cpu")
```
It would take a long time to do it all yourself, but it's possible... I'm sorry for not being more helpful.
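A more maintainable alternative to hard-coding the device everywhere is to pick it once at startup (a minimal sketch, not this repo's code; the tensor shapes are made up for illustration):

```python
import torch
import torch.nn as nn

# Select the device once and reuse it everywhere, so the same
# script runs on machines with or without CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4, 3, device=device)
b = torch.tensor([1., 2.], device=device)

model = nn.Linear(3, 2).to(device)  # modules are moved the same way
out = model(a)
```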
Hello, thank you for the reply. The Python part of the project is one thing and can be handled as you described. The issue is rewriting the CUDA kernels from pointops in the external_libs folder. These functions only run on NVIDIA GPUs, so even if the Python code is modified as you showed above, there is still a C/C++ part that works only on a GPU. My question is: how can the project be modified so that the CUDA kernels (sampling_cuda_kernel.cu, knnquery_cuda_kernel.cu, etc.) are not needed, and this part runs on the CPU during inference?
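For reference, the kind of replacement I have in mind is a plain-PyTorch version of each op, for example a kNN query (a sketch only; the (B, N, 3) layout and return convention are assumptions and would need to be checked against the actual pointops wrapper):

```python
import torch

def knn_query(k: int, xyz: torch.Tensor, new_xyz: torch.Tensor) -> torch.Tensor:
    """Plain-PyTorch stand-in for knnquery_cuda_kernel.cu.

    xyz:     (B, N, 3) reference points
    new_xyz: (B, M, 3) query points
    Returns the indices (B, M, k) of the k nearest reference points.
    """
    dist = torch.cdist(new_xyz, xyz)              # (B, M, N) pairwise distances
    _, idx = dist.topk(k, dim=-1, largest=False)  # k smallest distances
    return idx
```

This runs on the CPU (and GPU), just much slower than the custom kernel for large point clouds.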
As you say, it seems that the project does not support CPU.
I'm planning to port the CUDA kernel C++ library to PyTorch ops that can run on the CPU. However, I'm not sure when it will be done.
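Until then, here is roughly what such a port could look like for the furthest point sampling in sampling_cuda_kernel.cu (a sketch under an assumed (B, N, 3) input and (B, npoint) index output; not this repo's verified interface):

```python
import torch

def furthest_point_sample(xyz: torch.Tensor, npoint: int) -> torch.Tensor:
    """Plain-PyTorch furthest point sampling, runnable on the CPU."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, npoint, dtype=torch.long)
    dist = torch.full((B, N), float("inf"))      # distance to nearest chosen point
    farthest = torch.zeros(B, dtype=torch.long)  # start from point 0 in each batch
    batch = torch.arange(B)
    for i in range(npoint):
        idx[:, i] = farthest
        centroid = xyz[batch, farthest].unsqueeze(1)  # (B, 1, 3)
        d = ((xyz - centroid) ** 2).sum(-1)           # (B, N) squared distances
        dist = torch.minimum(dist, d)
        farthest = dist.argmax(dim=1)  # point farthest from all chosen so far
    return idx
```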
Has anyone fixed the project to support CPU?
How can the project be modified to run predictions on the CPU (on a machine without a GPU or a pytorch-cuda setup)? Please help.