mit-han-lab / torchsparse

[MICRO'23, MLSys'22] TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.
https://torchsparse.mit.edu
MIT License

[BUG] Running on a CPU without CUDA compiler #280

Closed: tilmantroester closed this issue 4 months ago

tilmantroester commented 6 months ago

Is there an existing issue for this?

Current Behavior

After some small changes to setup.py (a rough sketch is at the end of this section), I managed to install torchsparse on macOS arm64 (M1). A few more changes to avoid the hardcoded device="cuda:0" and the CUDA calls in backends.init got test.py to the point where it finally fails with

AttributeError: module 'torchsparse.backend' has no attribute 'build_kernel_map_subm_hashmap'

Looking at pybind_cpu.cpp and pybind_cuda.cu, this is not surprising, since only the CUDA version defines build_kernel_map_subm_hashmap.

Similarly, examples/backbones.py fails with AttributeError: module 'torchsparse.backend' has no attribute 'GPUHashTable', which again is only defined in pybind_cuda.cu.
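In other words, on a CPU-only build the extension imports fine but the GPU-only bindings are simply absent. A minimal check (assuming the module was built from pybind_cpu.cpp only):

```python
import torchsparse.backend as backend

# On a build that only compiles pybind_cpu.cpp, the GPU-only bindings are
# missing, which is exactly what the AttributeErrors above run into.
print(hasattr(backend, "build_kernel_map_subm_hashmap"))  # False on a CPU-only build
print(hasattr(backend, "GPUHashTable"))                   # False on a CPU-only build
```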

Can torchsparse be used on a machine without nvcc, considering there doesn't seem to be a CPU implementation for these functions?
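For reference, the setup.py change I made is roughly the following. This is only a sketch of my local edit, not the repo's actual build logic: the paths, the extension name, and the idea of switching on CUDA_HOME are illustrative, and the remaining kernel sources are omitted.

```python
# Sketch of a CPU-only fallback in setup.py: build the CUDA extension only
# when a CUDA toolkit is found, otherwise compile just the C++ sources.
# Paths and source lists are illustrative.
from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension

if CUDA_HOME is not None:
    extension = CUDAExtension(
        name="torchsparse.backend",
        sources=["torchsparse/backend/pybind_cuda.cu"],  # plus the CUDA/C++ kernels
    )
else:
    extension = CppExtension(
        name="torchsparse.backend",
        sources=["torchsparse/backend/pybind_cpu.cpp"],  # plus the CPU kernels
    )
```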

Expected Behavior

No response

Environment

- GCC: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
- NVCC: N/A
- PyTorch: 2.1.0
- PyTorch CUDA: N/A
- TorchSparse: 2.1.0

Anything else?

Potentially related to #255.

ys-2020 commented 6 months ago

Hi @tilmantroester. Thank you for your interest in TorchSparse! TorchSparse v2.1 is not designed for CPU-only use cases. If you want to run TorchSparse without nvcc, I would suggest using TorchSparse v1.4/2.0 instead.

tilmantroester commented 6 months ago

Hi @ys-2020, are there any plans to add an (unoptimised) CPU implementation to allow for local development and testing?
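Concretely, what I'd like is for test.py and the examples to pick the device at runtime rather than hardcoding device="cuda:0", e.g. something like the following (a sketch with plain torch tensors, not the actual test code):

```python
import torch

# Illustrative device-agnostic setup: fall back to CPU when CUDA is absent,
# instead of hardcoding device="cuda:0" as the current tests/examples do.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
coords = torch.randint(0, 64, (1000, 4), dtype=torch.int32, device=device)
feats = torch.randn(1000, 16, device=device)
```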

ys-2020 commented 5 months ago

Hi! The optimizations we applied in v2.1 are mostly GPU-oriented. Thus I think starting from v1.4/2.0 might be a better choice for CPU-based local development and testing.

ys-2020 commented 4 months ago

Closing as completed due to inactivity. If you have further questions, feel free to reopen it.