weders opened this issue 3 years ago
It uses torch.cuda.is_available() to determine whether to enable CUDA compilation. Make sure the command above returns True.
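A quick sanity check you can run in the environment where ME is built (this is the same call the build relies on):

```python
# Check whether PyTorch can see a CUDA device and which CUDA version
# it was compiled against.
import torch

print(torch.cuda.is_available())   # should print True for a CUDA build
print(torch.version.cuda)          # CUDA version PyTorch was built with
```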
Hello @weders, I have the same issue. Were you able to resolve it?
You have to adjust the setup.py file such that it does not use torch.cuda.is_available(), as the GPU is not exposed to Docker during docker build. The force_cuda flag should do the trick. You can find it here.
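For illustration only, a minimal sketch (not ME's actual setup.py) of how such a guard can honor a force flag so that a GPU-less docker build can still opt into the CUDA build:

```python
# Sketch: decide on CUDA compilation from a force flag first, and only
# fall back to torch.cuda.is_available(), which is False during
# `docker build` because no GPU is exposed at build time.
import sys
import torch

force_cuda = "--force_cuda" in sys.argv
use_cuda = force_cuda or torch.cuda.is_available()
print("CUDA extensions:", "enabled" if use_cuda else "disabled")
```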
Thanks for the quick response @weders. I am not using Docker. I cloned the repository and directly ran setup.py using "python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas --force_cuda". The problem still occurs even though I used the force_cuda flag. I also tried commenting out the lines with torch.cuda.is_available(), but had no luck.
Does torch.cuda.is_available() return True if you run it on your machine?
Something I also realized is that the CUDA versions have to match exactly between the PyTorch installation and the CUDA Toolkit.
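A quick way to compare the two (assuming nvcc is on your PATH):

```python
# Compare the CUDA version PyTorch was built against with the version
# the installed CUDA Toolkit (nvcc) reports; a mismatch is a common
# reason for is_available() returning False or the extension build failing.
import subprocess
import torch

print("PyTorch built with CUDA:", torch.version.cuda)
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)
```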
torch.cuda.is_available() was returning False in my virtual environment. Matching the CUDA versions resolved the issue. @weders Thanks for your help :)
Hi!
I would like to build a Docker container for MinkowskiEngine. However, if I follow the process described in the wiki exactly, ME is built with CPU support only. Afterwards, I tried to install it using --install-option="--force_cuda". Unfortunately, this led to another error:
File "/usr/local/lib/python3.7/dist-packages/torch/utils/cpp_extension.py", line 1562, in _get_cuda_arch_flags arch_list[-1] += '+PTX' IndexError: list index out of range
Any idea what the problem could be?
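As far as I can tell, _get_cuda_arch_flags builds its arch list either from a detected GPU or from the TORCH_CUDA_ARCH_LIST environment variable; with no GPU visible during docker build and that variable unset, the list comes back empty and indexing [-1] raises exactly this IndexError. A small sketch of the failure mode (the arch values below are only an example, pick the ones matching your GPUs and set the variable, e.g. via ENV in the Dockerfile, before building):

```python
# Reproduce the empty-arch-list condition behind the IndexError above.
import os

arch_env = os.environ.get("TORCH_CUDA_ARCH_LIST", "")   # typically unset in a GPU-less docker build
arch_list = arch_env.replace(";", " ").split()
print("arch_list:", arch_list)   # [] -> arch_list[-1] would raise IndexError
# Workaround sketch: set e.g. TORCH_CUDA_ARCH_LIST="7.0;7.5+PTX" before the build.
```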