torch.cuda.is_available() returns False after installing xformers
Command
torch.cuda.is_available()
To Reproduce
I installed xformers 0.0.15.dev337+git.fd21b40 using conda. I'm getting the error

WARNING: libc10_cuda.so: cannot open shared object file: No such file or directory Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop

and subsequently an error message that CUDA is not recognized by PyTorch.
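A minimal check (sketch only, not part of the original report) that separates a CPU-only PyTorch build from a driver problem; torch.version.cuda reports the CUDA version the installed PyTorch was built against and is None for CPU-only builds:

    import torch

    print(torch.__version__)          # 1.12.1 in this environment
    print(torch.version.cuda)         # None for a CPU-only build
    print(torch.cuda.is_available())  # False in this report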
Expected behavior
I expected PyTorch to work with CUDA after installing xformers.
Environment
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.17
Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: 11.7.99
GPU models and configuration:
GPU 0: Tesla V100-PCIE-16GB
GPU 1: Tesla V100-PCIE-16GB
GPU 2: Tesla V100-PCIE-16GB
GPU 3: Tesla V100-PCIE-16GB
Nvidia driver version: 455.32.00
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] memory-efficient-attention-pytorch==0.1.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] pytorch-lightning==1.5.10
[pip3] torch==1.12.1
[pip3] torchmetrics==0.7.2
[pip3] torchvision==0.13.1a0
[pip3] vit-pytorch==0.40.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.6.0 habf752d_9 nvidia
[conda] diffusers-torch 0.11.0 py39hb070fc8_0
[conda] libblas 3.9.0 12_linux64_mkl conda-forge
[conda] libcblas 3.9.0 12_linux64_mkl conda-forge
[conda] liblapack 3.9.0 12_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 12_linux64_mkl conda-forge
[conda] memory-efficient-attention-pytorch 0.1.1 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.2 py39h20f2e39_0
[conda] numpy-base 1.21.2 py39h79a1101_0
[conda] pytorch 1.12.1 cpu_py39he8d8e81_0
[conda] pytorch-cuda 11.7 h67b0de4_1 pytorch
[conda] pytorch-lightning 1.5.10 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.7.2 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.13.1 cpu_py39h164cc8f_0
[conda] vit-pytorch 0.40.2 pypi_0 pypi

I would recommend that you install PyTorch 1.13.1 with CUDA support (https://pytorch.org/get-started/locally/), and then install xFormers with pip install -U --pre xformers.
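The environment listing above points at the cause: the conda environment contains a CPU-only PyTorch build (pytorch 1.12.1 cpu_py39he8d8e81_0), so torch.cuda.is_available() returns False regardless of xformers. A minimal sketch of the recommended repair path, assuming a CUDA 11.7 setup as in the report; copy the exact install command for your system from https://pytorch.org/get-started/locally/ rather than from this sketch:

    # Sketch only -- verify the exact commands on https://pytorch.org/get-started/locally/
    # (CUDA 11.7 assumed here, matching the driver/runtime reported above):
    #   conda install pytorch=1.13.1 torchvision pytorch-cuda=11.7 -c pytorch -c nvidia
    #   pip install -U --pre xformers
    import torch
    import xformers

    # A CUDA-enabled build reports its CUDA version; CPU-only builds report None.
    assert torch.version.cuda is not None, "PyTorch is still a CPU-only build"
    # If the build is CUDA-enabled but this still fails, suspect the driver/runtime instead.
    assert torch.cuda.is_available(), "CUDA build installed, but no usable GPU/driver"
    print(torch.__version__, torch.version.cuda, xformers.__version__)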