AH-Merii closed this issue 8 months ago
@AH-Merii you might need to install libcudnn8, since it's not found. `apt install libcudnn8` on Ubuntu will fix that!
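As a quick sanity check after installing, you can ask the dynamic loader whether it can locate libcudnn at all, before involving PyTorch or TensorFlow. A minimal sketch (the `cudnn_installed` helper is my own name, not part of any library):

```python
import ctypes.util


def cudnn_installed() -> bool:
    """True when the dynamic loader can locate libcudnn (any major version).

    ctypes.util.find_library returns None when the shared library
    is not on the loader's search path, which is what the
    'Could not load dynamic library libcudnn' errors boil down to.
    """
    return ctypes.util.find_library("cudnn") is not None


if __name__ == "__main__":
    print("libcudnn visible to the loader:", cudnn_installed())
```

If this prints `False` after `apt install libcudnn8`, the library may have landed in a directory the loader does not search; running `sudo ldconfig` is often enough.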
I was facing the same issue and after installing libcudnn8 it worked. Notice the last line:
```
tux@ubuntu:~$ python3
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import tensorflow as tf
>>> tf.config.list_physical_devices('GPU')
2022-02-27 18:09:22.945826: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:922] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-02-27 18:09:22.949805: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:922] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-02-27 18:09:22.950096: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:922] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
There's a NUMA warning because the WSL kernel is built without `CONFIG_NUMA` set. I'll build a new WSL kernel with NUMA enabled to see if those warnings go away.
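You can check directly what TensorFlow is complaining about by reading the same sysfs entry it does. A minimal sketch, assuming a Linux/WSL environment (the `numa_node` helper and the PCI address used below are illustrative):

```python
from pathlib import Path
from typing import Optional


def numa_node(pci_addr: str) -> Optional[int]:
    """Return the NUMA node sysfs reports for a PCI device.

    Returns None when the sysfs entry is missing entirely, which is
    what happens on a WSL kernel built without CONFIG_NUMA. A value
    of -1 means the kernel has NUMA support but no node information
    for this particular device.
    """
    path = Path(f"/sys/bus/pci/devices/{pci_addr}/numa_node")
    try:
        return int(path.read_text())
    except (FileNotFoundError, ValueError):
        return None


if __name__ == "__main__":
    # The GPU's address from the log above.
    print(numa_node("0000:01:00.0"))
```

If this prints `None`, TensorFlow will emit exactly the "could not open file to read NUMA node" warning shown above; the warning is cosmetic and the GPU is still usable.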
This is from inside Python:
```
tux@ubuntu:~$ python3
Python 3.8.10 (default, Nov 26 2021, 20:14:08)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.5.1+cu101'
>>> torch.cuda.is_available()
True
>>>
```
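The `+cu101` suffix in the version string encodes the CUDA toolkit the wheel was built against (10.1 here, versus `+cu113` = 11.3 in the report below), which is worth checking against the installed driver. A small sketch to decode it (the `cuda_tag` helper is my own, not a torch API):

```python
import re
from typing import Optional


def cuda_tag(torch_version: str) -> Optional[str]:
    """Decode the CUDA build tag from a torch version string.

    '1.5.1+cu101'  -> '10.1'
    '1.10.2+cu113' -> '11.3'
    CPU-only builds ('1.10.2+cpu') return None.
    """
    match = re.search(r"\+cu(\d+)$", torch_version)
    if not match:
        return None
    digits = match.group(1)
    # The last digit is the minor version, the rest is the major.
    return f"{digits[:-1]}.{digits[-1]}"
```

A `+cpu` tag here would explain `torch.cuda.is_available()` returning `False` regardless of the driver setup.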
@elsaco I just installed libcudnn8, but I am still unable to detect the GPU.
@AH-Merii Try installing the dev package of cuDNN too: `sudo apt install libcudnn8-dev`
@AH-Merii did you solve this problem? I'm facing the same one.
This issue has been automatically closed since it has not had any activity for the past year. If you're still experiencing this issue please re-file this as a new issue or feature request.
Thank you!
Version
Microsoft Windows [Version 10.0.22000.527]
WSL Version
Kernel Version
5.10.16
Distro Version
Ubuntu 20.04
Other Software
PyTorch '1.10.2+cu113'
Repro Steps
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
tf.test.is_gpu_available()
Expected Behavior
Expected behaviour is for `torch.cuda.is_available()` and `tf.test.is_gpu_available()` to return `True`.
Actual Behavior
`torch.cuda.is_available()` and `tf.test.is_gpu_available()` return `False` instead.
Diagnostic Logs
Nvidia SMI
Nvidia Cuda Toolkit Version
Running Blackscholes Test
Running PyTorch
torch.cuda.is_available
Running Tensorflow
tf.test.is_gpu_available()
Running PyTorch collect_env.py
Running Natively on Windows 11