boss-create opened this issue 2 weeks ago · Open
This seems to be an issue with the combination of WSL + NVIDIA driver + Docker. Can you confirm whether you can run CUDA with torch outside Docker? To the best of my (admittedly vague) knowledge of WSL, you may need WSL2 for the NVIDIA driver to work properly.
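For reference, the check above can be run outside Docker from the WSL2 shell with a short script (a minimal sketch; the guarded import is just so it is safe to run even where torch is missing):

```python
def cuda_status():
    """Return a short CUDA status string; safe to call even where torch is absent."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if torch.cuda.is_available():
        # Report the first visible GPU if CUDA initialized correctly
        return f"torch {torch.__version__}, CUDA available on {torch.cuda.get_device_name(0)}"
    return f"torch {torch.__version__}, CUDA NOT available"

print(cuda_status())
```

If this prints "CUDA available" outside the container but not inside it, that points at the Docker/GPU passthrough layer rather than the WSL2 driver itself.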
Thank you for your reply. The WSL I mentioned above is WSL2, and I have also confirmed that CUDA can be used normally in WSL2-Ubuntu22.04 without using docker. I have used other docker environments in WSL2 before and did not encounter similar problems. At the same time, I also tried WSL2-Ubuntu20.04 to re-pull different versions of the docker images and the same problem occurred...
Would you mind checking https://github.com/microsoft/WSL/issues/5663 ?
Also, some answers on Super User suggest the error is harmless: https://superuser.com/questions/1707681/wsl-libcuda-is-not-a-symbolic-link
In WSL, when installing with the recommended Docker image (`docker pull runzhongwang/thinkmatch:torch1.6.0-cuda10.1-cudnn7-pyg1.6.3-pygmtools0.5.1`), torch.cuda is unavailable and there are problems with the symbolic links: ![image](https://github.com/Thinklab-SJTU/ThinkMatch/assets/70477612/e4d9a879-9c9d-4419-9f5e-a60f2fbb1555)
I also tried several other versions of the image, and the same problems occurred.