Open k-bingcai opened 9 months ago
Hi @k-bingcai,
I was not able to reproduce the issue. It might be an incompatibility between your CUDA version and torch. Can you post your sessionInfo()? And your CUDA version as well?
Hi @dfalbel,
Thanks for getting back! Here's my sessionInfo():
R version 4.3.1 (2023-06-16)
Platform: x86_64-conda-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux 8.8 (Ootpa)
Matrix products: default
BLAS/LAPACK: /nas/longleaf/home/bingcai/anaconda3/envs/multidfm/lib/libopenblasp-r0.3.21.so; LAPACK version 3.9.0
locale:
[1] LC_CTYPE=en_US.utf-8 LC_NUMERIC=C
[3] LC_TIME=en_US.utf-8 LC_COLLATE=en_US.utf-8
[5] LC_MONETARY=en_US.utf-8 LC_MESSAGES=en_US.utf-8
[7] LC_PAPER=en_US.utf-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.utf-8 LC_IDENTIFICATION=C
time zone: America/New_York
tzcode source: system (glibc)
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] torch_0.12.0
loaded via a namespace (and not attached):
[1] processx_3.8.2 bit_4.0.5 compiler_4.3.1 magrittr_2.0.3 cli_3.6.1
[6] Rcpp_1.0.11 bit64_4.0.5 coro_1.0.3 callr_3.7.3 ps_1.7.5
[11] rlang_1.1.2
The CUDA version is 12.2 (from nvidia-smi). If it helps, during installation I had to manually create several symlinks that were missing in order to get torch to use the GPU. The symlinks are:
ln -s libcudart-e409450e.so.11.0 libcudart.so.11.0
ln -s libcublas-f6acd947.so.11 libcublas.so.11
ln -s libnvToolsExt-847d78f2.so.1 libnvToolsExt.so.1
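For anyone hitting the same missing-library errors, the pattern above just points the plain SONAME the dynamic loader asks for at the hash-suffixed file the package actually ships. A self-contained illustration of the mechanism (the file names are taken from this report, but the temporary directory and dummy files are purely illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-ins for the hash-suffixed libraries shipped in the package
# (real installs contain actual shared objects, not empty files)
touch libcudart-e409450e.so.11.0 libcublas-f6acd947.so.11

# Create the SONAME symlinks the loader looks up at runtime
ln -s libcudart-e409450e.so.11.0 libcudart.so.11.0
ln -s libcublas-f6acd947.so.11   libcublas.so.11

# Verify each link resolves to an existing file
readlink -e libcudart.so.11.0
readlink -e libcublas.so.11
```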
Hope that clarifies!
I'm pretty sure the problem is caused by an ABI compatibility issue between CUDA 11 (used by torch) and CUDA 12, which you have installed in that environment. I suggest installing torch using the pre-built binaries, which include compatible CUDA and cuDNN versions.
You can do so by running something like:
options(timeout = 600) # increasing the timeout is recommended, since we will be downloading a ~2GB file

# For Windows and Linux: "cpu" and "cu118" are currently supported.
# For macOS: "cpu-intel" or "cpu-m1".
kind <- "cu118"
version <- available.packages()["torch", "Version"]
options(repos = c(
  torch = sprintf("https://storage.googleapis.com/torch-lantern-builds/packages/%s/%s/", kind, version),
  CRAN = "https://cloud.r-project.org" # or any other mirror from which to install the remaining R dependencies
))
install.packages("torch")
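Once the install finishes, a quick sanity check (a sketch; it assumes the pre-built GPU binaries installed successfully) confirms the bundled CUDA runtime is being picked up:

```r
library(torch)

# Should be TRUE when the bundled CUDA/cuDNN libraries load correctly
cuda_is_available()

# The original failing call, for reference
torch_eye(3)$cuda()$det()
```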
Hello,
Thanks for the quick response! I'll try the proposed solution.
I have a rather naive question though: will the pre-built binaries work even though CUDA 12.2 is installed on the system? The documentation seems to suggest so (i.e., "If you have CUDA installed, it doesn’t need to match the installation ‘kind’ chosen below.").
I am asking because the GPU is on a university-wide cluster and I cannot change the CUDA driver version...
With the pre-built binaries, the globally installed CUDA version doesn't matter, since the correct version is shipped inside the package. That's actually similar to the approach PyTorch takes.
Hello,
I noticed that I cannot compute the determinant of an identity matrix using torch in R, i.e.:
torch_eye(3)$cuda()$det()
It gives me this error:
I'm not sure what to make of it. I tried computing the same determinant in PyTorch and it worked fine. Is this a bug, or is it something to be expected?
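For anyone trying to reproduce this, a minimal comparison sketch (the GPU line is the failing call from above; on CPU the determinant of the identity is expected to be 1):

```r
library(torch)

torch_eye(3)$det()          # CPU path: expected to return tensor(1.)
torch_eye(3)$cuda()$det()   # GPU path: errors on the reporter's setup
```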