rimorob opened 1 month ago
I have this same problem.
Is it possible to force torch to use an existing CUDA-Pytorch installation on Linux (Ubuntu)?
Hello @rimorob,
- When you choose CUDA 12.6, you can install neither PyTorch (per the release compatibility matrix at https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix) nor R {torch}.
- When you choose CUDA 12.4, you have one supported PyTorch version (2.5) that has not been patched yet, and a second one with experimental support.
- When you choose CUDA 11.8, you can choose any PyTorch version from 2.1 to 2.5, many associated cuDNN versions, and {torch} v0.13.0 available on CRAN.

So I would recommend choosing your CUDA version based on what you want to run on your GPU, not the other way round.

That said, {torch} does try to follow the CUDA releases as closely as possible, but there is no plan; there is only spare time, your support, and your warmly welcome contributions!
I heavily recommend using the pre-built binaries from:
https://torch.mlverse.org/docs/articles/installation#pre-built
The pre-built binaries bundle the necessary CUDA and cuDNN versions, so you don't need a compatible system-wide CUDA installation.
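Once the pre-built binary is installed, a quick sanity check confirms that the bundled CUDA runtime is actually picked up. This is a minimal sketch using standard {torch} helpers; the tensor allocation at the end is just for illustration:

```r
library(torch)

# TRUE when the bundled CUDA/cuDNN libraries were installed and a GPU is visible
cuda_is_available()

# Number of GPUs {torch} can see
cuda_device_count()

# End-to-end check: allocate a small tensor directly on the GPU
if (cuda_is_available()) {
  x <- torch_randn(2, 2, device = "cuda")
  print(x$device)
}
```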
I have the same problem and tried @dfalbel's suggestion to install pre-built binaries. However, I get an "Access Denied" error when trying to download from torch-cdn (e.g. https://torch-cdn.mlverse.org/packages/cu117/0.13.0/). Can anyone confirm?
Did you run something like:
```r
options(timeout = 600) # increasing timeout is recommended since we will be downloading a 2GB file.
kind <- "cu118"
version <- available.packages()["torch","Version"]
options(repos = c(
  torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
  CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
))
install.packages("torch", type = "binary")
```
This definitely works for me; I tried it in a Colab notebook quite recently too:
https://colab.research.google.com/drive/1XBTt3mf6EE5mX_518xIIvoT51b4mj6LX?usp=sharing
Hm, the notebook works, but how can I be sure it did not install from CRAN? When I check the URL with `system(sprintf("curl https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version), intern = TRUE)`, I get the same Access Denied error in Colab as well.
If you installed from CRAN, GPU code wouldn't work, because Colab doesn't have a compatible version of CUDA.
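If you want extra reassurance about where the installed package came from, one heuristic (a sketch, not something from this thread) is to inspect the installed DESCRIPTION; CRAN builds normally carry a `Repository: CRAN` field:

```r
# Heuristic check of the install source: CRAN builds usually carry
# "Repository: CRAN" in their DESCRIPTION; a missing or different value
# suggests the package was installed from another repository such as the CDN.
desc <- packageDescription("torch")
desc$Repository
```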
That URL you are creating doesn't really need to exist. If you want to find the actual URL used to download the file, you need to look at the internals of install.packages(): it usually looks for a PACKAGES index file under the relevant contrib sub-directory of the repository.
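To illustrate, here is a sketch of how install.packages() resolves the actual download location via the contrib path rather than the repository root (the `kind`/`version` values below are just examples):

```r
# install.packages() never fetches the bare repository root; it builds a
# "contrib" URL and reads the PACKAGES index found there.
kind    <- "cu118"
version <- "0.13.0"  # or: available.packages()["torch", "Version"]
repo    <- sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version)

# The contrib path install.packages() queries internally for this repository
contrib.url(repo, type = getOption("pkgType"))

# Querying the index directly is a better reachability test than curl-ing the root
head(available.packages(repos = repo)[, c("Package", "Version")])
```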
The binary route does NOT work for me:
```
> options(timeout = 600) # increasing timeout is recommended since we will be downloading a 2GB file.
> # For Windows and Linux: "cpu", "cu117" are the only currently supported
> # For MacOS the supported are: "cpu-intel" or "cpu-m1"
> kind <- "cu118"
> version <- available.packages()["torch","Version"]
> options(repos = c(
+   torch = sprintf("https://torch-cdn.mlverse.org/packages/%s/%s/", kind, version),
+   CRAN = "https://cloud.r-project.org" # or any other from which you want to install the other R dependencies.
+ ))
> install.packages("torch", type = "binary")
Error in install.packages : type 'binary' is not supported on this platform
> version
[1] "0.13.0"
> repos
Error: object 'repos' not found
> options('repos')
$repos
                                                torch                            CRAN 
"https://torch-cdn.mlverse.org/packages/cu118/0.13.0/"   "https://cloud.r-project.org" 
```

```
> .libPaths()
[1] "/home/boris/R/x86_64-pc-linux-gnu-library/4.4" "/usr/local/lib/R/site-library" "/usr/lib/R/site-library"
[4] "/usr/lib/R/library"
```
I've just installed CUDA 12.6 on a fresh system. It turns out R {torch} only supports CUDA 11.7/11.8 for now. Are there any near-term plans to extend support to newer CUDA versions, or should I downgrade CUDA if I'm planning to use R {torch}?