TheoreticalEcology / s-jSDM

Scalable joint species distribution modeling
https://cran.r-project.org/web/packages/sjSDM/index.html
GNU General Public License v3.0

Error: object 'torch' not found in the examples from Pichler and Hartig (2020) #116

Closed YJ781 closed 1 year ago

YJ781 commented 1 year ago

Hello developers,

Thank you so much for your dedication to developing the sjSDM package. I encountered an issue when running the example scripts from Pichler and Hartig (2020). Specifically, the 'torch' object is missing when I execute commands containing torch$ in the 1_cpu_sjSDM.R script, as well as in other files. A few examples of such commands:

    torch$set_num_threads(6L)
    torch$manual_seed(42L)
    torch$cuda$manual_seed(42L)

I receive the error message: "object 'torch' not found."

I have attempted to resolve the issue by using version 0.0.7.9000 of the sjSDM package, but the error persists. However, I have observed that other functions in the script execute without any issues.

I would also like to inquire whether this error implies a failure of parallelization.

Thank you in advance for your help!

MaximilianPi commented 1 year ago

Hi @YJ781,

The torch object is the binding to the PyTorch (Python) package, and in older versions it is exported by sjSDM as a global variable.

  1. Old package versions (0.1.8/0.0.7.9000): If the torch installation was successful, the object should be accessible after loading sjSDM. You said that the other functions such as sjSDM(…) work, right? If so, it should be available; if neither of the following commands works, something is wrong with your PyTorch installation:

    library(sjSDM)
    torch = sjSDM::torch # binding is available via global variable
    torch = reticulate::import("torch") # import pytorch directly via reticulate
  2. New package versions (starting with 1.0.0): The torch binding is no longer exported as a global variable; instead, it is available via sjSDM:::pkg.env$torch.
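The two cases above can be combined into a single version check. A minimal sketch, assuming a working PyTorch installation (the version threshold "1.0.0" follows the description above):

    # Hedged sketch: obtain the torch binding regardless of the sjSDM version.
    library(sjSDM)
    if (utils::packageVersion("sjSDM") >= "1.0.0") {
      torch = sjSDM:::pkg.env$torch   # internal environment in newer versions
    } else {
      torch = sjSDM::torch            # exported global variable in older versions
    }
    # Fallback: import PyTorch directly via reticulate
    # torch = reticulate::import("torch")

Note that pkg.env is an internal (unexported) object, so the `:::` accessor may break in future releases.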

> I would also like to inquire whether this error implies a failure of parallelization.

No, these calls are only used to set seeds for PyTorch (which doesn't work properly on GPUs) and to limit the number of cores (the default is to use all cores). So the torch$... commands don't affect your parallelization.
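With the binding retrieved as above, the torch$ commands from the example scripts can be reproduced. A sketch for sjSDM >= 1.0.0, assuming the internal binding is available (the is_available() guard is an assumption to keep the GPU call safe on CPU-only machines):

    # Hedged sketch: the seed/thread calls from 1_cpu_sjSDM.R on sjSDM >= 1.0.0
    library(sjSDM)
    torch = sjSDM:::pkg.env$torch

    torch$set_num_threads(6L)       # limit the number of CPU threads (default: all cores)
    torch$manual_seed(42L)          # seed PyTorch's CPU RNG for reproducibility
    if (torch$cuda$is_available()) {
      torch$cuda$manual_seed(42L)   # seed the GPU RNG only if CUDA is present
    }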

YJ781 commented 1 year ago

Hi @MaximilianPi

I truly appreciate your prompt response. As my current setup uses the newer package version, I gave sjSDM:::pkg.env$torch a try and it worked perfectly. Thanks again!