Closed: kaizen89 closed this issue 1 year ago.
Hi @kaizen89,
I can't see anything that is obviously misconfigured. You could try running it in an interactive session and check whether any message appears, or whether it simply falls back to the CPU.
I would also set `use_available` to `FALSE` and just use it with `conda_env`, as you do.
We also provide more in-depth tutorials, together with the corresponding environment setup, that might be worth a shot: https://ccc-protocols.readthedocs.io/en/latest/notebooks/ccc_R/QuickStart.html and https://github.com/saezlab/ccc_protocols/tree/main/env_setup
Let me know if either helps.
You could also give GPU acceleration a shot via the tutorial's Python version, as then at least you stay in the same language (reducing complexity a bit).
At first I was not able to install the environment using `env_setup.sh`; conda was stuck for too long. After several reinstallations I managed to do it using mamba and the two `.yml` files you provided. The GPU was then running, but I hit other errors related to CUDA running out of memory, and setting `max_split_size_mb` did not help; I guess the dataset was too big. In the end, it works well with a dataset of 80K cells.
Thanks for your help!
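For anyone hitting the same environment issues, the steps that worked here can be sketched roughly as below. This is a sketch only: the `.yml` file names and the `PYTORCH_CUDA_ALLOC_CONF` value are assumptions, not the exact names from the protocols repo.

```shell
# Create the two environments with mamba instead of conda
# (file names are assumed; use the ones shipped in saezlab/ccc_protocols/env_setup).
mamba env create -f env_setup/ccc_R.yml
mamba env create -f env_setup/ccc_python.yml

# If CUDA runs out of memory, PyTorch's caching allocator can be tuned
# via this environment variable; 128 MB is just an illustrative value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```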
Hi, I'm trying to run `liana_tensor_c2c` on an HPC using `device = "cuda:0"`; however, it seems that the CPU is used instead. Here are my R script and the Slurm script to execute it:
Slurm output of `srun --jobid=5099174 nvidia-smi`:
Any help would be appreciated. Thanks!
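A common cause of silent CPU fallback on Slurm clusters is that the job never requested a GPU, so the process sees no CUDA device at all. A minimal submission script might look like the sketch below; the partition-independent options shown are standard Slurm, but the module name, environment name, and script name are all assumptions to adapt to your cluster.

```shell
#!/bin/bash
#SBATCH --job-name=liana_c2c
#SBATCH --gres=gpu:1        # request one GPU; without this the job sees no device
#SBATCH --mem=64G
#SBATCH --time=04:00:00

# Assumed module and environment names; adapt to your cluster.
module load cuda
source activate ccc_env

# Running nvidia-smi inside the job confirms a GPU was actually allocated.
nvidia-smi

Rscript run_liana_tensor.R  # assumed name of the R script
```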