Dear @jmonlong,

I successfully installed scCNAutils on one of the Compute Canada nodes, but when I submit the job to Slurm, the process gets killed because it requires more memory than I requested. I requested 500 GB of RAM and used `rcpp=TRUE`. The job starts and produces a few outputs (`-ge-filter.RData`, `-ge.RData`, `-qc.pdf`, `-qc.RData`), but after a while I get:

```
Converting to coords and normalizing...
/localscratch/spool/slurmd/job41635361/slurm_script: line 10: 29534 Killed Rscript scCNA_rscript.r
slurmstepd: error: Detected 1 oom-kill event(s) in StepId=41635361.batch. Some of your processes may have been killed by the cgroup out-of-memory handler.
```

This happens during QC and community detection (`auto_cna_signal`), before calling CNAs. Do you have any suggestions for handling this memory issue?

Best, Andy
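For context, the submission script is essentially the following. This is a hypothetical reconstruction: only the 500 GB memory request and the `Rscript scCNA_rscript.r` call come from the report above; the job name, walltime, and module line are assumptions and would differ on the actual node.

```bash
#!/bin/bash
#SBATCH --job-name=scCNA   # assumed job name
#SBATCH --mem=500G         # the 500 GB of RAM mentioned above; still OOM-killed
#SBATCH --time=24:00:00    # assumed walltime

module load r              # assumed module name on the Compute Canada node

# This is the line that the cgroup out-of-memory handler kills:
Rscript scCNA_rscript.r
```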