JinKyu-Cheong opened this issue 2 weeks ago
Hi! I just wanted to point out that I have the same problem: running out of memory at the "Calculating enrichment scores" part of AUCell (the extended step, in my case). I ran it with 1.9 TB of RAM, but that wasn't enough; the report I got from the cluster says it tried to take 2.5 TB. I don't think that can be right, and there must be some bug in my files or something. Did you manage to figure this one out, @JinKyu-Cheong?
Thanks!
Hi @JinKyu-Cheong and @samuelheczko
This step can take some memory, but >2 TB seems excessive. How many genes and regions do you have?
All the best,
Seppe
Hi, thanks @SeppeDeWinter for the answer! I managed to get through this phase by running the Snakemake steps individually in an interactive HPC session (as opposed to submitting a job) with:
snakemake -R --until <rule_name>
When I ran Snakemake without specifying the step, it executed eGRN_extended and immediately attempted AUCell_extended, which is where my workflow was interrupted. Instead, I first ran eGRN_direct using the command above, then executed both AUCell steps, followed by scplus_mudata. I allocated 10 cores with 32 GB each.
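Concretely, the sequence looked roughly like the sketch below (the exact rule names should be double-checked against `snakemake --list` for your own Snakefile; AUCell_direct in particular is my guess at the name of the non-extended AUCell rule):

```bash
# Run inside an interactive HPC session rather than as a submitted job.
# Force-rerun each rule (-R) and stop once it has finished (--until), one step at a time.
snakemake --cores 10 -R eGRN_direct --until eGRN_direct
snakemake --cores 10 -R AUCell_direct --until AUCell_direct
snakemake --cores 10 -R AUCell_extended --until AUCell_extended
snakemake --cores 10 --until scplus_mudata
```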
I’m not entirely sure why this worked, but perhaps someone else will find it useful as well!
Best, Sam
Hi
I keep having issues with the resources. I was stuck at the region-to-gene step with 250k cells, then I subset the data to 160k cells and was able to proceed up to the eGRN analysis. However, I'm now stuck at the AUCell step. I don't have any error logs to share, since the kernel is shut down as soon as the memory limit is hit. Our facility allows a maximum of 1 TB of RAM and 70 cores per request. I kept using 1 TB of RAM and tried different numbers of cores, but changing the number of cores didn't help.
What else can I do to make it work?
Thanks!
Log message below: