Hello,
I am trying to run the dummy dataset from the pySCENIC tutorial on a cluster using a SLURM script. Even after requesting 180G of memory, the job still fails with a memory-exhaustion error. When I submitted the job without a SLURM script, I ran into a memory-overconsumption issue instead.
Could you please suggest an alternative, or let me know how much memory this run actually needs? This is the sample data provided in the tutorial.
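For reference, here is a minimal sketch of the kind of SLURM script I am submitting. The module name, environment name, and input file paths are placeholders for my actual setup; the --num_workers flag limits the number of dask workers the GRN step spawns.

```shell
#!/bin/bash
#SBATCH --job-name=pyscenic-grn
#SBATCH --mem=180G
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00

# Placeholder module/environment names for my cluster setup
module load python/3.8
source activate pyscenic_env

# GRN inference step of the pySCENIC workflow;
# --num_workers caps how many dask workers run in parallel
pyscenic grn expr_mat.loom allTFs_hg38.txt \
    -o adjacencies.csv \
    --num_workers 8
```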
Here is a glimpse of the error generated:
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
I would be grateful for your help. Thanks.