Just a follow-up, in case this provides more info: I tried changing the label density from 15 to 9 and the minimum labels from 8 to 3, but I still got the same error.
Thanks @olechnwin for posting this. I found a bug in the script.
1352 if [[ -z "$sf" ]]; then
1353     if [[ "$plt" == "irys" ]]; then
1354         sf=0.15
1355     else
1356         len=0.12
1357     fi
1358 fi
len=0.12 should be sf=0.12. The bug has been fixed in the repo, so please update your clone and give it another try.
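For anyone patching a local copy by hand before pulling, the corrected block should read as follows (same script line numbers as in the excerpt above; only the assignment on line 1356 changes):
1352 if [[ -z "$sf" ]]; then
1353     if [[ "$plt" == "irys" ]]; then
1354         sf=0.15
1355     else
1356         sf=0.12
1357     fi
1358 fi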
@yyx8671,
Thank you for that quick fix. Looks like it's currently running fine.
@yyx8671,
My runBNG denovo run got an out-of-memory error. I'm trying to figure out how much memory I should request, and I'm hoping you can provide some insight. These are the settings I have:
## The number of threads is: 18
## Large jobs maximum memory (GB) is: 128
## Small jobs maximum memory (GB) is: 8
## The number of threads for each subjob is: 2
From our job statistics output:
refineB1: waited 27000.00 seconds for job completion: UnsubmittedJobs= 514, ActiveJobs= 9, FinishedJobs= 8
slurmstepd: error: Detected 1 oom-kill event(s) in step 355938.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
Job Statistics for 355938:
JobID User Start End Elapsed MaxRSS TotalCPU State Exit NodeList ReqTRES
---------------- ---------- ------------------- ------------------- ---------- ---------- ---------- ---------- ---- --------------- ----------------------------------------
355938 xxt050 2021-04-09T08:28:09 2021-04-10T20:28:35 1-12:00:26 27-04:01:+ TIMEOUT 0:0 r1pl-hpcf-n17 billing=20,cpu=20,mem=156240M,node=1
355938.batch 2021-04-09T08:28:09 2021-04-10T20:29:05 1-12:00:56 142602.17M 27-04:01:+ OUT_OF_ME+ 0:1+ r1pl-hpcf-n17
355938.extern 2021-04-09T08:28:09 2021-04-10T20:28:35 1-12:00:26 0 00:00.001 COMPLETED 0:0 r1pl-hpcf-n17
CPU Efficiency: 90.54% of 30-00:08:40 core-walltime
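(Side note: peak-memory figures like the MaxRSS column above can be pulled directly from SLURM accounting; a minimal sketch, assuming sacct and job accounting are enabled on the cluster:)
# Report peak resident memory (MaxRSS), elapsed time, and exit state
# for job 355938 and each of its steps.
sacct -j 355938 --format=JobID,MaxRSS,Elapsed,State,ExitCode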
Can you please explain, given the large-job and small-job settings, how much memory is needed with these thread settings? For example, with threads = 18, maximum memory for large jobs = 128 GB, maximum memory for small jobs = 8 GB, and threads per subjob = 2, does that mean it requires a total of 18 × 128 GB + 2 × 8 GB of memory? Thanks!
Hi @olechnwin,
This depends on your data, for instance the coverage your data have. You may ask for 196 GB of RAM in your SLURM script if your server can provide it.
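(A minimal sketch of such a request, assuming a typical SLURM setup; the CPU count and walltime below are placeholders to adapt to your cluster and data:)
#!/bin/bash
#SBATCH --job-name=runBNG_denovo
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=18   # matches the "number of threads is: 18" setting above
#SBATCH --mem=196G           # whole-job memory request suggested here
#SBATCH --time=72:00:00      # placeholder walltime; adjust to your data

# runBNG denovo command goes here, with the same options you already use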
Hi @yyx8671,
Thanks so much for your quick reply. Can I ask where the 196 GB comes from? The reason I'm asking is that my server has a total of 32 CPUs and ~250 GB of memory. I was requesting 20 CPUs, which by default gets me ~156 GB of memory.
Hi @olechnwin,
With the default settings in the Bionano configuration file, the maximum RAM required is 248 GB. I suggested a rough number (196 GB) for your consideration because, in my previous tests, 196 GB was sufficient for my tasks.
Most of the steps use little memory and only some steps need a large amount. If you cannot request a large amount of memory on your server, you may reduce the number of CPUs and the total number of parallel subjobs, but this will take more walltime to complete your task.
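(For example, a smaller request along those lines; the numbers are only illustrative, and the matching thread settings in the runBNG configuration would need to be lowered to the same CPU count:)
#SBATCH --cpus-per-task=8    # fewer threads, so fewer subjobs run in parallel
#SBATCH --mem=128G           # lower concurrent memory footprint with fewer parallel subjobs
#SBATCH --time=120:00:00     # expect a longer walltime with fewer CPUs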
Hi @yyx8671,
I see. Thanks so much for your explanation. That helps.
Hi, I decided to create a new post with the same title since I thought it would be easier. Please let me know if you want me to merge this thread with the other one.
So, I ran runBNG denovo and got the "no align files" error. I'm copying the command I ran and the outputs, and I've also copied part of "exp_pipelineReport_05.txt" below. Please let me know if you need to see anything else. Thank you again for your help.
This is the content of "exp_pipelineReport_05.txt":