wangpenhok opened this issue 1 year ago
Hi @wangpenhok !
I suspect you have an indentation issue here: you have 4 spaces instead of 2 after resources, so your specifications have not been parsed.
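For reference, a minimal sketch of a resources block with two-space indentation at each level — the program names and values here are illustrative, not taken from your configuration:

```yaml
# Hypothetical example: each nesting level is indented by exactly 2 spaces.
resources:
  default:
    cores: 8
    memory: 4G
  bwa:
    cores: 8
    memory: 4G
```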
For a one-node, non-distributed run, bcbio's logic for allocating resources with -n 64 is:
After these calculations, bcbio uses: 32 cores each with 192.1g
When bcbio runs a pipe, it accounts for the fact that every command in the pipe consumes RAM, so it has to decrease the core count to fit into the available RAM. That is what happened in this command:
bwa mem -t 32 | bamsormadup threads=24
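The core reduction above can be sketched roughly as follows. This is only an illustration of the idea, not bcbio's actual code; the function name and all numbers are hypothetical:

```python
def cores_fitting_ram(total_ram_gb, ram_per_core_gb, pipe_stages, requested_cores):
    """Reduce the core count so every concurrently running pipe stage
    still fits in total RAM (illustrative model, not bcbio's real logic)."""
    # All stages of a pipe run at the same time, so the effective
    # per-core memory demand is multiplied by the number of stages.
    fits = int(total_ram_gb // (ram_per_core_gb * pipe_stages))
    return max(1, min(requested_cores, fits))

# With made-up numbers: 192 GB RAM, 4 GB per core, a 2-stage pipe,
# and 32 requested cores, the downstream stage gets fewer threads.
print(cores_fitting_ram(192, 4, 2, 32))  # -> 24
```

The point is simply that a pipe multiplies memory pressure, so the thread count of one stage can end up lower than the -t value passed to another.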
Still, these values are very high for this server, and memory is also consumed by IO buffers. Try running bcbio with -n 7 or -n 10, at most -n 20.
Large core counts (-n) only make sense in distributed bcbio runs, where the cores are requested across many servers.
SN
Version info

bcbio_nextgen.py --version: 1.2.9
lsb_release -ds: Ubuntu 20.04.5 LTS

To Reproduce

Exact bcbio command you have used:
Your yaml configuration file:
Supposedly, when I set the number of all available cores with
-n 64
and the setup in my yaml file shown above, each job would occupy only 8 cores to perform bwa mem. However, when I checked the log files, both the debug log and the command log showed that the resources were not deployed as I wished. Besides, the pipeline repeatedly threw an error indicating "Segmentation fault (core dumped)", as shown below. I have no idea how this happened or what I should do to fix it; could you please help me with this problem? Thanks~

Log files (can be found in work/log)
debug-log
command-log
Segmentation fault error