Open rrlove-cdc opened 1 year ago
@rrlove-cdc Did you solve this problem? I ran into the same bug in NFCORE_SAREK:SAREK:FASTQC. I would be grateful if you could share any information. Thanks.
[nf-core/sarek] Pipeline completed with errors
ERROR ~ Error executing process > 'NFCORE_SAREK:SAREK:FASTQC (test-test_L2)'
Caused by:
Process `NFCORE_SAREK:SAREK:FASTQC (test-test_L2)` terminated with an error exit status (1)
Command executed:
printf "%s %s\n" test_1.fastq.gz test-test_L2_1.gz test_2.fastq.gz test-test_L2_2.gz | while read old_name new_name; do
    [ -f "${new_name}" ] || ln -s $old_name $new_name
done

fastqc \
    --quiet \
    --threads 2 \
    --memory 6656 \
    test-test_L2_1.gz test-test_L2_2.gz

cat <<-END_VERSIONS > versions.yml
"NFCORE_SAREK:SAREK:FASTQC":
    fastqc: $( fastqc --version | sed '/FastQC v/!d; s/.*v//' )
END_VERSIONS
Command exit status:
1
Command output:
Error occurred during initialization of VM
Could not reserve enough space for 13631488KB object heap
Command wrapper:
Error occurred during initialization of VM
Could not reserve enough space for 13631488KB object heap
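For context, the heap size in that message appears to come straight from the FastQC invocation above. Assuming FastQC sizes its JVM heap as --memory (per-thread MB) multiplied by --threads (my assumption, not verified against FastQC's source), the arithmetic reproduces the failing figure exactly:

```shell
# Assumption: FastQC's JVM heap = --memory (per-thread MB) x --threads.
memory_per_thread_mb=6656   # from the failing command's --memory flag
threads=2                   # from --threads
echo $(( memory_per_thread_mb * threads * 1024 ))  # prints 13631488, the KB figure in the error
```

If that assumption holds, the JVM is asking the scheduler for roughly 13 GB of contiguous heap, which the SGE job's memory limit evidently cannot accommodate.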
@Truongphikt, I solved this problem by using an institution-specific config file provided by our HPC admins. I don't know which line(s) in the file solved the problem, but similar to what I listed above, there is a line that defines Java memory usage with scope env (e.g. NXF_OPTS="-Xms256m -Xmx4g") and a section with scope process that requests extra memory for the Java overhead (e.g. clusterOptions = { "-l h_vmem=${check_max(task.memory.toGiga() + 10, 'memory').toString().replaceAll(/[\sB]/,'')}G" }). Those seem the most likely candidates to me.
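Pulled together, the two lines quoted above would look something like this in a Nextflow config file. This is an untested sketch: the SGE h_vmem syntax and the flat 10 GB overhead are taken from the quote, and check_max (an nf-core helper defined in the pipeline's own config) is omitted for simplicity:

```groovy
// Sketch of an institution-level Nextflow config (untested).
// A modest heap for the Nextflow runner itself:
env {
    NXF_OPTS = '-Xms256m -Xmx4g'
}
// Extra cluster memory per task to cover JVM overhead (SGE syntax assumed):
process {
    clusterOptions = { "-l h_vmem=${task.memory.toGiga() + 10}G" }
}
```

The key idea is that the scheduler allocation must exceed the task's nominal memory, because the JVM reserves heap plus its own overhead on top of task.memory.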
Description of the bug
The sarek pipeline repeatedly errors out at the GATK MarkDuplicates step with the message:
"Error occurred during initialization of VM Could not reserve enough space for 50331648KB object heap"
Sometimes an error message for "FASTQ_ALIGN_BWAMEM_MEM2_DRAGMAP_SENTIEON:BWAMEM1_MEM" also appears.
With help from our HPC support team, I have tried:
In all cases, GATK still tries to request a larger heap size, so the custom settings don't seem to be getting passed to the program. A member of the HPC support team suggested the issue might be solvable by adding 'ext.args = "-Xmx8g"' in the module-level config files rather than the user-level config file.
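That module-level suggestion might look like the following sketch in a custom config passed with -c. The withName pattern is a guess on my part, and whether ext.args ever reaches the JVM flags depends on the module's command template; nf-core GATK4 modules normally derive -Xmx from task.memory themselves, so raising memory may be the more reliable knob:

```groovy
// Untested sketch of the HPC team's suggestion (process name pattern is hypothetical).
process {
    withName: '.*:GATK4_MARKDUPLICATES' {
        ext.args = '-Xmx8g'
        // Alternative: raise task.memory and let the module compute its own -Xmx from it.
        memory   = 64.GB
    }
}
```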
Command used and terminal output
Relevant files
nextflow.log.zip sarek_test.zip
System information
Nextflow version: 23.04.1
Pipeline: nf-core/sarek v3.3.2-gf034b73
Executor: HPC + Singularity on SGE
OS: CentOS