bahlolab / PLASTER

Nextflow pipeline for long amplicon typing of PacBio SMRT sequencing data
MIT License

Java Runtime Environment Native memory allocation error #18

Closed: satheadwait closed this issue 2 years ago

satheadwait commented 2 years ago

Hi,

Once again, thank you for your previous help with the single-sample issue. I am still encountering the following Java memory allocation error during the typing step. Could you please help me resolve it?

Command:

```
nextflow run bahlolab/PLASTER -profile typing,singularity -c singlesample.typing.PLASTER_TEST.config -resume
```

Error:

```
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 65536 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
#   JVM is running with Zero Based Compressed Oops mode in which the Java heap is
#     placed in the first 32GB address space. The Java Heap base address is the
#     maximum limit for the native heap growth. Please use -XX:HeapBaseMinAddress
#     to set the Java Heap base and to place the Java Heap above 32GB virtual address.
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2795), pid=151466, tid=0x00002b1189cb5700
#
# JRE version: OpenJDK Runtime Environment (8.0_322-b06) (build 1.8.0_322-b06)
# Java VM: OpenJDK 64-Bit Server VM (25.322-b06 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
```

Regards, Adwait

jemunro commented 2 years ago

Hi Adwait,

Is there enough free system memory for you to run the pipeline?

Can you share the contents of your config file singlesample.typing.PLASTER_TEST.config?
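
In the meantime, one thing worth checking (an assumption on my part, not something PLASTER documents): if the JVM that is failing is the Nextflow driver itself rather than a pipeline task, its heap can be capped with the standard NXF_OPTS environment variable before launching. The values below are illustrative:

```shell
# Cap the Nextflow driver JVM heap before running the pipeline
# (-Xms/-Xmx values here are illustrative, not PLASTER defaults)
export NXF_OPTS='-Xms512m -Xmx2g'
echo "$NXF_OPTS"
```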

satheadwait commented 2 years ago

I am running it on a supercomputer, TACC Stampede2 (https://portal.tacc.utexas.edu/user-guides/stampede2#system-overview), using the normal queue. It seems there should be enough memory.

singlesample.typing.PLASTER_TEST.config:

```
params {
    manifest       = '/scratch/01775/saathe/PACBIO/DHCHL/PLASTER_TEST/output/sample_amplicon_bam_manifest.csv'
    amplicons_json = '/scratch/01775/saathe/PACBIO/DHCHL/PLASTER_TEST/amplicons.json'
    ref_fasta      = 'http://ftp.ensembl.org/pub/release-106/fasta/mus_musculus/dna/Mus_musculus.GRCm39.dna.chromosome.5.fa.gz'
}
```

Regards, Adwait

jemunro commented 2 years ago

Hi Adwait,

Looks like your cluster is running SLURM. I'm guessing the memory on your login node might be constrained. I recommend using the slurm profile so that jobs are distributed across your SLURM queue:

```
nextflow run bahlolab/PLASTER -profile typing,singularity,slurm -c singlesample.typing.PLASTER_TEST.config -resume
```
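
If individual tasks still exhaust memory after switching to the slurm profile, per-task resource requests can usually be overridden in a custom Nextflow config. A minimal sketch only; the queue name and memory value below are illustrative and should be adjusted to your site:

```groovy
// Hypothetical additions to singlesample.typing.PLASTER_TEST.config
process {
    executor = 'slurm'
    queue    = 'normal'  // Stampede2 queue mentioned above
    memory   = '16 GB'   // explicit per-task memory request passed to SLURM
}
```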