soedinglab / plass

sensitive and precise assembly of short sequencing reads
https://plass.mmseqs.com
GNU General Public License v3.0

Assembling big data #14

Open lzaramela opened 4 years ago

lzaramela commented 4 years ago

Hey, I have a big dataset (>600M paired-end reads) and I am trying to generate a protein catalog using Plass. I am using version 2.c7e35 on a server with 900 GB of RAM. The run terminates before completion because it exceeds the requested resources. I am wondering if it is possible to tweak the parameters so that less memory is allocated. Any input would be greatly appreciated. Thanks, Livia

milot-mirdita commented 4 years ago

Hi Livia,

Could you please post the log of the run? Plass should split up the work so it always fits into the available memory.

Best regards, Milot

lzaramela commented 4 years ago

Sure... here is the log file: PLASS_West.txt

I got the following message:

```
Execution terminated
Exit_status=271
resources_used.cput=46:09:04
resources_used.mem=531170280kb
resources_used.vmem=832592604kb
resources_used.walltime=42:54:33
```

martin-steinegger commented 4 years ago

Thanks a lot! How much memory does your machine have? Normally Plass tries to split the database if it does not fit in memory.
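Plass is built on the MMseqs2 framework, so if your build passes through MMseqs2's `--split-memory-limit` parameter (an assumption; check `plass assemble --help` to confirm), you could cap the memory used per database split explicitly rather than relying on the automatic estimate. A hedged sketch with placeholder file names:

```shell
# Hypothetical sketch: cap per-split memory, assuming --split-memory-limit
# is exposed by this Plass version. Input/output paths are placeholders.
plass assemble reads_1.fastq.gz reads_2.fastq.gz assembly.fas tmp \
    --split-memory-limit 500G
```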

lzaramela commented 4 years ago

It is a CentOS server; I can use up to 900 GB of RAM.

martin-steinegger commented 4 years ago

So it seems that the extractorfs step is hanging, which is mostly I/O-bound. Is it possible that the tmp folder is on a slow network share?

One trick to reduce the number of sequences extracted is to increase the minimum ORF length with --min-length (default: 20).
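Both suggestions can be combined in a single re-run. A sketch with placeholder input/output paths (the `--min-length` value of 40 is just an illustrative choice, not a recommendation from the thread):

```shell
# Sketch of a re-run applying both suggestions; paths are placeholders.
# Point the tmp directory at fast local disk rather than a network share,
# and raise --min-length above its default of 20 so extractorfs writes
# out fewer short ORFs.
plass assemble reads_1.fastq.gz reads_2.fastq.gz assembly.fas /local/scratch/plass_tmp \
    --min-length 40
```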