Closed: gbdias closed this issue 5 years ago
If you can read Chinese, I wrote a tutorial on how to run Falcon locally on an E. coli dataset: https://www.jianshu.com/p/2872cc26c49a.
For my part, I have 96 CPUs and 512 GB of RAM, and my cfg is:
...
falcon_sense_option=--output-multi --min-idt 0.70 --min-cov 4 --max-n-read 200
...
overlap_filtering_setting=--max-diff 100 --max-cov 150 --min-cov 2
...
[job.defaults]
job_type=local
pwatcher_type=blocking
JOB_QUEUE=default
MB=32768
NPROC=6
njobs=40
submit = /bin/bash -c "${JOB_SCRIPT}" > "${JOB_STDOUT}" 2> "${JOB_STDERR}"
[job.step.da]
NPROC=4
MB=32768
njobs=20
[job.step.la]
NPROC=4
MB=16384
njobs=30
[job.step.cns]
NPROC=4
MB=65536
njobs=25
## 40 (njobs) x 4 (NPROC) would need more than 512 GB of memory
[job.step.pda]
NPROC=4
MB=32768
njobs=15
[job.step.pla]
NPROC=4
MB=16384
njobs=30
[job.step.asm]
NPROC=50
MB=196608
njobs=1
The number of threads used in the "Pre-assembly" stage is 25 (njobs) x 4 (NPROC) = 100.
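As a sanity check, the peak concurrency implied by each `[job.step.*]` section above is simply njobs x NPROC. A minimal sketch (stage names and values copied from the config above):

```python
# Peak thread usage per FALCON stage, from the [job.step.*] sections above:
# each stage runs up to `njobs` jobs at once, each job using NPROC threads.
stages = {
    "da":  (20, 4),   # (njobs, NPROC)
    "la":  (30, 4),
    "cns": (25, 4),
    "pda": (15, 4),
    "pla": (30, 4),
    "asm": (1, 50),
}

peak_threads = {name: njobs * nproc for name, (njobs, nproc) in stages.items()}

for name, threads in peak_threads.items():
    print(f"{name}: up to {threads} threads")
```

Note that the `la` and `pla` stages would peak at 30 x 4 = 120 threads, slightly oversubscribing a 96-CPU machine; that is usually tolerable for I/O-bound steps, but you can lower njobs if it becomes a problem.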
Hi @xuzhougeng,
Thanks for your post.
So, you do not provide the --n_core parameter for falcon_sense_option and overlap_filtering_setting?
@gbdias Yes. I found that this option is passed to falcon_sense_option through the NPROC setting in [job.step.cns], so you don't need to provide --n_core in falcon_sense_option.
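In other words (a sketch of the idea; the exact pass-through behavior may vary between releases), the consensus thread count comes from the step section itself rather than from an explicit --n_core:

```ini
[job.step.cns]
NPROC=4      # effectively supplies --n_core=4 to the consensus jobs
MB=65536
njobs=25
```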
Most of this is fixed in the current release. Please re-open if still a problem.
Hello,
I have read several threads on this, but I am still struggling with resource allocation in the FALCON config file for PB-assembly.
First, how do the number of jobs and processors in the [job.defaults] section correlate with the --n_core parameter in falcon_sense_option and overlap_filtering_setting?
Second, what happens if I do not set a memory limit per processor using the MB = parameter?
In general, I want to maximize resource usage in local mode to assemble a fly genome (180 Mb). I have a 48-core machine with ~500 GB of RAM.
Below is my latest config file; any help is welcome.