Please avoid pwatcher_type=fs_based. It's very difficult to maintain; pwatcher_type=blocking is much simpler.
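For reference, a minimal sketch of the relevant cfg settings (the section names [General] and [job.defaults] are assumptions based on this FALCON release's layout; the submit string is the one quoted later in this thread, and its -sync y flag makes qsub block until the job finishes, which is what the blocking watcher expects):

    [General]
    # simpler to maintain than fs_based
    pwatcher_type = blocking

    [job.defaults]
    job_type = SGE
    JOB_QUEUE = hugemem.q
    # -sync y: qsub waits for job completion, as the blocking watcher requires
    submit = qsub -S /bin/bash -sync y -V
        -q ${JOB_QUEUE}
        -N ${JOB_NAME}
        -o "${JOB_STDOUT}"
        -e "${JOB_STDERR}"
        -pe threads ${NPROC}
        -l virtual_free=${MB}M,h_vmem=${MB}M
        "${JOB_SCRIPT}"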
Yet the pipeline only submits one job at a time, waiting for the previous one to finish.
This was fixed July 31. What version of FALCON are you using? It should be printed at the start, and also at the start of stderr for each task. (Look in the task directories.) You should see lines like this:
[INFO]Setting max_jobs to 1; was 8
Whenever it gets set to 1, the following section of job submissions will be sequential. In your case, you should see 40 before the variantCaller jobs are submitted. If you don't, you have a software-integration problem. (This is unrelated to the pwatcher_type.)
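For example, with njobs=40 configured for the quiver step, the corresponding line should read something like the following (the "was" value here is hypothetical; it depends on the prior setting):

    [INFO]Setting max_jobs to 40; was 1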
For now, NPROC is hard-coded at 24 for Arrow. We'll fix that soon. But njobs should work; it works for me.
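So a quiver section along these lines (the same values quoted later in this thread) should run up to 40 concurrent jobs, with NPROC kept at 24 to match the hard-coded Arrow value:

    [job.step.unzip.quiver]
    # maximum number of concurrent quiver/arrow jobs
    njobs=40
    # keep at 24 for now, since Arrow's NPROC is hard-coded
    NPROC=24
    # memory reservation per job, in MB
    MB=20000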
Thanks. It seems I found the issue, I think: other people hogging our hugemem queue, with only one node left :). I was running the latest 0808 falcon. Should have checked.
Hi there, I am running the quiver step with the latest version of falcon, falcon-2018.08.08-21.41-py2.7-ucs4-beta.tar.gz.
In the cfg file I set the quiver section as follows:

    [job.step.unzip.quiver]
    njobs=40
    NPROC=24
    MB=20000
It appears that this is parsed properly, as I can see the following when running fc_unzip.py:
    options={'JOB_QUEUE': 'hugemem.q', 'job_queue': 'hugemem.q', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'MB': '20000', 'job_type': 'SGE', 'submit': 'qsub -S /bin/bash -sync y -V \\n-q ${JOB_QUEUE} \\n-N ${JOB_NAME} \\n-o "${JOB_STDOUT}" \\n-e "${JOB_STDERR}" \\n-pe threads ${NPROC} \\n-l virtual_free=${MB}M,h_vmem=${MB}M \\n"${JOB_SCRIPT}"', 'NPROC': '24', 'njobs': '40'}
Yet the pipeline only submits one job at a time, waiting for the previous one to finish. Any pointers on how to fix this?