Closed: a-velt closed this issue 6 years ago.
Are you running this locally?
What does top show? Is there a process running and/or consuming memory?
Yes, I run this locally, but I reserve an entire compute node for the analysis.
In the configuration file I am careful not to use more than 32 CPUs, but apparently 36 are used, and the node has only 32, which may be part of the problem.
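For reference, this is how I check what the node actually provides (standard Linux commands, nothing FALCON-specific):

nproc                                # logical CPUs visible on the node
grep -c '^processor' /proc/cpuinfo   # same count, read from /proc
free -g                              # total memory in GiB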
I reduced the number of CPUs to be used and ran FALCON. See the log:
############### RUN OF UNZIP #############
fc_unzip.py fc_unzip.cfg
falcon-unzip 1.1.2
falcon-kit 1.2.2
pypeflow 2.0.4
[INFO]Setup logging from file "None".
[INFO]Using config=
{'General': {'max_n_open_files': '900000'},
'Unzip': {'input_bam_fofn': 'input_bam.fofn', 'input_fofn': 'input.fofn'},
'job.defaults': {'NPROC': '3',
'job_type': 'string',
'njobs': '6',
'pwatcher_type': 'blocking',
'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}',
'use_tmpdir': False},
'job.step.unzip.blasr_aln': {'NPROC': '10', 'njobs': '2'},
'job.step.unzip.hasm': {'NPROC': '20', 'njobs': '1'},
'job.step.unzip.phasing': {'NPROC': '2', 'njobs': '10'},
'job.step.unzip.quiver': {'NPROC': '10', 'njobs': '2'},
'job.step.unzip.track_reads': {'NPROC': '20', 'njobs': '1'},
'max_n_open_files': '900000'}
[INFO]PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin
[INFO]$('which which')
/usr/bin/which
[INFO]$('which blasr')
/cm/shared/apps/FALCON_12_09_2018/bin/blasr
[INFO]$('which samtools')
/cm/shared/apps/FALCON_12_09_2018/bin/samtools
[INFO]$('which pbalign')
/cm/shared/apps/FALCON_12_09_2018/bin/pbalign
[INFO]$('which variantCaller')
/cm/shared/apps/FALCON_12_09_2018/bin/variantCaller
[INFO]$('which minimap2')
/cm/shared/apps/FALCON_12_09_2018/bin/minimap2
[INFO]$('which nucmer')
/cm/shared/apps/FALCON_12_09_2018/bin/nucmer
[INFO]$('which show-coords')
/cm/shared/apps/FALCON_12_09_2018/bin/show-coords
[INFO]$('which fc_rr_hctg_track2.exe')
/cm/shared/apps/FALCON_12_09_2018/bin/fc_rr_hctg_track2.exe
[INFO]$('nucmer --version')
nucmer
NUCmer (NUCleotide MUMmer) version 3.1
[INFO]$('minimap2 --version')
2.12-r827
[INFO]$ show-coords -h >
[INFO]$ samtools >
[INFO]samtools ['1', '9'] is >= 1.3
[WARNING]CD: '0-rawreads' <- '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '0-rawreads' -> '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '1-preads_ovl' <- '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '1-preads_ovl' -> '/data2/avelt/Assembly_amurensis'
[INFO]Falcon directories up-to-date.
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.blocking' from '/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/blocking.pyc'>
[INFO]job_type='string', (default)job_defaults={'pwatcher_type': 'blocking', 'use_tmpdir': False, 'job_type': 'string', 'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}', 'NPROC': '3', 'njobs': '6'}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 6; was None
[INFO]config=
{'Unzip': {'input_bam_fofn': 'input_bam.fofn', 'input_fofn': 'input.fofn'}, 'job.step.unzip.blasr_aln': {'njobs': '2', 'NPROC': '10'}, 'max_n_open_files': '900000', 'job.step.unzip.hasm': {'njobs': '1', 'NPROC': '20'}, 'General': {'max_n_open_files': '900000'}, 'job.step.unzip.track_reads': {'njobs': '1', 'NPROC': '20'}, 'job.step.unzip.phasing': {'njobs': '10', 'NPROC': '2'}, 'job.step.unzip.quiver': {'njobs': '2', 'NPROC': '10'}, 'job.defaults': {'pwatcher_type': 'blocking', 'use_tmpdir': False, 'job_type': 'string', 'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}', 'NPROC': '3', 'njobs': '6'}}
[INFO]Num unsatisfied: 5, graph: 5
[INFO]About to submit: Node(3-unzip/reads/dump_rawread_ids)
[INFO]About to submit: Node(3-unzip/reads/dump_pread_ids)
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_rawread_ids/run-P353f203c57debb.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_rawread_ids/run-P353f203c57debb.bash.stderr'
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_pread_ids/run-Pf37e6d7a5ecef7.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_pread_ids/run-Pf37e6d7a5ecef7.bash.stderr'
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.3s -- another 2 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/dump_pread_ids)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 4
[INFO](slept for another 0.4s -- another 3 loop iterations)
[INFO](slept for another 1.4s -- another 4 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/dump_rawread_ids)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 3
[INFO]About to submit: Node(3-unzip/reads/get_read_ctg_map)
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/get_read_ctg_map/run-Pcec393c558f9bf.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/get_read_ctg_map/run-Pcec393c558f9bf.bash.stderr'
[INFO](slept for another 2.2s -- another 5 loop iterations)
[INFO](slept for another 2.7s -- another 6 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/get_read_ctg_map)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 2
[INFO]About to submit: Node(3-unzip/reads)
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/run-Pa39f70dc249148.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/run-Pa39f70dc249148.bash.stderr'
[INFO](slept for another 3.3s -- another 7 loop iterations)
[INFO](slept for another 6.0s -- another 8 loop iterations)
[INFO](slept for another 14.4s -- another 9 loop iterations)
[INFO](slept for another 25.5s -- another 10 loop iterations)
[INFO](slept for another 39.6s -- another 11 loop iterations)
[INFO](slept for another 57.0s -- another 12 loop iterations)
[INFO](slept for another 78.0s -- another 13 loop iterations)
[INFO](slept for another 102.9s -- another 14 loop iterations)
[INFO](slept for another 132.0s -- another 15 loop iterations)
I set the maximum number of CPUs to use to 20, but when I look at htop (screenshot omitted), it seems that more than 20 CPUs are in use, which I think is what creates the iterations we see in the log.
I reduced the number of CPUs, but falcon_unzip still doesn't perform. This is a real problem for me because I have received new PacBio data. The version of FALCON I had didn't work on it (or no longer works) and I have to deliver results. Do you have any idea what the problem is? I have to run FALCON locally because we don't have SGE, only Slurm. And I run FALCON on a node that I reserve entirely for this analysis, with 32 CPUs and 386 GB of RAM. I don't understand the problem. Thanks a lot for your help.
Here is my configuration file for falcon-unzip:
[General]
max_n_open_files = 900000

[Unzip]
input_fofn = input.fofn
input_bam_fofn = input_bam.fofn

[job.defaults]
NPROC = 2
njobs = 5
job_type = local
pwatcher_type = blocking
job_type = string
submit = bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}
njobs = 6
NPROC = 2

[job.step.unzip.track_reads]
njobs = 1
NPROC = 15

[job.step.unzip.blasr_aln]
njobs = 2
NPROC = 5

[job.step.unzip.phasing]
njobs = 5
NPROC = 2

[job.step.unzip.hasm]
njobs = 1
NPROC = 15

[job.step.unzip.quiver]
njobs = 2
NPROC = 5
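For reference, my back-of-the-envelope check of these numbers, assuming each step's worst case is njobs * NPROC (my assumption; FALCON does not report this itself):

# worst-case concurrent CPUs per step = njobs * NPROC
echo "track_reads: $((1 * 15)) CPUs"
echo "blasr_aln:   $((2 * 5)) CPUs"
echo "phasing:     $((5 * 2)) CPUs"
echo "hasm:        $((1 * 15)) CPUs"
echo "quiver:      $((2 * 5)) CPUs"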
And always the same "loop iterations" problem:
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/phasing-chunks/001571F/run-P4709c2318ac341.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/phasing-chunks/001571F/run-P4709c2318ac341.bash.stderr'
[INFO]recently_satisfied:
set([Node(3-unzip/0-phasing/phasing-chunks/001571F)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 5246
[INFO]About to submit: Node(3-unzip/0-phasing/000632F)
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000632F/run-P47db72a92e596c.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000632F/run-P47db72a92e596c.bash.stderr'
[INFO](slept for another 191.1s -- another 71 loop iterations)
[INFO](slept for another 646.0s -- another 72 loop iterations)
[INFO](slept for another 730.0s -- another 73 loop iterations)
[INFO](slept for another 740.0s -- another 74 loop iterations)
[INFO](slept for another 750.0s -- another 75 loop iterations)
[INFO](slept for another 760.0s -- another 76 loop iterations)
[INFO](slept for another 770.0s -- another 77 loop iterations)
[INFO](slept for another 780.0s -- another 78 loop iterations)
[INFO](slept for another 790.0s -- another 79 loop iterations)
[INFO](slept for another 800.0s -- another 80 loop iterations)
[INFO](slept for another 810.0s -- another 81 loop iterations)
[INFO](slept for another 820.0s -- another 82 loop iterations)
[INFO](slept for another 830.0s -- another 83 loop iterations)
[INFO](slept for another 840.0s -- another 84 loop iterations)
[INFO](slept for another 850.0s -- another 85 loop iterations)
[INFO](slept for another 860.0s -- another 86 loop iterations)
[INFO](slept for another 870.0s -- another 87 loop iterations)
[INFO](slept for another 880.0s -- another 88 loop iterations)
[INFO](slept for another 890.0s -- another 89 loop iterations)
[INFO](slept for another 900.0s -- another 90 loop iterations)
[INFO](slept for another 910.0s -- another 91 loop iterations)
[INFO](slept for another 920.0s -- another 92 loop iterations)
[INFO](slept for another 930.0s -- another 93 loop iterations)
[INFO](slept for another 940.0s -- another 94 loop iterations)
[INFO](slept for another 950.0s -- another 95 loop iterations)
[INFO](slept for another 960.0s -- another 96 loop iterations)
[INFO](slept for another 970.0s -- another 97 loop iterations)
[INFO](slept for another 980.0s -- another 98 loop iterations)
[INFO](slept for another 990.0s -- another 99 loop iterations)
[INFO](slept for another 1000.0s -- another 100 loop iterations)
[INFO](slept for another 1010.0s -- another 101 loop iterations)
[INFO](slept for another 1020.0s -- another 102 loop iterations)
[INFO](slept for another 1030.0s -- another 103 loop iterations)
[INFO](slept for another 1040.0s -- another 104 loop iterations)
[INFO](slept for another 1050.0s -- another 105 loop iterations)
[INFO](slept for another 1060.0s -- another 106 loop iterations)
[INFO](slept for another 1070.0s -- another 107 loop iterations)
[INFO](slept for another 1080.0s -- another 108 loop iterations)
[INFO](slept for another 1090.0s -- another 109 loop iterations)
[INFO](slept for another 1100.0s -- another 110 loop iterations)
[INFO](slept for another 1110.0s -- another 111 loop iterations)
[INFO](slept for another 1120.0s -- another 112 loop iterations)
[INFO](slept for another 1130.0s -- another 113 loop iterations)
[INFO](slept for another 1140.0s -- another 114 loop iterations)
[INFO](slept for another 1150.0s -- another 115 loop iterations)
[INFO](slept for another 1160.0s -- another 116 loop iterations)
[INFO](slept for another 1170.0s -- another 117 loop iterations)
[INFO](slept for another 1180.0s -- another 118 loop iterations)
[INFO](slept for another 1190.0s -- another 119 loop iterations)
[INFO](slept for another 1200.0s -- another 120 loop iterations)
[INFO](slept for another 1210.0s -- another 121 loop iterations)
[INFO](slept for another 1220.0s -- another 122 loop iterations)
[INFO](slept for another 1230.0s -- another 123 loop iterations)
[INFO](slept for another 1240.0s -- another 124 loop iterations)
[INFO](slept for another 1250.0s -- another 125 loop iterations)
[INFO](slept for another 1260.0s -- another 126 loop iterations)
[INFO](slept for another 1270.0s -- another 127 loop iterations)
[INFO](slept for another 1280.0s -- another 128 loop iterations)
[INFO](slept for another 1290.0s -- another 129 loop iterations)
[INFO](slept for another 1300.0s -- another 130 loop iterations)
[INFO](slept for another 1310.0s -- another 131 loop iterations)
[INFO](slept for another 1320.0s -- another 132 loop iterations)
[INFO](slept for another 1330.0s -- another 133 loop iterations)
[INFO](slept for another 1340.0s -- another 134 loop iterations)
[INFO](slept for another 1350.0s -- another 135 loop iterations)
[INFO](slept for another 1360.0s -- another 136 loop iterations)
[INFO](slept for another 1370.0s -- another 137 loop iterations)
[INFO](slept for another 1380.0s -- another 138 loop iterations)
[INFO](slept for another 1390.0s -- another 139 loop iterations)
[INFO](slept for another 1400.0s -- another 140 loop iterations)
[INFO](slept for another 1410.0s -- another 141 loop iterations)
[INFO](slept for another 1420.0s -- another 142 loop iterations)
[INFO](slept for another 1430.0s -- another 143 loop iterations)
[INFO](slept for another 1440.0s -- another 144 loop iterations)
[INFO](slept for another 1450.0s -- another 145 loop iterations)
[INFO](slept for another 1460.0s -- another 146 loop iterations)
[INFO](slept for another 1470.0s -- another 147 loop iterations)
[INFO](slept for another 1480.0s -- another 148 loop iterations)
[INFO](slept for another 1490.0s -- another 149 loop iterations)
[INFO](slept for another 1500.0s -- another 150 loop iterations)
[INFO](slept for another 1510.0s -- another 151 loop iterations)
> I set the maximum number of CPUs to use to 20, but when I look at htop (screenshot omitted), it seems that more than 20 CPUs are in use, which I think is what creates the iterations we see in the log.
I don't see from the htop output that 20 CPUs are being used. What I'm really interested in is which processes are running and the MEM & CPU they are consuming - from the top output, not just the header from htop.
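For example, something like this (plain procps commands; adjust to taste):

top -b -n 1 > top.txt               # one batch-mode snapshot, including the full process list
ps aux --sort=-%cpu | head -n 20    # biggest CPU consumers
ps aux --sort=-rss | head -n 20     # biggest memory consumers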
> I reduced the number of CPUs, but falcon_unzip still doesn't perform. This is a real problem for me because I have received new PacBio data. The version of FALCON I had didn't work on it (or no longer works) and I have to deliver results. Do you have any idea what the problem is? I have to run FALCON locally because we don't have SGE, only Slurm. And I run FALCON on a node that I reserve entirely for this analysis, with 32 CPUs and 386 GB of RAM. I don't understand the problem. Thanks a lot for your help.
If the process is actually alive but taking a while, reducing the CPUs won't make it go any faster. Why can't you run your FALCON with the Slurm scheduler? It should work fine. See the documentation for pb-assembly (https://github.com/PacificBiosciences/pb-assembly) and pypeFLOW (https://github.com/PacificBiosciences/pypeFLOW).
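For Slurm, the job.defaults section would look roughly like the example in the pb-assembly docs; this is a sketch, and variables like ${JOB_QUEUE}, ${JOB_NAME}, ${JOB_STDOUT}, ${JOB_STDERR}, ${MB}, ${NPROC}, and ${JOB_SCRIPT} are substituted by pypeFLOW, so double-check the docs for your version:

[job.defaults]
pwatcher_type = blocking
job_type = string
JOB_QUEUE = your_partition
MB = 4000
NPROC = 4
njobs = 8
submit = srun --wait=0 -p ${JOB_QUEUE} -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}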
I changed nodes to run the analysis and the "unzip step" worked. I think there is a problem with the node where I launched the analysis the first time. But no matter, it works :)
So "3-unzip" works well, with the files "all_p_ctg.fa" and "all_h_ctg.fa" generated, but quiver doesn't work. I have the impression that it hits an error at the end. Have you ever encountered this?
Thank you very much in advance.
Here is the 4-quiver folder (listing omitted). The last step that worked seems to be quiver-run.
There are 2639 folders in quiver-run, so some steps worked, but not all.
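This is how I counted which quiver tasks finished (quiver_done is the sentinel file I see in the task scripts; the grep pattern is my guess based on the error below):

ls -d 4-quiver/quiver-run/*/ | wc -l                 # total task folders (2639 here)
find 4-quiver/quiver-run -name quiver_done | wc -l   # tasks that ran to completion
grep -rl 'Could not build fai index' 4-quiver/quiver-run --include='*.stderr' | wc -l   # tasks failing like the one below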
Here is the log with the error:
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stdout 2>| /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stderr'
[ERROR]Task Node(4-quiver/quiver-run/000153Fp01_00001) failed with exit-code=1
[ERROR]Some tasks are recently_done but not satisfied: set([Node(4-quiver/quiver-run/000153Fp01_00001)])
[ERROR]ready: set([Node(4-quiver/quiver-run/001238Fp01_00002), Node(4-quiver/quiver-run/000889Fp01), Node(4-quiver/quiver-run/000200Fp01), Node(4-quiver/quiver-run/002254Fp02), Node(4-quiver/quiver-run/001827Fp01), Node(4-quiver/quiver-run/000342Fp01), Node(4-quiver/quiver-run/000474Fp01), Node(4-quiver/quiver-run/000078Fp01_00001), Node(4-quiver/quiver-run/001592Fp01_00001), Node(4-quiver/quiver-run/001250Fp01), Node(4-quiver/quiver-run/000682Fp01), Node(4-quiver/quiver-run/001979Fp01), Node(4-quiver/quiver-run/001033Fp01_00001), Node(4-quiver/quiver-run/000477Fp01), Node(4-quiver/quiver-run/000495Fp01), Node(4-quiver/quiver-run/002699Fp01), Node(4-quiver/quiver-run/000450Fp01_00001), Node(4-quiver/quiver-run/001441Fp01), Node(4-quiver/quiver-run/000209Fp01), Node(4-quiver/quiver-run/000584Fp01_00001), Node(4-quiver/qui
...................
ver-run/000645Fp01), Node(4-quiver/quiver-run/000637Fp01), Node(4-quiver/quiver-run/000408Fp01_00003), Node(4-quiver/quiver-run/000176Fp01_00001), Node(4-quiver/quiver-run/002065Fp01), Node(4-quiver/quiver-run/000405Fp01_00002), Node(4-quiver/quiver-run/000675Fp01_00001), Node(4-quiver/quiver-run/000735Fp01_00001), Node(4-quiver/quiver-run/002196Fp02), Node(4-quiver/quiver-run/000743Fp01), Node(4-quiver/quiver-run/000559Fp01_00001), Node(4-quiver/quiver-run/000317Fp01_00002)])
submitted: set([Node(4-quiver/quiver-run/000484Fp01_00001)])
[ERROR]Noop. We cannot kill blocked threads. Hopefully, everything will die on SIGTERM.
Traceback (most recent call last):
  File "/cm/shared/apps/FALCON_12_09_2018/bin//fc_quiver.py", line 11, in <module>
    load_entry_point('falcon-unzip==1.1.2', 'console_scripts', 'fc_quiver.py')()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/mains/start_unzip.py", line 29, in main
    unzip.run(**vars(args))
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/unzip.py", line 126, in run
    unzip_all(config)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/unzip.py", line 28, in unzip_all
    tasks_unzip.run_workflow(wf, config, rule_writer)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/tasks/unzip.py", line 708, in run_workflow
    job_dict=config['job.step.unzip.quiver'],
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/pype.py", line 192, in gen_parallel_tasks
    wf.refreshTargets()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 277, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 361, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: set([Node(4-quiver/quiver-run/000153Fp01_00001)])
make: *** [quiver] Error 1
Here is the "4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stderr" file:
executable=${PYPEFLOW_JOB_START_SCRIPT}
+ executable=/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash
timeout=${PYPEFLOW_JOB_START_TIMEOUT:-60} # wait 60s by default
+ timeout=60
# Wait up to timeout seconds for the executable to become "executable",
# then exec.
#timeleft = int(timeout)
while [[ ! -x "${executable}" ]]; do
if [[ "${timeout}" == "0" ]]; then
echo "timed out waiting for (${executable})"
exit 77
fi
echo "not executable: '${executable}', waiting ${timeout}s"
sleep 1
timeout=$((timeout-1))
done
+ [[ ! -x /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash ]]
/bin/bash ${executable}
+ /bin/bash /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash
+ '[' '!' -d /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001 ']'
+ cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
+ eval '/bin/bash run.sh'
++ /bin/bash run.sh
export PATH=$PATH:/bin
+ export PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin:/bin
+ PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin:/bin
cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
+ cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
/bin/bash task.sh
+ /bin/bash task.sh
pypeflow 2.0.4
2018-09-25 01:34:24,412 - root - DEBUG - Running "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json"
2018-09-25 01:34:24,414 - root - DEBUG - Checking existence of '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json' with timeout=30
2018-09-25 01:34:24,414 - root - DEBUG - Loading JSON from '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json'
2018-09-25 01:34:24,415 - root - DEBUG - {u'bash_template_fn': u'template.sh',
u'inputs': {u'bash_template': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh',
u'units_of_work': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json'},
u'outputs': {u'results': u'results.json'},
u'parameters': {u'pypeflow_mb': 4000, u'pypeflow_nproc': u'5'}}
2018-09-25 01:34:24,415 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,415 - root - DEBUG - Checking existence of u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json' with timeout=30
2018-09-25 01:34:24,416 - root - DEBUG - Checking existence of u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh' with timeout=30
2018-09-25 01:34:24,416 - root - DEBUG - Checking existence of u'template.sh' with timeout=30
2018-09-25 01:34:24,416 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,417 - root - INFO - $('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
# Substitution will be similar to snakemake "shell".
python -m falcon_kit.mains.generic_run_units_of_work --nproc=5 --units-of-work-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json --bash-template-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh --results-fn=results.json
+ python -m falcon_kit.mains.generic_run_units_of_work --nproc=5 --units-of-work-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json --bash-template-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh --results-fn=results.json
falcon-kit 1.2.2
pypeflow 2.0.4
INFO:root:INPUT:{u'ref_fasta': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa', u'read_bam': u'/data2/avelt/Assembly_amurensis/4-quiver/segregate-run/segr1905/segregated/000153Fp01_00001/000153Fp01_00001.bam', u'ctg_type': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ctg_type'}
INFO:root:OUTPUT:{u'cns_fasta': u'cns.fasta.gz', u'cns_vcf': u'cns.vcf', u'job_done': u'quiver_done', u'ctg_type_again': u'ctg_type', u'cns_fastq': u'cns.fastq.gz'}
INFO:root:PARAMS:{'pypeflow_nproc': '5', u'ctg_id': u'000153Fp01_00001'}
INFO:root:$('rm -rf uow-00')
WARNING:root:CD: 'uow-00' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
INFO:root:$('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
set -vex
+ set -vex
trap 'touch quiver_done.exit' EXIT
+ trap 'touch quiver_done.exit' EXIT
hostname
+ hostname
date
+ date
samtools faidx /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa
+ samtools faidx /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa
[faidx] Could not build fai index /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa.fai
touch quiver_done.exit
+ touch quiver_done.exit
WARNING:root:Call '/bin/bash user_script.sh' returned 256.
WARNING:root:CD: 'uow-00' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
Traceback (most recent call last):
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 115, in <module>
    main()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 111, in main
    run(**vars(args))
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 64, in run
    pypeflow.do_task.run_bash(script, inputs, outputs, params)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 178, in run_bash
    util.system(cmd)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/io.py", line 29, in syscall
    raise Exception(msg)
Exception: Call '/bin/bash user_script.sh' returned 256.
2018-09-25 01:34:24,870 - root - WARNING - Call '/bin/bash user_script.sh' returned 256.
2018-09-25 01:34:24,870 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,870 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,870 - root - CRITICAL - Error in /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py with args="{'json_fn': '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json',\n 'timeout': 30,\n 'tmpdir': None}"
Traceback (most recent call last):
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 246, in <module>
    main()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 238, in main
    run(**vars(parsed_args))
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 232, in run
    run_cfg_in_tmpdir(cfg, tmpdir)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 208, in run_cfg_in_tmpdir
    run_bash(bash_template, myinputs, myoutputs, parameters)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 178, in run_bash
    util.system(cmd)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/io.py", line 29, in syscall
    raise Exception(msg)
Exception: Call '/bin/bash user_script.sh' returned 256.
+++ pwd
++ echo 'FAILURE. Running top in /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001 (If you see -terminal database is inaccessible- you are using the python bin-wrapper, so you will not get diagnostic info. No big deal. This process is crashing anyway.)'
++ rm -f top.txt
++ which python
++ which top
++ env -u LD_LIBRARY_PATH top -b -n 1
++ env -u LD_LIBRARY_PATH top -b -n 1
++ pstree -apl
real 0m1.432s
user 0m0.327s
sys 0m0.164s
+ finish
+ echo 'finish code: 1'
Hi Amandine,
Glad the Unzip portion worked for you.
In the polishing, it looks like you may have encountered a real bug.
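Judging from the stderr you posted, the first hard failure is the samtools faidx call on that contig's reference. A quick manual check you could run (paths copied from your log; this is just a diagnostic sketch, not a fix):

cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
ls -l ../../quiver-split/refs/000153Fp01_00001/ref.fa        # does the reference exist, and is it non-empty?
head -c 100 ../../quiver-split/refs/000153Fp01_00001/ref.fa  # does it start with a valid FASTA header ('>')?
samtools faidx ../../quiver-split/refs/000153Fp01_00001/ref.fa   # re-run the exact failing command

If faidx fails here too, that ref.fa is likely empty or truncated.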
I'm going to close this issue here, as I believe the original issue was addressed.
To make sure this polishing error doesn't get overlooked, could you please submit it to the official PacBio Bioconda GitHub issue tracker here: https://github.com/PacificBiosciences/pbbioconda/issues
All PacBio Bioconda related issues, including pb-assembly, are being tracked there.
FYI: the pb-assembly repo has moved to the official PacificBiosciences GitHub account here: https://github.com/PacificBiosciences/pb-assembly
Thanks!
Thank you very much for your help. Best, Amandine
Hi,
I have a new question about the new version of Falcon, pb-assembly.
The first step, read correction, is much faster; I don't know whether the configuration I was given is what accelerates this step. On the other hand, the unzip stage has been running for 4 days and I feel like it's in an infinite loop; here are the last lines of the log. Does this look normal to you?
It's odd, because the log was last updated an hour ago with that last line, and I don't know how many more iterations it will need to reach a result.
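Here is what I checked to convince myself it is still doing something (my own ad-hoc commands on the run directory):

ls -lt 3-unzip/0-phasing/*/run-*.bash.stderr | head   # which phasing tasks were updated most recently
pstree -apl | grep fc_unzip                           # is the driver process still alive, with children?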
Best, Amandine