gconcepcion / pb-assembly

PacBio Assembly Tools Suite

FALCON very slow. #3

Closed: a-velt closed this issue 6 years ago

a-velt commented 6 years ago

Hi,

I have a new question about the new version of Falcon, pb-assembly.

The first step, read correction, was much faster this time; I don't know whether the configuration I reused is what sped it up. On the other hand, the unzip stage has now been running for 4 days and I suspect it is caught in an infinite loop. Here are the last lines of the log. Does this look normal to you?

It is strange, because the log was still being updated an hour ago, with the last line shown below, and I have no idea how many more iterations it needs before producing a result.

Best, Amandine

set([Node(3-unzip/0-phasing/phasing-chunks/002449F)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 5151
[INFO]About to submit: Node(3-unzip/0-phasing/000309F)
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000309F/run-P5ed16ea251015d.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000309F/run-P5ed16ea251015d.bash.stderr'
[INFO](slept for another 55.3s -- another 90 loop iterations)
[INFO](slept for another 527.3s -- another 91 loop iterations)
[INFO](slept for another 920.0s -- another 92 loop iterations)
[INFO](slept for another 930.0s -- another 93 loop iterations)
[... similar lines omitted; the sleep time grows by 10 s with each loop iteration ...]
[INFO](slept for another 2710.0s -- another 271 loop iterations)
[INFO](slept for another 2720.0s -- another 272 loop iterations)
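One quick way to tell whether a run like this is progressing or genuinely stuck is to watch the scheduler's own counters in the log. A minimal shell sketch, assuming the output above is captured in a file (all.log here is a placeholder for your actual log path):

grep -c "Num satisfied in this iteration" all.log   # tasks completed so far
grep "Num still unsatisfied" all.log | tail -n 3    # should decrease between checks

If the "Num still unsatisfied" count keeps dropping, the workflow is slowly working through its ~5000 phasing tasks rather than looping forever.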
gconcepcion commented 6 years ago

Are you running this locally?

What does top show? Is there a process running and/or consuming memory?
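For concreteness, something like the following would capture that (plain Linux tools, nothing FALCON-specific; run on the compute node itself):

top -b -n 1 | head -n 40                                    # one batch snapshot: header plus top processes
ps -eo pid,pcpu,pmem,etime,comm --sort=-pcpu | head -n 20   # per-process CPU/MEM, busiest first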

a-velt commented 6 years ago

Yes, I am running this locally, but I reserve an entire compute node for the analysis. [screenshot]

In the configuration file I was careful not to request more than 32 CPUs, but apparently 36 are in use, and the node only has 32. Could that be the problem?
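For what it's worth, under pypeflow each job step can run up to njobs concurrent jobs of NPROC processes each, so the effective ceiling is roughly njobs * NPROC per active step. A back-of-the-envelope check (plain shell; the phasing numbers are taken from the config dump in the log below):

nproc              # cores actually available on this node
echo $((10 * 2))   # phasing ceiling: njobs * NPROC = 20 processes

Going past the physical core count oversubscribes the node, which slows everything down but should not, by itself, hang the run.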

a-velt commented 6 years ago

I reduced the number of CPUs to use and re-ran FALCON. See the log:

############### RUN OF UNZIP #############
fc_unzip.py fc_unzip.cfg
falcon-unzip 1.1.2
falcon-kit 1.2.2
pypeflow 2.0.4
[INFO]Setup logging from file "None".
[INFO]Using config=
{'General': {'max_n_open_files': '900000'},
 'Unzip': {'input_bam_fofn': 'input_bam.fofn', 'input_fofn': 'input.fofn'},
 'job.defaults': {'NPROC': '3',
                  'job_type': 'string',
                  'njobs': '6',
                  'pwatcher_type': 'blocking',
                  'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}',
                  'use_tmpdir': False},
 'job.step.unzip.blasr_aln': {'NPROC': '10', 'njobs': '2'},
 'job.step.unzip.hasm': {'NPROC': '20', 'njobs': '1'},
 'job.step.unzip.phasing': {'NPROC': '2', 'njobs': '10'},
 'job.step.unzip.quiver': {'NPROC': '10', 'njobs': '2'},
 'job.step.unzip.track_reads': {'NPROC': '20', 'njobs': '1'},
 'max_n_open_files': '900000'}
[INFO]PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin
[INFO]$('which which')
/usr/bin/which
[INFO]$('which blasr')
/cm/shared/apps/FALCON_12_09_2018/bin/blasr
[INFO]$('which samtools')
/cm/shared/apps/FALCON_12_09_2018/bin/samtools
[INFO]$('which pbalign')
/cm/shared/apps/FALCON_12_09_2018/bin/pbalign
[INFO]$('which variantCaller')
/cm/shared/apps/FALCON_12_09_2018/bin/variantCaller
[INFO]$('which minimap2')
/cm/shared/apps/FALCON_12_09_2018/bin/minimap2
[INFO]$('which nucmer')
/cm/shared/apps/FALCON_12_09_2018/bin/nucmer
[INFO]$('which show-coords')
/cm/shared/apps/FALCON_12_09_2018/bin/show-coords
[INFO]$('which fc_rr_hctg_track2.exe')
/cm/shared/apps/FALCON_12_09_2018/bin/fc_rr_hctg_track2.exe
[INFO]$('nucmer --version')
nucmer
NUCmer (NUCleotide MUMmer) version 3.1

[INFO]$('minimap2 --version')
2.12-r827
[INFO]$ show-coords -h >
[INFO]$ samtools >
[INFO]samtools ['1', '9'] is >= 1.3
[WARNING]CD: '0-rawreads' <- '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '0-rawreads' -> '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '1-preads_ovl' <- '/data2/avelt/Assembly_amurensis'
[WARNING]CD: '1-preads_ovl' -> '/data2/avelt/Assembly_amurensis'
[INFO]Falcon directories up-to-date.
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.blocking' from '/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/blocking.pyc'>
[INFO]job_type='string', (default)job_defaults={'pwatcher_type': 'blocking', 'use_tmpdir': False, 'job_type': 'string', 'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}', 'NPROC': '3', 'njobs': '6'}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 6; was None
[INFO]config=
 {'Unzip': {'input_bam_fofn': 'input_bam.fofn', 'input_fofn': 'input.fofn'}, 'job.step.unzip.blasr_aln': {'njobs': '2', 'NPROC': '10'}, 'max_n_open_files': '900000', 'job.step.unzip.hasm': {'njobs': '1', 'NPROC': '20'}, 'General': {'max_n_open_files': '900000'}, 'job.step.unzip.track_reads': {'njobs': '1', 'NPROC': '20'}, 'job.step.unzip.phasing': {'njobs': '10', 'NPROC': '2'}, 'job.step.unzip.quiver': {'njobs': '2', 'NPROC': '10'}, 'job.defaults': {'pwatcher_type': 'blocking', 'use_tmpdir': False, 'job_type': 'string', 'submit': 'bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}', 'NPROC': '3', 'njobs': '6'}}
[INFO]Num unsatisfied: 5, graph: 5
[INFO]About to submit: Node(3-unzip/reads/dump_rawread_ids)
[INFO]About to submit: Node(3-unzip/reads/dump_pread_ids)
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_rawread_ids/run-P353f203c57debb.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_rawread_ids/run-P353f203c57debb.bash.stderr'
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_pread_ids/run-Pf37e6d7a5ecef7.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/dump_pread_ids/run-Pf37e6d7a5ecef7.bash.stderr'
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.3s -- another 2 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/dump_pread_ids)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 4
[INFO](slept for another 0.4s -- another 3 loop iterations)
[INFO](slept for another 1.4s -- another 4 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/dump_rawread_ids)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 3
[INFO]About to submit: Node(3-unzip/reads/get_read_ctg_map)
[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/get_read_ctg_map/run-Pcec393c558f9bf.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/get_read_ctg_map/run-Pcec393c558f9bf.bash.stderr'
[INFO](slept for another 2.2s -- another 5 loop iterations)
[INFO](slept for another 2.7s -- another 6 loop iterations)
[INFO]recently_satisfied:
set([Node(3-unzip/reads/get_read_ctg_map)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 2
[INFO]About to submit: Node(3-unzip/reads)
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/reads/run-Pa39f70dc249148.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/reads/run-Pa39f70dc249148.bash.stderr'
[INFO](slept for another 3.3s -- another 7 loop iterations)
[INFO](slept for another 6.0s -- another 8 loop iterations)
[INFO](slept for another 14.4s -- another 9 loop iterations)
[INFO](slept for another 25.5s -- another 10 loop iterations)
[INFO](slept for another 39.6s -- another 11 loop iterations)
[INFO](slept for another 57.0s -- another 12 loop iterations)
[INFO](slept for another 78.0s -- another 13 loop iterations)
[INFO](slept for another 102.9s -- another 14 loop iterations)
[INFO](slept for another 132.0s -- another 15 loop iterations)

I set the maximum number of CPUs to use at 20, but when I look at htop: [screenshot]

It seems to me that more than 20 CPUs are being used, which is what produces the loop iterations we see in the log.

a-velt commented 6 years ago

I reduced the number of CPUs, but falcon_unzip still does not complete. This is a real problem for me, because I have received new PacBio data; the FALCON version I had does not work on it (or no longer works), and I have to deliver results. Do you have any idea what the problem is? I have to run FALCON locally because we have Slurm, not SGE. And I run FALCON on a node fully reserved for this analysis, with 32 CPUs and 386 GB of RAM. I don't understand the problem. Thanks a lot for your help.

Here is my configuration file for falcon-unzip:

[General]
max_n_open_files = 900000

[Unzip]
input_fofn = input.fofn
input_bam_fofn = input_bam.fofn

[job.defaults]
NPROC = 2
njobs = 5
job_type = local
pwatcher_type = blocking
job_type = string
submit = bash -C ${CMD} >| ${STDOUT_FILE} 2>| ${STDERR_FILE}
njobs = 6
NPROC = 2

[job.step.unzip.track_reads]
njobs = 1
NPROC = 15

[job.step.unzip.blasr_aln]
njobs = 2
NPROC = 5

[job.step.unzip.phasing]
njobs = 5
NPROC = 2

[job.step.unzip.hasm]
njobs = 1
NPROC = 15

[job.step.unzip.quiver]
njobs = 2
NPROC = 5
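Two things stand out when this config is read the way the parser reads it: the [job.defaults] section defines job_type, njobs, and NPROC twice (with Python's ConfigParser the later value normally wins), and each step is capped at njobs * NPROC processors. A sketch of that arithmetic, using the values above and assuming steps of different types do not run at the same time:

echo "track_reads: $((1 * 15))"   # 15 CPUs
echo "blasr_aln:   $((2 * 5))"    # 10 CPUs
echo "phasing:     $((5 * 2))"    # 10 CPUs
echo "hasm:        $((1 * 15))"   # 15 CPUs
echo "quiver:      $((2 * 5))"    # 10 CPUs

On those assumptions every stage stays at or below 15 CPUs (12 for steps that fall back to [job.defaults]), well under the node's 32 cores.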

And still the same "loop iterations" problem:

[INFO]Popen: '/bin/bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/phasing-chunks/001571F/run-P4709c2318ac341.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/phasing-chunks/001571F/run-P4709c2318ac341.bash.stderr'
[INFO]recently_satisfied:
set([Node(3-unzip/0-phasing/phasing-chunks/001571F)])
[INFO]Num satisfied in this iteration: 1
[INFO]Num still unsatisfied: 5246
[INFO]About to submit: Node(3-unzip/0-phasing/000632F)
[INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000632F/run-P47db72a92e596c.bash.stdout 2>| /data2/avelt/Assembly_amurensis/3-unzip/0-phasing/000632F/run-P47db72a92e596c.bash.stderr'
[INFO](slept for another 191.1s -- another 71 loop iterations)
[INFO](slept for another 646.0s -- another 72 loop iterations)
[INFO](slept for another 730.0s -- another 73 loop iterations)
[... similar lines omitted; the sleep time grows by 10 s with each loop iteration ...]
[INFO](slept for another 1500.0s -- another 150 loop iterations)
[INFO](slept for another 1510.0s -- another 151 loop iterations)

gconcepcion commented 6 years ago

I set the maximum number of CPUs to use at 20, but when I look at htop: [screenshot]

It seems to me that more than 20 CPUs are being used, which is what produces the loop iterations we see in the log.

I don't see 20 CPUs being used in that htop output. What I'm really interested in is which processes are running and how much MEM & CPU they are consuming, from top output, not just the header from htop.

I reduced the number of CPUs, but falcon_unzip still does not complete. This is a real problem for me, because I have received new PacBio data; the FALCON version I had does not work on it (or no longer works), and I have to deliver results. Do you have any idea what the problem is? I have to run FALCON locally because we have Slurm, not SGE. And I run FALCON on a node fully reserved for this analysis, with 32 CPUs and 386 GB of RAM. I don't understand the problem. Thanks a lot for your help.

If the process is actually alive but just taking a while, reducing the CPU count won't make it go any faster. Why can't you run FALCON with the Slurm scheduler? It should work fine. See the documentation for pb-assembly here and pypeFLOW here.
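For reference, running under Slurm should only require changing the submit template in [job.defaults]; a minimal sketch along the lines of the pb-assembly documentation (the queue name "myqueue" is a placeholder, and the ${...} names are pypeFLOW's substitution variables, so verify them against the docs linked above):

[job.defaults]
pwatcher_type = blocking
job_type = string
submit = srun --wait=0 -p myqueue -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}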

a-velt commented 6 years ago

I switched to a different node to run the analysis and the unzip step worked. I think there is a problem with the node where I launched the analysis the first time. But no matter, it works :)

So, "3-unzip" works well, with the files "all_p_ctg.fa" and "all_h_ctg.fa" generated, but quiver doesn't work. I have the impression that it is having an error at the end. Have you ever met this?

Thank you very much in advance.

Here is the 4-quiver folder. The last step that worked seems to be quiver-run:

[screenshot of the 4-quiver folder]

There are 2639 folders in quiver-run, so some of the steps worked, but not all.

Here is the log with the error:

 [INFO]Popen: 'bash -C /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pwatcher/mains/job_start.sh >| /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stdout 2>| /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stderr'
 [ERROR]Task Node(4-quiver/quiver-run/000153Fp01_00001) failed with exit-code=1
 [ERROR]Some tasks are recently_done but not satisfied: set([Node(4-quiver/quiver-run/000153Fp01_00001)])
 [ERROR]ready: set([Node(4-quiver/quiver-run/001238Fp01_00002), Node(4-quiver/quiver-run/000889Fp01), Node(4-quiver/quiver-run/000200Fp01), Node(4-quiver/quiver-run/002254Fp02), Node(4-quiver/quiver-run/001827Fp01), Node(4-quiver/quiver-run/000342Fp01), Node(4-quiver/quiver-run/000474Fp01), Node(4-quiver/quiver-run/000078Fp01_00001), Node(4-quiver/quiver-run/001592Fp01_00001), Node(4-quiver/quiver-run/001250Fp01), Node(4-quiver/quiver-run/000682Fp01), Node(4-quiver/quiver-run/001979Fp01), Node(4-quiver/quiver-run/001033Fp01_00001), Node(4-quiver/quiver-run/000477Fp01), Node(4-quiver/quiver-run/000495Fp01), Node(4-quiver/quiver-run/002699Fp01), Node(4-quiver/quiver-run/000450Fp01_00001), Node(4-quiver/quiver-run/001441Fp01), Node(4-quiver/quiver-run/000209Fp01), Node(4-quiver/quiver-run/000584Fp01_00001), ..., Node(4-quiver/quiver-run/000645Fp01), Node(4-quiver/quiver-run/000637Fp01), Node(4-quiver/quiver-run/000408Fp01_00003), Node(4-quiver/quiver-run/000176Fp01_00001), Node(4-quiver/quiver-run/002065Fp01), Node(4-quiver/quiver-run/000405Fp01_00002), Node(4-quiver/quiver-run/000675Fp01_00001), Node(4-quiver/quiver-run/000735Fp01_00001), Node(4-quiver/quiver-run/002196Fp02), Node(4-quiver/quiver-run/000743Fp01), Node(4-quiver/quiver-run/000559Fp01_00001), Node(4-quiver/quiver-run/000317Fp01_00002)])
         submitted: set([Node(4-quiver/quiver-run/000484Fp01_00001)])
 [ERROR]Noop. We cannot kill blocked threads. Hopefully, everything will die on SIGTERM.
 Traceback (most recent call last):
   File "/cm/shared/apps/FALCON_12_09_2018/bin//fc_quiver.py", line 11, in <module>
     load_entry_point('falcon-unzip==1.1.2', 'console_scripts', 'fc_quiver.py')()
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/mains/start_unzip.py", line 29, in main
     unzip.run(**vars(args))
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/unzip.py", line 126, in run
     unzip_all(config)
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/unzip.py", line 28, in unzip_all
     tasks_unzip.run_workflow(wf, config, rule_writer)
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_unzip/tasks/unzip.py", line 708, in run_workflow
     job_dict=config['job.step.unzip.quiver'],
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/pype.py", line 192, in gen_parallel_tasks
     wf.refreshTargets()
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 277, in refreshTargets
     self._refreshTargets(updateFreq, exitOnFailure)
   File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 361, in _refreshTargets
     raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: set([Node(4-quiver/quiver-run/000153Fp01_00001)])
make: *** [quiver] Error 1

Here the "4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash.stderr" file :

executable=${PYPEFLOW_JOB_START_SCRIPT}
+ executable=/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash
timeout=${PYPEFLOW_JOB_START_TIMEOUT:-60} # wait 60s by default
+ timeout=60

# Wait up to timeout seconds for the executable to become "executable",
# then exec.
#timeleft = int(timeout)
while [[ ! -x "${executable}" ]]; do
    if [[ "${timeout}" == "0" ]]; then
        echo "timed out waiting for (${executable})"
        exit 77
    fi
    echo "not executable: '${executable}', waiting ${timeout}s"
    sleep 1
    timeout=$((timeout-1))
done
+ [[ ! -x /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash ]]

/bin/bash ${executable}
+ /bin/bash /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/run-P11ad2b39cd9e4b.bash
+ '[' '!' -d /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001 ']'
+ cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
+ eval '/bin/bash run.sh'
++ /bin/bash run.sh
export PATH=$PATH:/bin
+ export PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin:/bin
+ PATH=/cm/shared/apps/FALCON_12_09_2018/bin/:/cm/shared/apps/Perl_conda/bin:/cm/shared/apps/slurm/14.03.0/sbin:/cm/shared/apps/slurm/14.03.0/bin:/cm/local/apps/cluster-tools/bin:/cm/local/apps/cmd/sbin:/cm/local/apps/cmd/bin:/cm/shared/apps/cmgui:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin:/bin
cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
+ cd /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001
/bin/bash task.sh
+ /bin/bash task.sh
pypeflow 2.0.4
2018-09-25 01:34:24,412 - root - DEBUG - Running "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json"
2018-09-25 01:34:24,414 - root - DEBUG - Checking existence of '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json' with timeout=30
2018-09-25 01:34:24,414 - root - DEBUG - Loading JSON from '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json'
2018-09-25 01:34:24,415 - root - DEBUG - {u'bash_template_fn': u'template.sh',
 u'inputs': {u'bash_template': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh',
             u'units_of_work': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json'},
 u'outputs': {u'results': u'results.json'},
 u'parameters': {u'pypeflow_mb': 4000, u'pypeflow_nproc': u'5'}}
2018-09-25 01:34:24,415 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,415 - root - DEBUG - Checking existence of u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json' with timeout=30
2018-09-25 01:34:24,416 - root - DEBUG - Checking existence of u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh' with timeout=30
2018-09-25 01:34:24,416 - root - DEBUG - Checking existence of u'template.sh' with timeout=30
2018-09-25 01:34:24,416 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,417 - root - INFO - $('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
# Substitution will be similar to snakemake "shell".
python -m falcon_kit.mains.generic_run_units_of_work --nproc=5 --units-of-work-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json --bash-template-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh --results-fn=results.json
+ python -m falcon_kit.mains.generic_run_units_of_work --nproc=5 --units-of-work-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-chunks/000153Fp01_00001/some-units-of-work.json --bash-template-fn=/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/bash-template.sh --results-fn=results.json
falcon-kit 1.2.2
pypeflow 2.0.4
INFO:root:INPUT:{u'ref_fasta': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa', u'read_bam': u'/data2/avelt/Assembly_amurensis/4-quiver/segregate-run/segr1905/segregated/000153Fp01_00001/000153Fp01_00001.bam', u'ctg_type': u'/data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ctg_type'}
INFO:root:OUTPUT:{u'cns_fasta': u'cns.fasta.gz', u'cns_vcf': u'cns.vcf', u'job_done': u'quiver_done', u'ctg_type_again': u'ctg_type', u'cns_fastq': u'cns.fastq.gz'}
INFO:root:PARAMS:{'pypeflow_nproc': '5', u'ctg_id': u'000153Fp01_00001'}
INFO:root:$('rm -rf uow-00')
WARNING:root:CD: 'uow-00' <- '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
INFO:root:$('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
set -vex
+ set -vex
trap 'touch quiver_done.exit' EXIT
+ trap 'touch quiver_done.exit' EXIT
hostname
+ hostname
date
+ date

samtools faidx /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa
+ samtools faidx /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa
[faidx] Could not build fai index /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/./refs/000153Fp01_00001/ref.fa.fai
touch quiver_done.exit
+ touch quiver_done.exit
WARNING:root:Call '/bin/bash user_script.sh' returned 256.
WARNING:root:CD: 'uow-00' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
Traceback (most recent call last):
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 115, in <module>
    main()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 111, in main
    run(**vars(args))
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/falcon_kit/mains/generic_run_units_of_work.py", line 64, in run
    pypeflow.do_task.run_bash(script, inputs, outputs, params)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 178, in run_bash
    util.system(cmd)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/io.py", line 29, in syscall
    raise Exception(msg)
Exception: Call '/bin/bash user_script.sh' returned 256.
2018-09-25 01:34:24,870 - root - WARNING - Call '/bin/bash user_script.sh' returned 256.
2018-09-25 01:34:24,870 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,870 - root - WARNING - CD: '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001' -> '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001'
2018-09-25 01:34:24,870 - root - CRITICAL - Error in /cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py with args="{'json_fn': '/data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001/task.json',\n 'timeout': 30,\n 'tmpdir': None}"
Traceback (most recent call last):
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 246, in <module>
    main()
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 238, in main
    run(**vars(parsed_args))
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 232, in run
    run_cfg_in_tmpdir(cfg, tmpdir)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 208, in run_cfg_in_tmpdir
    run_bash(bash_template, myinputs, myoutputs, parameters)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/do_task.py", line 178, in run_bash
    util.system(cmd)
  File "/cm/shared/apps/FALCON_12_09_2018/lib/python2.7/site-packages/pypeflow/io.py", line 29, in syscall
    raise Exception(msg)
Exception: Call '/bin/bash user_script.sh' returned 256.
+++ pwd
++ echo 'FAILURE. Running top in /data2/avelt/Assembly_amurensis/4-quiver/quiver-run/000153Fp01_00001 (If you see -terminal database is inaccessible- you are using the python bin-wrapper, so you will not get diagnostic info. No big deal. This process is crashing anyway.)'
++ rm -f top.txt
++ which python
++ which top
++ env -u LD_LIBRARY_PATH top -b -n 1
++ env -u LD_LIBRARY_PATH top -b -n 1
++ pstree -apl

real    0m1.432s
user    0m0.327s
sys     0m0.164s
+ finish
+ echo 'finish code: 1'
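The first hard failure in that stderr is the samtools faidx call ("Could not build fai index"), so a reasonable next step is to re-run just that command by hand, with the paths taken from the log above, and inspect the reference it was given:

ls -l /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/refs/000153Fp01_00001/
samtools faidx /data2/avelt/Assembly_amurensis/4-quiver/quiver-split/refs/000153Fp01_00001/ref.fa

If ref.fa is missing, empty, or truncated, faidx fails exactly like this, and the quiver task then exits non-zero.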
gconcepcion commented 6 years ago

Hi Amandine,

Glad the Unzip portion worked for you.

In the polishing, it looks like you may have encountered a real bug.

I'm going to close this issue here as I believe the original issue was addressed.

To make sure your polishing error doesn't get overlooked, could you please submit it to the official PacBio Bioconda GitHub issue tracker here: https://github.com/PacificBiosciences/pbbioconda/issues

All PacBio Bioconda related issues, including pb-assembly, are being tracked there.

FYI: this pb-assembly repo has moved to the official PacificBiosciences GitHub account here: https://github.com/PacificBiosciences/pb-assembly. Please use that location in the future.

Thanks!

a-velt commented 6 years ago

Thank you very much for your help. Best, Amandine