PacificBiosciences / FALCON

FALCON: experimental PacBio diploid assembler -- Out-of-date -- Please use a binary release: https://github.com/PacificBiosciences/FALCON_unzip/wiki/Binaries

Hi, I am hitting this error: Task Node(0-rawreads/build) failed with exit-code=1 #714

Closed lsj11211 closed 3 years ago

lsj11211 commented 3 years ago

I looked at earlier issues like mine; however, my `0-rawreads/` directory contains only a single entry called `build`.

My error is as follows:

```
falcon-kit 1.8.1 (pip thinks "falcon-kit 1.8.1")
pypeflow 2.3.0
[INFO]Setup logging from file "None".
[INFO]$ lfs setstripe -c 12 /cluster/huanglab/sluo/sluoo/三代测序/组装与比对/FALCON/contig >
[INFO]Apparently '/cluster/huanglab/sluo/sluoo/三代测序/组装与比对/FALCON/contig' is not in lustre filesystem, which is fine.
[INFO]fc_run started with configuration fc_run.cfg
[INFO]cfg= {
  "General": { "LA4Falcon_preload": false, "avoid_text_file_busy": true, "bestn": 12, "dazcon": false, "falcon_sense_greedy": false, "falcon_sense_option": "--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 800", "falcon_sense_skip_contained": false, "fc_ovlp_to_graph_option": " --min-len 2000", "genome_size": "0", "input_fofn": "input.fofn", "input_type": "raw", "length_cutoff": "3000", "length_cutoff_pr": "2000", "overlap_filtering_setting": "--max-diff 100 --max-cov 300 --min-cov 2", "ovlp_DBdust_option": "", "ovlp_DBsplit_option": "-x500 -s50", "ovlp_HPCdaligner_option": "-v -B128 -M24", "ovlp_daligner_option": "-e.96 -l2000 -k24 -h1024 -w6 -s100", "pa_DBdust_option": "", "pa_DBsplit_option": "-x500 -s50", "pa_HPCTANmask_option": "", "pa_HPCdaligner_option": "-v -B128 -M24", "pa_REPmask_code": "1,100;2,80;3,60", "pa_daligner_option": "-e.7 -l1000 -k18 -h80 -w8 -s100", "pa_dazcon_option": "-j 4 -x -l 500", "pa_fasta_filter_option": "pass", "pa_subsample_coverage": 0, "pa_subsample_random_seed": 12345, "pa_subsample_strategy": "random", "seed_coverage": "20", "skip_checks": false, "target": "assembly" },
  "job.defaults": { "JOB_QUEUE": "default", "MB": "32768", "NPROC": "6", "job_type": "local", "njobs": "32", "pwatcher_type": "blocking", "submit": "/bin/bash -c \"${JOB_SCRIPT}\" > \"${JOB_STDOUT}\" 2> \"${JOB_STDERR}\"", "use_tmpdir": false },
  "job.step.asm": { "MB": "196608", "NPROC": "24", "njobs": "1" },
  "job.step.cns": { "MB": "65536", "NPROC": "8", "njobs": "5" },
  "job.step.da": { "MB": "32768", "NPROC": "4", "njobs": "32" },
  "job.step.dust": {},
  "job.step.la": { "MB": "32768", "NPROC": "4", "njobs": "32" },
  "job.step.pda": {},
  "job.step.pla": { "MB": "32768", "NPROC": "4", "njobs": "4" }
}
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.blocking' from '/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/pwatcher/blocking.py'>
[INFO]job_type='local', (default)job_defaults={'job_type': 'local', 'pwatcher_type': 'blocking', 'JOB_QUEUE': 'default', 'MB': '32768', 'NPROC': '6', 'njobs': '32', 'submit': '/bin/bash -c "${JOB_SCRIPT}" > "${JOB_STDOUT}" 2> "${JOB_STDERR}"', 'use_tmpdir': False}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 32; was None
[INFO]Num unsatisfied: 2, graph: 2
[INFO]About to submit: Node(0-rawreads/build)
[INFO]Popen: '/bin/bash -c "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/pwatcher/mains/job_start.sh" > "/cluster/huanglab/sluo/sluoo/三代测序/组装与比对/FALCON/contig/0-rawreads/build/run-P2f4592c0c65764.bash.stdout" 2> "/cluster/huanglab/sluo/sluoo/三代测序/组装与比对/FALCON/contig/0-rawreads/build/run-P2f4592c0c65764.bash.stderr"'
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.30000000000000004s -- another 2 loop iterations)
[ERROR]Task Node(0-rawreads/build) failed with exit-code=1
[ERROR]Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}
[ERROR]ready: set() submitted: set()
[ERROR]Noop. We cannot kill blocked threads. Hopefully, everything will die on SIGTERM.
Traceback (most recent call last):
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/bin/fc_run.py", line 11, in <module>
    load_entry_point('falcon-kit==1.8.1', 'console_scripts', 'fc_run.py')()
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 706, in main
    main1(argv[0], args.config, args.logger)
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 73, in main1
    input_fofn_fn=input_fofn_fn,
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 235, in run
    dist=Dist(NPROC=4, MB=4000, job_dict=config['job.step.da']),
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/falcon_kit/pype.py", line 106, in gen_parallel_tasks
    wf.refreshTargets()
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/cluster/huanglab/sluo/sluoo/conda/env/pb-assembly/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}
```

My config is as follows:

```
#### Input
[General]
input_fofn=input.fofn
input_type=raw
pa_DBdust_option=
pa_fasta_filter_option=pass
target=assembly
skip_checks=False
LA4Falcon_preload=false

#### Data Partitioning
pa_DBsplit_option=-x500 -s50
ovlp_DBsplit_option=-x500 -s50

#### Repeat Masking
pa_HPCTANmask_option=
pa_REPmask_code=1,100;2,80;3,60

#### Pre-assembly
genome_size=0
seed_coverage=20
length_cutoff=3000
pa_HPCdaligner_option=-v -B128 -M24
pa_daligner_option=-e.7 -l1000 -k18 -h80 -w8 -s100
falcon_sense_option=--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 800
falcon_sense_greedy=False

#### Pread overlapping
ovlp_daligner_option=-e.96 -l2000 -k24 -h1024 -w6 -s100
ovlp_HPCdaligner_option=-v -B128 -M24

#### Final Assembly
overlap_filtering_setting=--max-diff 100 --max-cov 300 --min-cov 2
fc_ovlp_to_graph_option=
length_cutoff_pr=2000

[job.defaults]
job_type=local
pwatcher_type=blocking
JOB_QUEUE=default
MB=32768
NPROC=6
njobs=32
submit = /bin/bash -c "${JOB_SCRIPT}" > "${JOB_STDOUT}" 2> "${JOB_STDERR}"

[job.step.da]
NPROC=4
MB=32768
njobs=32
[job.step.la]
NPROC=4
MB=32768
njobs=32
[job.step.cns]
NPROC=8
MB=65536
njobs=5
[job.step.pla]
NPROC=4
MB=32768
njobs=4
[job.step.asm]
NPROC=24
MB=196608
njobs=1
```

Also, if I have three FASTA files, should the path file (input.fofn) list all three paths in the one file?

I hope you can help me. Thank you in advance.

pb-cdunn commented 3 years ago

You can look in the 0-rawreads/build/ directory for stdout/stderr. And you can re-run the command in your own shell inside that directory. That should help you debug.
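A minimal sketch of that debugging loop, using a mock task script in place of the real `run-P....bash` that pypeflow generates (the `run-Pdeadbeef.bash` name and its contents here are stand-ins, not real FALCON output):

```shell
# Mock layout standing in for contig/0-rawreads/build/ from the log above.
mkdir -p 0-rawreads/build && cd 0-rawreads/build

# Stand-in for the generated task script; the real one builds the raw-read DB.
printf 'echo "building the raw-read DB failed" >&2\nexit 1\n' > run-Pdeadbeef.bash

# pwatcher captures each task's output into .stdout/.stderr files like these:
bash run-Pdeadbeef.bash > run-Pdeadbeef.bash.stdout 2> run-Pdeadbeef.bash.stderr || true

# The real error message usually sits at the end of stderr:
tail -n 40 run-Pdeadbeef.bash.stderr

# Re-run the task script by hand to reproduce the failure interactively,
# with bash tracing each command (-v -e -x):
bash -vex run-Pdeadbeef.bash || true
```

The `|| true` guards are only there so the sketch itself keeps running past the deliberately failing script; when debugging for real you want to see the nonzero exit.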

The FOFN (File Of File Names) is just filename/newline/filename/newline/etc.

```
foo.fasta
bar.fasta
...
```

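One common way to generate such a file is simply to list the FASTA files (the file names below are placeholders, not real data):

```shell
# Placeholder FASTA files standing in for real subread data.
touch foo.fasta bar.fasta subreads3.fasta

# One file name per line, nothing else -- that is the whole FOFN format.
ls *.fasta > input.fofn
cat input.fofn
```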
lsj11211 commented 3 years ago

Thanks for your reply. Does the FOFN contain just the file names, rather than path + name?

pb-cdunn commented 3 years ago

The paths in the FOFN can be relative to the location of the FOFN itself.
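To illustrate that statement: a bare file name in the FOFN resolves against the directory holding the FOFN, not the directory you launch `fc_run` from (the `mydata/` directory and file names here are hypothetical):

```shell
# Hypothetical layout: FOFN and FASTA side by side in mydata/.
mkdir -p mydata
touch mydata/subreads.fasta
printf 'subreads.fasta\n' > mydata/input.fofn   # relative entry in the FOFN

# Resolve the entry the way a relative FOFN path is interpreted:
# against the FOFN's own directory.
( cd mydata && realpath subreads.fasta )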