PacificBiosciences / FALCON

FALCON: experimental PacBio diploid assembler -- Out-of-date -- Please use a binary release: https://github.com/PacificBiosciences/FALCON_unzip/wiki/Binaries

[ERROR]Task Node(0-rawreads/build) failed with exit-code=1 #685

Closed NicMAlexandre closed 5 years ago

NicMAlexandre commented 5 years ago

Hello,

I am trying to run FALCON with the command: fc_run fc_run.cfg

The input data are raw PacBio reads.

The following is my config file:

input_fofn = fasta.fofn
input_type = raw
pa_DBdust_option=true
pa_fasta_filter_option=streamed-median

pa_DBsplit_option = -a -x500 -s200
ovlp_DBsplit_option = -x500 -s200

pa_HPCTANmask_option=
pa_REPmask_code=0,300;0,300;0,300

genome_size=1100000000
seed_coverage=30
length_cutoff=-1
pa_HPCdaligner_option=-v -B128 -M24
pa_daligner_option=-e0.8 -l2000 -k18 -h480  -w8 -s100
falcon_sense_option=--output-multi --min-idt 0.70 --min-cov 3 --max-n-read 400
falcon_sense_greedy=False

ovlp_daligner_option=-e.96 -s1000 -h60
ovlp_HPCdaligner_option=-v -M24 -l500

overlap_filtering_setting=--max-diff 100 --max-cov 100 --min-cov 2
fc_ovlp_to_graph_option=
length_cutoff_pr=1000

[job.defaults]
job_type=slurm
pwatcher_type=blocking
JOB_QUEUE = default
MB = 32768
NPROC = 6
njobs = 32
submit = srun --wait=0 -p myqueue -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}
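The `submit` line above is a template: pypeflow fills in the `${...}` placeholders per task before launching it, which is why the `Popen:` line in the log below shows a concrete `srun` command with `--mem-per-cpu=4000M --cpus-per-task=1` even though the defaults are `MB = 32768` and `NPROC = 6` (per-step resource settings, like the `Dist(NPROC=4, MB=4000, ...)` visible in the traceback, override the defaults). A minimal sketch of that substitution, with hypothetical per-task values standing in for what pypeflow would supply:

```python
from string import Template

# Hypothetical per-task values; the real ones come from pypeflow's scheduler.
job = {
    "JOB_NAME": "P2c7b8ae23d7f98",
    "JOB_STDOUT": "0-rawreads/build/run.bash.stdout",
    "JOB_STDERR": "0-rawreads/build/run.bash.stderr",
    "MB": "4000",      # per-step override, not the job.defaults MB
    "NPROC": "1",
    "JOB_SCRIPT": "pwatcher/mains/job_start.sh",
}

# The submit template exactly as written in the config above.
submit = ("srun --wait=0 -p myqueue -J ${JOB_NAME} -o ${JOB_STDOUT} "
          "-e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} "
          "${JOB_SCRIPT}")

# Expand the ${...} placeholders into a concrete command line.
cmd = Template(submit).substitute(job)
print(cmd)
```

This is only an illustration of the placeholder mechanics, not pypeflow's actual implementation; the point is that the `${JOB_NAME}`-style variables must match what the pwatcher provides, or submission fails before the task even runs.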

Here is the error:

falcon-kit 1.2.6
pypeflow 2.1.1
[INFO]Setup logging from file "None".
[INFO]$ lfs setstripe -c 12 /pylon5/mc5fqip/orca21 >
[WARNING]'lfs setstripe -c 12 /pylon5/mc5fqip/orca21' failed to produce any output.
[INFO]Lustre filesystem detected. This lfs stripe (12) should propagate to subdirs of '/pylon5/mc5fqip/orca21'.
[INFO]fc_run started with configuration fc_run.cfg
[INFO]cfg=
{
  "General": {
    "LA4Falcon_preload": false,
    "avoid_text_file_busy": true,
    "bestn": 12,
    "dazcon": false,
    "falcon_sense_greedy": false,
    "falcon_sense_option": "--output-multi --min-idt 0.70 --min-cov 3 --max-n-read 400",
    "falcon_sense_skip_contained": false,
    "fc_ovlp_to_graph_option": " --min-len 1000",
    "genome_size": "1100000000",
    "input_fofn": "fasta.fofn",
    "input_type": "raw",
    "length_cutoff": "-1",
    "length_cutoff_pr": "1000",
    "overlap_filtering_setting": "--max-diff 100 --max-cov 100 --min-cov 2",
    "ovlp_DBdust_option": "",
    "ovlp_DBsplit_option": "-x500 -s200",
    "ovlp_HPCdaligner_option": "-v -M24 -l500",
    "ovlp_daligner_option": "-e.96 -s1000 -h60",
    "pa_DBdust_option": "true",
    "pa_DBsplit_option": "-a -x500 -s200",
    "pa_HPCTANmask_option": "",
    "pa_HPCdaligner_option": "-v -B128 -M24",
    "pa_REPmask_code": "0,300;0,300;0,300",
    "pa_daligner_option": "-e0.8 -l2000 -k18 -h480  -w8 -s100",
    "pa_dazcon_option": "-j 4 -x -l 500",
    "pa_fasta_filter_option": "streamed-median",
    "pa_subsample_coverage": 0,
    "pa_subsample_random_seed": 12345,
    "pa_subsample_strategy": "random",
    "seed_coverage": "30",
    "skip_checks": false,
    "target": "assembly"
  },
  "job.defaults": {
    "JOB_QUEUE": "default",
    "MB": "32768",
    "NPROC": "6",
    "job_type": "slurm",
    "njobs": "32",
    "pwatcher_type": "blocking",
    "submit": "srun --wait=0 -p myqueue -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}",
    "use_tmpdir": false
  },
  "job.step.asm": {},
  "job.step.cns": {},
  "job.step.da": {},
  "job.step.dust": {},
  "job.step.la": {},
  "job.step.pda": {},
  "job.step.pla": {}
}
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.blocking' from '/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/pwatcher/blocking.pyc'>
[INFO]job_type='slurm', (default)job_defaults={'JOB_QUEUE': 'default', 'pwatcher_type': 'blocking', 'use_tmpdir': False, 'MB': '32768', 'job_type': 'slurm', 'submit': 'srun --wait=0 -p myqueue -J ${JOB_NAME} -o ${JOB_STDOUT} -e ${JOB_STDERR} --mem-per-cpu=${MB}M --cpus-per-task=${NPROC} ${JOB_SCRIPT}', 'NPROC': '6', 'njobs': '32'}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 32; was None
[INFO]Num unsatisfied: 2, graph: 2
[INFO]About to submit: Node(0-rawreads/build)
[INFO]Popen: 'srun --wait=0 -p myqueue -J P2c7b8ae23d7f98 -o /pylon5/mc5fqip/orca21/0-rawreads/build/run-P2c7b8ae23d7f98.bash.stdout -e /pylon5/mc5fqip/orca21/0-rawreads/build/run-P2c7b8ae23d7f98.bash.stderr --mem-per-cpu=4000M --cpus-per-task=1 /pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/pwatcher/mains/job_start.sh'
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.3s -- another 2 loop iterations)
[INFO](slept for another 1.2s -- another 3 loop iterations)
[INFO](slept for another 3.0s -- another 4 loop iterations)
srun: error: l006: tasks 0-4: Exited with exit code 1
[ERROR]Task Node(0-rawreads/build) failed with exit-code=1
[ERROR]Some tasks are recently_done but not satisfied: set([Node(0-rawreads/build)])
[ERROR]ready: set([])
    submitted: set([])
[ERROR]Noop. We cannot kill blocked threads. Hopefully, everything will die on SIGTERM.
Traceback (most recent call last):
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/bin/fc_run", line 11, in <module>
    load_entry_point('falcon-kit==1.2.6', 'console_scripts', 'fc_run')()
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 726, in main
    main1(argv[0], args.config, args.logger)
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 76, in main1
    input_fofn_fn=input_fofn_fn,
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 242, in run
    dist=Dist(NPROC=4, MB=4000, job_dict=config['job.step.da']),
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/falcon_kit/pype.py", line 106, in gen_parallel_tasks
    wf.refreshTargets()
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/pylon5/mc5fqip/orca21/anaconda3/envs/myenv/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: set([Node(0-rawreads/build)])
pb-cdunn commented 5 years ago

You have to look into 0-rawreads/build. But please also post the output of conda list. Someone else reported an odd bug, and I suspect that people are getting an inconsistent version of DAZZ_DB, which we provide via pb-dazzler.
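A sketch of that triage, assuming the run directory from the log above; the per-task `run-*.bash.stderr`/`run-*.bash.stdout` files are the ones named in the `Popen:` line, and `fasta2DB`/`DBsplit` are the usual DAZZ_DB tools the build step invokes (adjust `RUN_DIR` to your own job's working directory):

```shell
# Hypothetical run directory; substitute your own.
RUN_DIR=${RUN_DIR:-/pylon5/mc5fqip/orca21}

# The per-task wrapper writes its stdout/stderr next to the job script;
# the real error message is usually in the .stderr file.
ls "$RUN_DIR"/0-rawreads/build/run-*.bash.stderr 2>/dev/null && \
  cat "$RUN_DIR"/0-rawreads/build/run-*.bash.stderr

# Check that the DAZZ_DB binaries the build step calls are on PATH.
command -v fasta2DB DBsplit || echo "DAZZ_DB tools not on PATH"

# Capture the environment for the bug report.
conda list > conda-list.txt 2>/dev/null || echo "conda not available"
```

Attaching both the task's `.stderr` file and `conda-list.txt` to the report makes version mismatches (e.g. an inconsistent DAZZ_DB) easy to spot.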

pb-cdunn commented 5 years ago

Oh, in the future, please post at https://github.com/PacificBiosciences/pbbioconda/issues

NicMAlexandre commented 5 years ago

Thank you, Chris!


-- Best,

Nicolas Alexandre, PhD Candidate, Integrative Biology, Whiteman Lab, University of California - Berkeley, nalexandre@berkeley.edu