Yes, you need less memory per job. You can achieve that by reducing the block size: try reducing -s for DBsplit. You forgot to post your current settings, but maybe try:
pa_DBsplit_option = -x500 -s100 -a
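For reference, pa_DBsplit_option is passed through to DAZZ_DB's DBsplit, whose -s block size is in megabases, so smaller -s means smaller, lower-memory jobs. A sketch of the equivalent manual invocation, assuming the raw_reads database under 0-rawreads:
cd 0-rawreads
DBsplit -x500 -s100 -a raw_reads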
Thank you very much! However, there is still a problem, as follows:
make run-synth0
make[3]: Entering directory '/falcon/FALCON-integrate/FALCON-examples'
git-sym update run/synth0
-> in dir 'run/synth0'
<- back to dir '/falcon/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/synth5k'
-> in dir '/falcon/FALCON-integrate/.git/modules/FALCON-examples/git-sym-local/links'
<- back to dir '/falcon/FALCON-integrate/FALCON-examples'
git-sym show run/synth0
-> in dir 'run/synth0'
<- back to dir '/falcon/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/synth5k'
/ run/synth0/data/synth5k .git-sym/synth5k.2016-11-02
git-sym check run/synth0
-> in dir 'run/synth0'
<- back to dir '/falcon/FALCON-integrate/FALCON-examples'
symlink: 'run/synth0/data/synth5k'
cd run/synth0; fc_run.py fc_run.cfg logging.ini
2017-01-10 01:28:21,508[INFO] Setup logging from file "logging.ini".
2017-01-10 01:28:21,509[INFO] fc_run started with configuration fc_run.cfg
2017-01-10 01:28:21,510[INFO] No target specified, assuming "assembly" as target
2017-01-10 01:28:21,517[WARNING] In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/falcon/FALCON-integrate/pypeFLOW/pwatcher/fs_based.pyc'>
2017-01-10 01:28:21,518[INFO] In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/falcon/FALCON-integrate/pypeFLOW/pwatcher/fs_based.pyc'>
2017-01-10 01:28:21,521[INFO] job_type='local', job_queue='production', sge_option='-pe smp 8 -q production', use_tmpdir='/tmp', squash=True
2017-01-10 01:28:21,524[INFO] Num unsatisfied: 1, graph: 1
2017-01-10 01:28:21,530[INFO] About to submit: Node(0-rawreads/raw-fofn-abs)
2017-01-10 01:28:21,532[INFO] starting job Job(jobid='P6d9839a59774b4', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/raw-fofn-abs', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'})
2017-01-10 01:28:21,548[INFO] Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='P6d9839a59774b4', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/raw-fofn-abs', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'}), lang_exe='/bin/bash'))
2017-01-10 01:28:21,551[INFO] sleep 0.1s
2017-01-10 01:28:21,653[INFO] sleep 0.2s
2017-01-10 01:28:21,855[INFO] recently_satisfied: set([Node(0-rawreads/raw-fofn-abs)])
2017-01-10 01:28:21,855[INFO] Num satisfied in this iteration: 1
2017-01-10 01:28:21,855[INFO] Num still unsatisfied: 0
2017-01-10 01:28:21,927[INFO] Num unsatisfied: 1, graph: 2
2017-01-10 01:28:21,927[INFO] About to submit: Node(0-rawreads)
2017-01-10 01:28:21,928[INFO] starting job Job(jobid='P2f2cdf95fa2424', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'})
2017-01-10 01:28:21,939[INFO] Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='P2f2cdf95fa2424', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'}), lang_exe='/bin/bash'))
2017-01-10 01:28:21,940[INFO] sleep 0.1s
2017-01-10 01:28:22,041[INFO] sleep 0.2s
2017-01-10 01:28:22,243[INFO] sleep 0.3s
2017-01-10 01:28:22,545[INFO] recently_satisfied: set([Node(0-rawreads)])
2017-01-10 01:28:22,545[INFO] Num satisfied in this iteration: 1
2017-01-10 01:28:22,545[INFO] Num still unsatisfied: 0
2017-01-10 01:28:22,553[INFO] Num unsatisfied: 1, graph: 3
2017-01-10 01:28:22,553[INFO] About to submit: Node(0-rawreads/daligner-scatter)
2017-01-10 01:28:22,555[INFO] starting job Job(jobid='Pf0536d33491d3d', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'})
2017-01-10 01:28:22,560[INFO] Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='Pf0536d33491d3d', cmd='/bin/bash run.sh', rundir='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter', options={'job_queue': 'production', 'sge_option': '-pe smp 8 -q production', 'job_type': 'local'}), lang_exe='/bin/bash'))
2017-01-10 01:28:22,560[INFO] sleep 0.1s
2017-01-10 01:28:22,662[INFO] sleep 0.2s
2017-01-10 01:28:22,863[INFO] recently_satisfied: set([Node(0-rawreads/daligner-scatter)])
2017-01-10 01:28:22,863[INFO] Num satisfied in this iteration: 1
2017-01-10 01:28:22,863[INFO] Num still unsatisfied: 0
2017-01-10 01:28:22,875[INFO] Num unsatisfied: 2, graph: 5
2017-01-10 01:28:22,875[INFO] About to submit: Node(0-rawreads/job_0000)
2017-01-10 01:28:22,877[INFO] starting job Job(jobid='P56d4326c804e7a', cmd='/bin/bash run.sh', rundir=u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000', options={'job_queue': 'production', 'sge_option': u'-pe smp 8 -q production', 'job_type': 'local'})
2017-01-10 01:28:22,895[INFO] Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='P56d4326c804e7a', cmd='/bin/bash run.sh', rundir=u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000', options={'job_queue': 'production', 'sge_option': u'-pe smp 8 -q production', 'job_type': 'local'}), lang_exe='/bin/bash'))
2017-01-10 01:28:22,899[INFO] sleep 0.1s
2017-01-10 01:28:23,000[INFO] sleep 0.2s
2017-01-10 01:28:23,201[INFO] sleep 0.3s
2017-01-10 01:28:23,503[ERROR] Task Node(0-rawreads/job_0000) failed with exit-code=256
2017-01-10 01:28:23,503[INFO] recently_satisfied: set([])
2017-01-10 01:28:23,517[INFO] Num satisfied in this iteration: 0
2017-01-10 01:28:23,517[INFO] Num still unsatisfied: 2
2017-01-10 01:28:23,533[ERROR] Some tasks are recently_done but not satisfied: set([Node(0-rawreads/job_0000)])
2017-01-10 01:28:23,533[ERROR] ready: set([])
submitted: set([])
Traceback (most recent call last):
File "/falcon/FALCON-integrate/fc_env/bin/fc_run.py", line 6, in <module>
exec(compile(open(__file__).read(), __file__, 'exec'))
File "/falcon/FALCON-integrate/FALCON/src/py_scripts/fc_run.py", line 5, in <module>
main(sys.argv)
File "/falcon/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 461, in main
main1(argv[0], args.config, args.logger)
File "/falcon/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 136, in main1
input_fofn_plf=input_fofn_plf,
File "/falcon/FALCON-integrate/FALCON/falcon_kit/mains/run1.py", line 234, in run
wf.refreshTargets(exitOnFailure=exitOnFailure)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/simple_pwatcher_bridge.py", line 226, in refreshTargets
self._refreshTargets(updateFreq, exitOnFailure)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/simple_pwatcher_bridge.py", line 297, in _refreshTargets
failures, len(unsatg)))
Exception: We had 1 failures. 2 tasks remain unsatisfied.
makefile:4: recipe for target 'run-synth0' failed
make[3]: *** [run-synth0] Error 1
make[3]: Leaving directory '/falcon/FALCON-integrate/FALCON-examples'
makefile:11: recipe for target 'test' failed
make[2]: *** [test] Error 2
make[2]: Leaving directory '/falcon/FALCON-integrate/FALCON-examples'
makefile:67: recipe for target 'test' failed
make[1]: *** [test] Error 2
make[1]: Leaving directory '/falcon/FALCON-integrate/FALCON-make'
makefile:23: recipe for target 'test' failed
make: *** [test] Error 2
The job stderr is as follows:
root@bio:/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/pwatcher.dir# cat stderr
+ python2.7 -m pwatcher.mains.fs_heartbeat --directory=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000 --heartbeat-file=/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/heartbeats/heartbeat-P56d4326c804e7a --exit-file=/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/exits/exit-P56d4326c804e7a --rate=10.0 /bin/bash run.sh
Namespace(command=['/bin/bash', 'run.sh'], directory='/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000', exit_file='/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/exits/exit-P56d4326c804e7a', heartbeat_file='/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/heartbeats/heartbeat-P56d4326c804e7a', rate=10.0)
cwd:'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
hostname=bio
heartbeat_fn='/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/heartbeats/heartbeat-P56d4326c804e7a'
exit_fn='/falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/exits/exit-P56d4326c804e7a'
sleep_s=10.0
before setpgid: pid=3276 pgid=2887
after setpgid: pid=3276 pgid=3276
In cwd: /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000, Blocking call: '/bin/bash run.sh'
cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
+ cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
bash task.sh
+ bash task.sh
2017-01-10 09:28:23,016 - root - DEBUG - Running "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py --tmpdir /tmp /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json"
2017-01-10 09:28:23,017 - root - DEBUG - Checking existence of '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json' with timeout=60
2017-01-10 09:28:23,018 - root - DEBUG - Loading JSON from '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json'
2017-01-10 09:28:23,024 - root - DEBUG - {u'inputs': {u'db_build_done': u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done',
u'scatter_fn': u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter/scattered.json'},
u'outputs': {u'job_done': u'job_0000_done'},
u'parameters': {u'config': {u'avoid_text_file_busy': True,
u'cns_concurrent_jobs': 32,
u'dazcon': False,
u'dust': False,
u'falcon_sense_greedy': False,
u'falcon_sense_option': u'--output_multi --min_idt 0.70 --min_cov 1 --max_n_read 20000 --n_core 0',
u'falcon_sense_skip_contained': False,
u'fc_ovlp_to_graph_option': u' --min_len 1',
u'genome_size': 5000,
u'input_fofn': u'input.fofn',
u'input_type': u'raw',
u'install_prefix': u'/usr/local',
u'job_queue': u'production',
u'job_type': u'local',
u'length_cutoff': -1,
u'length_cutoff_pr': 1,
u'overlap_filtering_setting': u'--max_diff 10000 --max_cov 100000 --min_cov 1 --min_len 1 --bestn 1000 --n_core 0',
u'ovlp_DBsplit_option': u'-a -x5 -s50',
u'ovlp_HPCdaligner_option': u'-v -B4 -t50 -h1 -e.99 -l1 -s1000',
u'ovlp_concurrent_jobs': 32,
u'pa_DBdust_option': u'-w128 -t2.5 -m20',
u'pa_DBsplit_option': u'-x500 -s100 -a',
u'pa_HPCdaligner_option': u'-v -B4 -t50 -h1 -e.99 -w1 -l1 -s1000',
u'pa_concurrent_jobs': 32,
u'pa_dazcon_option': u'-j 4 -x -l 500',
u'pwatcher_directory': u'mypwatcher',
u'pwatcher_type': u'fs_based',
u'seed_coverage': 20.0,
u'sge_option': u'-pe smp 8 -q production',
u'sge_option_cns': u'-pe smp 8 -q production',
u'sge_option_da': u'-pe smp 8 -q production',
u'sge_option_fc': u'-pe smp 24 -q production',
u'sge_option_la': u'-pe smp 2 -q production',
u'sge_option_pda': u'-pe smp 8 -q production',
u'sge_option_pla': u'-pe smp 2 -q production',
u'skip_checks': False,
u'stop_all_jobs_on_failure': False,
u'target': u'assembly',
u'use_tmpdir': True},
u'daligner_script': u'\ndb_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads\nln -sf ${db_dir}/.raw_reads.bps .\nln -sf ${db_dir}/.raw_reads.idx .\nln -sf ${db_dir}/raw_reads.db .\nln -sf ${db_dir}/.raw_reads.dust.anno .\nln -sf ${db_dir}/.raw_reads.dust.data .\ndaligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1\nLAcheck -v raw_reads *.las\nLAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3 && LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S\nLAcheck -vS raw_reads raw_reads.1\n\nrm -f *.C?.las *.C?.S.las *.C??.las *.C??.S.las *.C???.las *.C???.S.las\nrm -f *.N?.las *.N?.S.las *.N??.las *.N??.S.las *.N???.las *.N???.S.las\n',
u'db_prefix': u'raw_reads',
u'job_uid': u'0000',
u'sge_option': u'-pe smp 8 -q production'},
u'python_function': u'falcon_kit.pype_tasks.task_run_daligner'}
2017-01-10 09:28:23,025 - root - DEBUG - CD: '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' <- '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-10 09:28:23,026 - root - DEBUG - Checking existence of u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter/scattered.json' with timeout=60
2017-01-10 09:28:23,026 - root - DEBUG - Checking existence of u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done' with timeout=60
2017-01-10 09:28:23,039 - root - INFO - mkdir -p /tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
2017-01-10 09:28:23,043 - root - DEBUG - CD: '/tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' <- '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-10 09:28:23,044 - pypeflow.do_support - INFO - !/bin/bash -vex /tmp/root/pypetmp/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/rj_0000.sh
#!/bin/bash
set -vex
+ set -vex
db_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads
+ db_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads
ln -sf ${db_dir}/.raw_reads.bps .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.bps .
ln -sf ${db_dir}/.raw_reads.idx .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.idx .
ln -sf ${db_dir}/raw_reads.db .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/raw_reads.db .
ln -sf ${db_dir}/.raw_reads.dust.anno .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.dust.anno .
ln -sf ${db_dir}/.raw_reads.dust.data .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.dust.data .
daligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1
+ daligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1
LAcheck -v raw_reads *.las
+ LAcheck -v raw_reads raw_reads.1.raw_reads.1.C0.las raw_reads.1.raw_reads.1.C1.las raw_reads.1.raw_reads.1.C2.las raw_reads.1.raw_reads.1.C3.las raw_reads.1.raw_reads.1.N0.las raw_reads.1.raw_reads.1.N1.las raw_reads.1.raw_reads.1.N2.las raw_reads.1.raw_reads.1.N3.las
raw_reads.1.raw_reads.1.C0: 0 all OK
raw_reads.1.raw_reads.1.C1: 0 all OK
raw_reads.1.raw_reads.1.C2: 0 all OK
raw_reads.1.raw_reads.1.C3: 0 all OK
raw_reads.1.raw_reads.1.N0: 506 all OK
raw_reads.1.raw_reads.1.N1: 518 all OK
raw_reads.1.raw_reads.1.N2: 462 all OK
raw_reads.1.raw_reads.1.N3: 478 all OK
LAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3 && LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S
+ LAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3
+ LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S
LAmerge: Out of memory (Allocating LAmerge blocks)
2017-01-10 09:28:23,211 - root - DEBUG - CD: '/tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' -> '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-10 09:28:23,212 - root - DEBUG - CD: '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' -> '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-10 09:28:23,213 - root - CRITICAL - Error in /falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py with args="{'json_fn': '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json',\n 'timeout': 60,\n 'tmpdir': '/tmp'}"
Traceback (most recent call last):
File "/usr/local/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/local/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 190, in <module>
main()
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 182, in main
run(**vars(parsed_args))
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 136, in run
run_cfg_in_tmpdir(cfg, tmpdir)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 160, in run_cfg_in_tmpdir
run_python_func(func, myinputs, myoutputs, parameters)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 125, in run_python_func
do_support.run_bash(script_fn)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_support.py", line 51, in run_bash
raise Exception('{} <- {!r}'.format(rc, cmd))
Exception: 256 <- '/bin/bash -vex /tmp/root/pypetmp/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/rj_0000.sh'
real 0m0.260s
user 0m0.104s
sys 0m0.160s
returned: 256
The current fc_run.cfg for synth0 (the default settings, with pa_DBsplit_option changed according to your suggestion):
[General]
use_tmpdir = true
job_type = local
#job_type = sge
#stop_all_jobs_on_failure = true
# list of files of the initial bas.h5 files
input_fofn = input.fofn
#input_fofn = preads.fofn
input_type = raw
#input_type = preads
# The length cutoff used for seed reads used for initial mapping
#length_cutoff = 1
genome_size = 5000
seed_coverage = 20
# The length cutoff used for seed reads used for pre-assembly
length_cutoff_pr = 1
job_queue = production
sge_option_da = -pe smp 8 -q %(job_queue)s
sge_option_la = -pe smp 2 -q %(job_queue)s
sge_option_pda = -pe smp 8 -q %(job_queue)s
sge_option_pla = -pe smp 2 -q %(job_queue)s
sge_option_fc = -pe smp 24 -q %(job_queue)s
sge_option_cns = -pe smp 8 -q %(job_queue)s
pa_concurrent_jobs = 32
cns_concurrent_jobs = 32
ovlp_concurrent_jobs = 32
pa_HPCdaligner_option = -v -B4 -t50 -h1 -e.99 -w1 -l1 -s1000
ovlp_HPCdaligner_option = -v -B4 -t50 -h1 -e.99 -l1 -s1000
#pa_DBsplit_option = -a -x5 -s.00065536
pa_DBsplit_option = -x500 -s100 -a
#pa_DBsplit_option = -a -x5 -s1
ovlp_DBsplit_option = -a -x5 -s50
falcon_sense_option = --output_multi --min_idt 0.70 --min_cov 1 --max_n_read 20000 --n_core 0
#--min_cov_aln 1 --min_len_aln 40
overlap_filtering_setting = --max_diff 10000 --max_cov 100000 --min_cov 1 --min_len 1 --bestn 1000 --n_core 0
#dazcon = 1
I don't know whether this has something to do with my VM host (a small test-bed). I would like to know what type of machine I should use for the FALCON tool.
Thanks.
Try running that LAmerge command from the command line. If that works, then log in to the VM and try it there. Maybe you can narrow down the problem.
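For example, here is a sketch of re-running the failing step in isolation, using the exact commands from the log above (run from the job directory; the sorted .S.las inputs may first need regenerating with LAsort if the tmpdir was cleaned up):
cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
LAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3
LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S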
Sorry, I really don't know how to do what you suggested.
I can run LAmerge without arguments:
root@bio:/falcon/FALCON-integrate/DALIGNER# ./LAmerge
Usage: LAmerge [-va] <merge:las> <parts:las> ...
I tried running run.sh in job_0000, and I got something like this:
root@bio:/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000# bash run.sh
cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
+ cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
bash task.sh
+ bash task.sh
bio
CLASSPATH=.:/home/fang/jdk1.8.0_112/lib:/home/fang/jdk1.8.0_112/jre/lib:
HOME=/root
JAVA_HOME=/home/fang/jdk1.8.0_112
JRE_HOME=/home/fang/jdk1.8.0_112/jre
LANG=en_US.UTF-8
LC_ADDRESS=zh_CN.UTF-8
LC_IDENTIFICATION=zh_CN.UTF-8
LC_MEASUREMENT=zh_CN.UTF-8
LC_MONETARY=zh_CN.UTF-8
LC_NAME=zh_CN.UTF-8
LC_NUMERIC=zh_CN.UTF-8
LC_PAPER=zh_CN.UTF-8
LC_TELEPHONE=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LESSCLOSE=/usr/bin/lesspipe %s %s
LESSOPEN=| /usr/bin/lesspipe %s
LOGNAME=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
MAIL=/var/mail/root
PATH=/root/.pyenv/shims:/root/.pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PWD=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
PYENV_ROOT=/root/.pyenv
PYENV_SHELL=bash
QT_QPA_PLATFORMTHEME=appmenu-qt5
SHELL=/bin/bash
SHLVL=4
SSH_CLIENT=192.168.102.1 51816 22
SSH_CONNECTION=192.168.102.1 51816 192.168.102.128 22
SSH_TTY=/dev/pts/18
TERM=xterm
USER=root
_=/usr/bin/env
XDG_RUNTIME_DIR=/run/user/1000
XDG_SESSION_ID=1
/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
/usr/local/bin/python2.7: No module named pypeflow
real 0m0.029s
user 0m0.008s
sys 0m0.020s
When I use the VM, I do have to log in to it to run the program... What I want to know is whether 2GB of memory is too little for the "synth0" test data.
2GB should be fine for synth0.
/usr/local/bin/python2.7: No module named pypeflow
You probably need to set PYTHONUSERBASE in your shell in the VM. In theory, if it is set in your shell on the main host, then qsub will pass it along to qsub jobs, so it might be missing elsewhere too. See env.sh in the directory where you installed FALCON-integrate.
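For example, a sketch of the variables involved, based on the environment dump in the next comment (the exact contents of env.sh in your checkout may differ):
export PYTHONUSERBASE=/falcon/FALCON-integrate/fc_env
export PATH=$PYTHONUSERBASE/bin:$PATH
# or source the file shipped with FALCON-integrate:
cd /falcon/FALCON-integrate && . ./env.sh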
Thank you.
I got the following errors:
root@bio:/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000# bash run.sh
cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
+ cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
bash task.sh
+ bash task.sh
bio
CLASSPATH=.:/home/fang/jdk1.8.0_112/lib:/home/fang/jdk1.8.0_112/jre/lib:
FALCON_PREFIX=/falcon/FALCON-integrate/fc_env
FALCON_WORKSPACE=/falcon/FALCON-integrate
HOME=/root
JAVA_HOME=/home/fang/jdk1.8.0_112
JRE_HOME=/home/fang/jdk1.8.0_112/jre
LANG=en_US.UTF-8
LC_ADDRESS=zh_CN.UTF-8
LC_IDENTIFICATION=zh_CN.UTF-8
LC_MEASUREMENT=zh_CN.UTF-8
LC_MONETARY=zh_CN.UTF-8
LC_NAME=zh_CN.UTF-8
LC_NUMERIC=zh_CN.UTF-8
LC_PAPER=zh_CN.UTF-8
LC_TELEPHONE=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LESSCLOSE=/usr/bin/lesspipe %s %s
LESSOPEN=| /usr/bin/lesspipe %s
LOGNAME=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
MAIL=/var/mail/root
PATH=/falcon/FALCON-integrate/fc_env/bin:/falcon/FALCON-integrate/fc_env/bin:/root/.pyenv/shims:/root/.pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PWD=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
PYENV_ROOT=/root/.pyenv
PYENV_SHELL=bash
PYTHONUSERBASE=/falcon/FALCON-integrate/fc_env
QT_QPA_PLATFORMTHEME=appmenu-qt5
SHELL=/bin/bash
SHLVL=4
SSH_CLIENT=192.168.102.1 58075 22
SSH_CONNECTION=192.168.102.1 58075 192.168.102.128 22
SSH_TTY=/dev/pts/18
TERM=xterm
USER=root
_=/usr/bin/env
XDG_RUNTIME_DIR=/run/user/1000
XDG_SESSION_ID=1
/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
2017-01-12 23:08:45,776 - root - DEBUG - Running "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py --tmpdir /tmp /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json"
2017-01-12 23:08:45,782 - root - DEBUG - Checking existence of '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json' with timeout=60
2017-01-12 23:08:45,784 - root - DEBUG - Loading JSON from '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json'
2017-01-12 23:08:45,803 - root - DEBUG - {u'inputs': {u'db_build_done': u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done',
u'scatter_fn': u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter/scattered.json'},
u'outputs': {u'job_done': u'job_0000_done'},
u'parameters': {u'config': {u'avoid_text_file_busy': True,
u'cns_concurrent_jobs': 32,
u'dazcon': False,
u'dust': False,
u'falcon_sense_greedy': False,
u'falcon_sense_option': u'--output_multi --min_idt 0.70 --min_cov 1 --max_n_read 20000 --n_core 0',
u'falcon_sense_skip_contained': False,
u'fc_ovlp_to_graph_option': u' --min_len 1',
u'genome_size': 5000,
u'input_fofn': u'input.fofn',
u'input_type': u'raw',
u'install_prefix': u'/usr/local',
u'job_queue': u'production',
u'job_type': u'local',
u'length_cutoff': -1,
u'length_cutoff_pr': 1,
u'overlap_filtering_setting': u'--max_diff 10000 --max_cov 100000 --min_cov 1 --min_len 1 --bestn 1000 --n_core 0',
u'ovlp_DBsplit_option': u'-a -x5 -s50',
u'ovlp_HPCdaligner_option': u'-v -B4 -t50 -h1 -e.99 -l1 -s1000',
u'ovlp_concurrent_jobs': 32,
u'pa_DBdust_option': u'-w128 -t2.5 -m20',
u'pa_DBsplit_option': u'-x500 -s100 -a',
u'pa_HPCdaligner_option': u'-v -B4 -t50 -h1 -e.99 -w1 -l1 -s1000',
u'pa_concurrent_jobs': 32,
u'pa_dazcon_option': u'-j 4 -x -l 500',
u'pwatcher_directory': u'mypwatcher',
u'pwatcher_type': u'fs_based',
u'seed_coverage': 20.0,
u'sge_option': u'-pe smp 8 -q production',
u'sge_option_cns': u'-pe smp 8 -q production',
u'sge_option_da': u'-pe smp 8 -q production',
u'sge_option_fc': u'-pe smp 24 -q production',
u'sge_option_la': u'-pe smp 2 -q production',
u'sge_option_pda': u'-pe smp 8 -q production',
u'sge_option_pla': u'-pe smp 2 -q production',
u'skip_checks': False,
u'stop_all_jobs_on_failure': False,
u'target': u'assembly',
u'use_tmpdir': True},
u'daligner_script': u'\ndb_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads\nln -sf ${db_dir}/.raw_reads.bps .\nln -sf ${db_dir}/.raw_reads.idx .\nln -sf ${db_dir}/raw_reads.db .\nln -sf ${db_dir}/.raw_reads.dust.anno .\nln -sf ${db_dir}/.raw_reads.dust.data .\ndaligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1\nLAcheck -v raw_reads *.las\nLAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3 && LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S\nLAcheck -vS raw_reads raw_reads.1\n\nrm -f *.C?.las *.C?.S.las *.C??.las *.C??.S.las *.C???.las *.C???.S.las\nrm -f *.N?.las *.N?.S.las *.N??.las *.N??.S.las *.N???.las *.N???.S.las\n',
u'db_prefix': u'raw_reads',
u'job_uid': u'0000',
u'sge_option': u'-pe smp 8 -q production'},
u'python_function': u'falcon_kit.pype_tasks.task_run_daligner'}
2017-01-12 23:08:45,807 - root - DEBUG - CD: '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' <- '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-12 23:08:45,807 - root - DEBUG - Checking existence of u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/daligner-scatter/scattered.json' with timeout=60
2017-01-12 23:08:45,859 - root - DEBUG - Checking existence of u'/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/rdb_build_done' with timeout=60
2017-01-12 23:08:46,578 - root - INFO - mkdir -p /tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
2017-01-12 23:08:46,588 - root - DEBUG - CD: '/tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' <- '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-12 23:08:46,609 - pypeflow.do_support - INFO - !/bin/bash -vex /tmp/root/pypetmp/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/rj_0000.sh
#!/bin/bash
set -vex
+ set -vex
db_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads
+ db_dir=/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads
ln -sf ${db_dir}/.raw_reads.bps .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.bps .
ln -sf ${db_dir}/.raw_reads.idx .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.idx .
ln -sf ${db_dir}/raw_reads.db .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/raw_reads.db .
ln -sf ${db_dir}/.raw_reads.dust.anno .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.dust.anno .
ln -sf ${db_dir}/.raw_reads.dust.data .
+ ln -sf /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/.raw_reads.dust.data .
daligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1
+ daligner -v -w1 -h1 -t50 -H2000 -e0.99 -l1 -s1000 raw_reads.1 raw_reads.1
Building index for raw_reads.1
Kshift=28
BSHIFT=8
TooFrequent=50
(Kshift-1)/BSHIFT + (TooFrequent < INT32_MAX)=4
sizeof(KmerPos)=16
nreads=50
Kmer=14
block->reads[nreads].boff=100050
kmers=99350
sizeof(KmerPos)*(kmers+1)=1589616
Allocated 99351 of 16 (1589616 bytes) at 0x7f940cef0010
Kmer count = 99,350
Using 0.00Gb of space
Revised kmer count = 99,350
Index occupies 0.00Gb
Comparing raw_reads.1 to raw_reads.1
Capping mutual k-mer matches over 10000 (effectively -t100)
Hit count = 969,209
Highwater of 0.03Gb space
969,209 14-mers (9.692090e-05 of matrix)
982 seed hits (9.820000e-08 of matrix)
1,964 confirmed hits (1.964000e-07 of matrix)
Building index for c(raw_reads.1)
Kshift=28
BSHIFT=8
TooFrequent=50
(Kshift-1)/BSHIFT + (TooFrequent < INT32_MAX)=4
sizeof(KmerPos)=16
nreads=50
Kmer=14
block->reads[nreads].boff=100050
kmers=99350
sizeof(KmerPos)*(kmers+1)=1589616
Allocated 99351 of 16 (1589616 bytes) at 0xba8d90
Kmer count = 99,350
Using 0.00Gb of space
Revised kmer count = 99,350
Index occupies 0.00Gb
Comparing raw_reads.1 to c(raw_reads.1)
Capping mutual k-mer matches over 10000 (effectively -t100)
Hit count = 0
Highwater of 0.00Gb space
0 14-mers (0.000000e+00 of matrix)
0 seed hits (0.000000e+00 of matrix)
0 confirmed hits (0.000000e+00 of matrix)
LAcheck -v raw_reads *.las
+ LAcheck -v raw_reads raw_reads.1.raw_reads.1.C0.las raw_reads.1.raw_reads.1.C1.las raw_reads.1.raw_reads.1.C2.las raw_reads.1.raw_reads.1.C3.las raw_reads.1.raw_reads.1.N0.las raw_reads.1.raw_reads.1.N1.las raw_reads.1.raw_reads.1.N2.las raw_reads.1.raw_reads.1.N3.las
raw_reads.1.raw_reads.1.C0: 0 all OK
raw_reads.1.raw_reads.1.C1: 0 all OK
raw_reads.1.raw_reads.1.C2: 0 all OK
raw_reads.1.raw_reads.1.C3: 0 all OK
raw_reads.1.raw_reads.1.N0: 506 all OK
raw_reads.1.raw_reads.1.N1: 518 all OK
raw_reads.1.raw_reads.1.N2: 462 all OK
raw_reads.1.raw_reads.1.N3: 478 all OK
LAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3 && LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S
+ LAsort -v raw_reads.1.raw_reads.1.C0 raw_reads.1.raw_reads.1.N0 raw_reads.1.raw_reads.1.C1 raw_reads.1.raw_reads.1.N1 raw_reads.1.raw_reads.1.C2 raw_reads.1.raw_reads.1.N2 raw_reads.1.raw_reads.1.C3 raw_reads.1.raw_reads.1.N3
raw_reads.1.raw_reads.1.C0: 0 records 12 trace bytes
raw_reads.1.raw_reads.1.N0: 506 records 3,012 trace bytes
raw_reads.1.raw_reads.1.C1: 0 records 12 trace bytes
raw_reads.1.raw_reads.1.N1: 518 records 3,124 trace bytes
raw_reads.1.raw_reads.1.C2: 0 records 12 trace bytes
raw_reads.1.raw_reads.1.N2: 462 records 2,764 trace bytes
raw_reads.1.raw_reads.1.C3: 0 records 12 trace bytes
raw_reads.1.raw_reads.1.N3: 478 records 2,844 trace bytes
+ LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.C0.S raw_reads.1.raw_reads.1.N0.S raw_reads.1.raw_reads.1.C1.S raw_reads.1.raw_reads.1.N1.S raw_reads.1.raw_reads.1.C2.S raw_reads.1.raw_reads.1.N2.S raw_reads.1.raw_reads.1.C3.S raw_reads.1.raw_reads.1.N3.S
LAmerge: Out of memory (Allocating LAmerge blocks)
2017-01-12 23:08:47,072 - root - DEBUG - CD: '/tmp/root/pypetmp//falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' -> '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-12 23:08:47,072 - root - DEBUG - CD: '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000' -> '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000'
2017-01-12 23:08:47,073 - root - CRITICAL - Error in /falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py with args="{'json_fn': '/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/task.json',\n 'timeout': 60,\n 'tmpdir': '/tmp'}"
Traceback (most recent call last):
File "/usr/local/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/local/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 190, in <module>
main()
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 182, in main
run(**vars(parsed_args))
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 136, in run
run_cfg_in_tmpdir(cfg, tmpdir)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 160, in run_cfg_in_tmpdir
run_python_func(func, myinputs, myoutputs, parameters)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_task.py", line 125, in run_python_func
do_support.run_bash(script_fn)
File "/falcon/FALCON-integrate/pypeFLOW/pypeflow/do_support.py", line 51, in run_bash
raise Exception('{} <- {!r}'.format(rc, cmd))
Exception: 256 <- '/bin/bash -vex /tmp/root/pypetmp/falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000/rj_0000.sh'
real 0m3.783s
user 0m0.120s
sys 0m0.280s
I can't make it work, and apparently the error is due to LAmerge (the same as before):
LAmerge: Out of memory (Allocating LAmerge blocks)
You should be able to reproduce this problem at the command line. You should investigate whether it occurs only on the remote machine, or also on the server machine (where you start fc_run). And you should investigate the sizes of the input files, and watch the system while it runs to see how much memory you need.
Finally, you can seek help with LAmerge from its author. You can find the DALIGNER repository under the thegenemyers user on GitHub.
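For example, a sketch of that investigation from the job directory (GNU time's -v flag, which reports "Maximum resident set size", is an assumption about the installed /usr/bin/time):
cd /falcon/FALCON-integrate/FALCON-examples/run/synth0/0-rawreads/job_0000
ls -lh raw_reads.1.raw_reads.1.*.S.las    # sizes of the inputs to LAmerge
free -m                                   # memory available before the run
/usr/bin/time -v LAmerge -v raw_reads.1 raw_reads.1.raw_reads.1.*.S.las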
You should be able to reproduce this problem at the command line.
Yes, I already reproduced the problem at the command line (with run.sh); I don't know why that doesn't count as running it from the command line.
You should investigate whether it occurs only on the remote machine, or also on the server machine (where you start fc_run).
I have only one virtual machine on which to run FALCON, not two (i.e., a separate remote machine and server machine).
And you should investigate the sizes of the input files, and watch the system while it runs to see how much memory you need.
The input is the synth0 data, a public file that anyone can download; there is nothing private about it.
Finally, you can seek help with LAmerge from its author. You can find the DALIGNER repository under the thegenemyers user on GitHub.
I think this problem remains, so I don't know why this issue was closed. Maybe FALCON is just a collection of tools, and the administrators here can't help with tool-specific problems? I don't know the rules here: I am not a bio-scientist but a CS person trying to help my bio friends, and I have tried my best to provide everything that could help locate the problem.
@fangvv this repository hosts open-source software. Like many open-source projects, it uses some third-party tools. We can only debug the code if the environment is known. You have the source code and you know your environment, so you can actually do more than we can. You can run LAmerge in isolation and pin down the exact point of failure. Also, the code is designed for large genomes; we expect sufficient memory for assembly.
In the end, I got it running successfully by allocating more memory (about 6 GB) to the VM. I hope this helps you and other users with the same problem.
Thank you.
@fangvv, yes, thanks for sharing. That is interesting. The daligner jobs have their memory limited by -M, but I suppose the merge jobs might use more. I wouldn't have expected that. Maybe the issue is virtual memory, not resident memory. But it's good to know. Thanks.
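For reference, -M caps a daligner job's memory in gigabytes. A sketch of threading it through the config (the -M2 value is illustrative, and whether your HPC.daligner version forwards it to daligner is an assumption):
pa_HPCdaligner_option = -v -B4 -M2 -t50 -h1 -e.99 -w1 -l1 -s1000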
Hello,
When I ran travis.sh, I got a failure like this:
Then I checked /falcon/FALCON-integrate/FALCON-examples/run/synth0/mypwatcher/jobs/P50ca970331351b and found the following stderr:
I think maybe it is because of the memory limitation (I don't know how much memory it needs; I am currently using a 2GB-mem VM host for practice). I hope someone can help me with this issue, thanks.