I am facing the same issue with the same version of Falcon unzip. Any advice on how to set grid parameters for this step would be most helpful. Thank you.
I don't have hands-on experience with SLURM, and I am not sure about PacBio's support along these lines anymore. You might need to add --account in your job submission script, or set the SBATCH_ACCOUNT environment variable for SLURM.
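A minimal sketch of those two approaches, assuming a placeholder account name my_account and the usual fc_unzip.py invocation (adjust names and partitions to your site):

# Option 1: export the SBATCH_ACCOUNT input environment variable before
# launching the workflow; the sbatch calls spawned by the process watcher
# should inherit it ("my_account" is a placeholder).
export SBATCH_ACCOUNT=my_account
fc_unzip.py fc_unzip.cfg

# Option 2: pass --account explicitly wherever you control the sbatch call,
# e.g. in your own submission wrapper.
sbatch --account=my_account --partition=normal --wrap="/bin/bash run.sh"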
I am having the same issue. For me the job type is SGE and the job_queue parameter is also correct. I checked ../3-unzip/reads/task.json and I see sge_option defined outside the config there. I see the same in the files posted by others, so I assume that this is not the cause. Any advice on further debugging would be most helpful. Thank you.
I have set up sge_option_da, sge_option_la, sge_option_pda, etc. correctly, and these work fine. The problem is that the het_call step is also submitted to the cluster, but it does not use any of the defined cluster queue settings. It uses the settings I mentioned above, which I haven't defined and which are wrong.
I haven't been able to track down where in the code this happens, but it does happen, and I am asking for help finding where, so it can be changed to use one of the cluster queue settings that has been defined.
It is not specific to SLURM, since @pkuerten has the same issue with SGE. It is just set up wrong in the code.
Thank you.
phasing_make_het_call used to be another invocation of pypeflow, in a sub-workflow, which caused lots of problems. I think this will work in the latest tarball (although I may have introduced a new bug of not passing the same SGE parameters to the same programs as before). If you have a problem with the latest binaries, let me know and I can help.
Hi. With the latest binaries, I am facing the same issue, but now at an initial step: both dump_rawread_ids and dump_pread_ids have empty sge_options.
Just getting around to this. Deleting the bad binaries. For now, go back to 2017.
Unzip needs to be updated to deal with new Falcon output directories. That will happen today...
The latest binary release should solve this.
I recommend the new pwatcher=blocking. But if you'd rather use the old way (pwatcher=fs_based, the default), you can still avoid setting all those sge options by using something like sge_option = -pe smp ${NPROC}. The old task-specific sge-option overrides should work too. Please let me know if not.
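For reference, a rough sketch of what that could look like in the .cfg, assuming the classic [General] section layout; the exact key names (e.g. pwatcher_type) can differ between releases, so treat this as illustrative only:

[General]
job_type = sge
job_queue = default

# Newer, recommended process watcher; the old default is fs_based.
# (Key name assumed here; check the config handling in your release.)
pwatcher_type = blocking

# Generic fallback used by tasks without their own override (e.g. het_call);
# ${NPROC} is filled in per task.
sge_option = -pe smp ${NPROC}

# Task-specific overrides still take precedence where defined.
sge_option_da = -pe smp 8
sge_option_pda = -pe smp 8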
Hi, in my *.cfg file I have grid options for sge_phasing, sge_quiver, sge_track_reads, sge_blasr_aln and sge_hasm, which seem to work fine. However, when FALCON-unzip gets to about this point: [INFO]About to submit: Node(3-unzip/0-phasing/002810F/het_call), it tries to submit jobs without proper configuration:
[INFO]starting job Job(jobid='Pf15a6836853c21', cmd='/bin/bash run.sh', rundir='/work/users/olekto/eremar/falcon-unzip/3-unzip/0-phasing/002129F/het_call', options={'job_queue': 'normal', 'sge_option': '', 'job_type': 'slurm'})
[INFO]!/opt/slurm/bin/sbatch -J Pf15a6836853c21 -p normal -D /work/users/olekto/eremar/falcon-unzip/mypwatcher/jobs/Pf15a6836853c21 -o stdout -e stderr --wrap="/bin/bash /work/users/olekto/eremar/falcon-unzip/mypwatcher/wrappers/run-Pf15a6836853c21.bash"
sbatch: error: Account specification required, but not provided
sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified
How can I set sge_option for 'het_call' or whatever it is really called? I tried looking at the code, but couldn't find the proper place.
I am using the prebuilt binaries called falcon-2017.11.02-16.04-py2.7-ucs4.tar.gz.
Thank you.
Ole