Some `doLastzClusterRun` jobs exceed the hardcoded `process.time` of two days.
https://github.com/hillerlab/make_lastz_chains/blob/187e313afc10382fe44c96e47f27c4466d63e114/constants.py#L110
The default behavior for these jobs seems to be to rerun. However, the result appears to be an infinitely rerunning failing job. (Maybe there is an upper limit to the number of reruns, but for two-day-long jobs I haven't seen that limit reached.)
An easy solution would be to make `JOB_TIME_REQ` customizable: keep 48h as the default and let users override it with a command line option.

I suppose it is also possible to split the genomes into smaller chunks, but I do not know enough about genome-genome alignments to say whether that might affect the resulting alignments. If it might, I would prefer increasing `JOB_TIME_REQ`; a rough sketch of the command line override is below.
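For illustration only, here is a minimal Python sketch of what such an override could look like. The option name `--job_time_req` and the helper function are hypothetical, not the existing make_lastz_chains interface; the real pipeline would presumably add the option to its own argument parser and config generation.

```python
# Hypothetical sketch -- not the actual make_lastz_chains CLI.
# Shows plumbing a user-overridable time limit through to the Nextflow config.
import argparse

DEFAULT_JOB_TIME_REQ = "48h"  # currently hardcoded in constants.py


def parse_args():
    parser = argparse.ArgumentParser()
    # Hypothetical option name; defaults preserve current behavior.
    parser.add_argument(
        "--job_time_req",
        default=DEFAULT_JOB_TIME_REQ,
        help="Max wall time per cluster job (e.g. '48h', '96h'), "
             "written to the Nextflow process.time setting.",
    )
    return parser.parse_args()


def write_nextflow_config(job_time_req, path="nextflow.config"):
    # Hypothetical helper: emit the process.time line that is currently fixed at two days.
    with open(path, "w") as f:
        f.write(f"process.time = '{job_time_req}'\n")


if __name__ == "__main__":
    args = parse_args()
    write_nextflow_config(args.job_time_req)
```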
This issue is related to #43 and perhaps #48.
Personally, a very long `JOB_TIME_REQ` is fine for me, since I run the Nextflow process with the local executor within a single large cluster job.