Closed. HFzzzzzzz closed this issue 1 year ago.
This is a fairly cryptic issue. But I guess you're referring to the MAF block size from hal2maf? You can increase it with --maxBlockLen, but it's best to use cactus-hal2maf. It runs a normalization to make bigger blocks, though the block size will still be limited by rearrangements etc.
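For reference, the two styles of invocation being compared look roughly like this; the HAL/MAF file names, the ./js job store, REF and the --maxBlockLen value are placeholders, and the flags are the ones discussed later in this thread:
hal2maf alignment.hal alignment.maf --refGenome REF --maxBlockLen 10000
cactus-hal2maf ./js alignment.hal alignment.maf.gz --refGenome REF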
--maxRefGap doesn't work properly and probably doesn't do what you want even if it did. Please see https://github.com/ComparativeGenomicsToolkit/cactus/blob/master/doc/progressive.md#maf for exporting to MAF.
Thank you very much, I will try it and give you feedback
I can't find the --maxBlockLen parameter.
I used this command: cactus-hal2maf ./js ./evolverMammals.hal evolverMammals.maf.gz --refGenome simHuman_chr6 --chunkSize 1000000 --noAncestors --onlyOrthologs
[2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Setting batchCores to 36 [2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Enabling realtime logging in Toil [2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Command: /media/zyh/disk2/cactus/cactus_env/bin/cactus-hal2maf ./jobstore ./evolverMammals.hal evolverMammals.maf.gz --refGenome simHuman_chr6 --chunkSize 1000000 --noAncestors --onlyOrthologs [2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Commit: 6fa6e065f1e688bd50dbe75f9a52ef2cd6ed55d2 [2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Using default batch count of 1 [2023-01-12T16:37:05+0800] [MainThread] [I] [toil.statsAndLogging] Importing ./evolverMammals.hal [2023-01-12T16:37:09+0800] [MainThread] [I] [toil.job] Saving graph of 1 jobs, 1 new [2023-01-12T16:37:09+0800] [MainThread] [I] [toil.job] Processing job 'hal2maf_workflow' kind-hal2maf_workflow/instance-wvjibpe2 v0 [2023-01-12T16:37:09+0800] [MainThread] [I] [toil] Running Toil version 5.7.1-b5cae9634820d76cb6c13b2a6312895122017d54 on host gene2. [2023-01-12T16:37:09+0800] [MainThread] [I] [toil.realtimeLogger] Starting real-time logging. [2023-01-12T16:37:09+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_workflow' kind-hal2maf_workflow/instance-wvjibpe2 v1 with job batch system ID: 0 and cores: 1, disk: 2.0 Gi, and memory: 2.0 Gi [2023-01-12T16:37:10+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/798b44d89ab850ae8d39978709157e25/5779/worker_log.txt [2023-01-12T16:37:10+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/798b44d89ab850ae8d39978709157e25/5779/5665/tmpt1yv0 kl5/evolverMammals.hal [2023-01-12T16:37:10+0800] [MainThread] [I] [toil-rt] Computing range information [2023-01-12T16:37:10+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:10.335839: Running the command: "halStats /tmp/798b44d89ab850ae8d399787091 57e25/5779/5665/tmpt1yv0kl5/evolverMammals.hal --sequenceStats simHuman_chr6" [2023-01-12T16:37:10+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:10.417483: Successfully ran: "halStats /tmp/798b44d89ab850ae8d39978709157e 25/5779/5665/tmpt1yv0kl5/evolverMammals.hal --sequenceStats simHuman_chr6" in 0.0808 seconds [2023-01-12T16:37:10+0800] [Thread-4 (statsAndLoggingAggregator)] [W] [toil.statsAndLogging] Got message from job at time 01-12-2023 16:37:10: Jo b used more disk than requested. For CWL, consider increasing the outdirMin requirement, otherwise, consider increasing the disk requirement. Job 'hal2maf_ranges' kind-hal2maf_workflow/instance-wvjibpe2 v1 used 100.19% disk (4.7 MiB [4886528B] used, 4.7 MiB [4877162B] requested). 
[2023-01-12T16:37:10+0800] [MainThread] [I] [toil.leader] 0 jobs are running, 0 jobs are issued and waiting to run [2023-01-12T16:37:10+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_all' kind-hal2maf_all/instance-tw12pdsg v1 with job batch system ID : 1 and cores: 1, disk: 2.0 Gi, and memory: 2.0 Gi [2023-01-12T16:37:10+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/798b44d89ab850ae8d39978709157e25/17db/worker_log.txt [2023-01-12T16:37:10+0800] [MainThread] [I] [toil-rt] Setting batchSize to 1 [2023-01-12T16:37:10+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v1 with job batch syste m ID: 2 and cores: 36, disk: 11.6 Mi, and memory: 2.0 Gi [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/798b44d89ab850ae8d39978709157e25/661e/worker_log.txt [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/798b44d89ab850ae8d39978709157e25/661e/2180/tmptb7yp jhr/evolverMammals.hal [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] First of 1 commands in parallel batch: set -eo pipefail && (time -p hal2maf evolverMammals .hal stdout --refGenome simHuman_chr6 --refSequence simHuman.chr6 --start 0 --length 601863 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip
0.maf.gz [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:11.255281: Running the command: "bash -c set -eo pipefail && cat /tmp/798b 44d89ab850ae8d39978709157e25/661e/2180/tmptb7ypjhr/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T16:37:11+0800] [Thread-1 (daddy)] [E] [toil.batchSystems.singleMachine] Got exit code 1 (indicating failure) from job _toil_worker ha l2maf_batch file:/media/zyh/disk2/cactus/jobstore kind-hal2maf_batch/instance-7ultgxw6. [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] Job failed with exit value 1: 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v1 Exit reason: None [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] The job seems to have left a log file, indicating failure: 'hal2maf_batch' kind-hal2maf _batch/instance-7ultgxw6 v2 [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] Log from job "kind-hal2maf_batch/instance-7ultgxw6" follows: =========> [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] ---TOIL WORKER OUTPUT LOG--- [2023-01-12T16:37:11+0800] [MainThread] [I] [toil] Running Toil version 5.7.1-b5cae9634820d76cb6c13b2a6312895122017d54 on host gene2. [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Working on job 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v1 [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Loaded body Job('hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v1) from d escription 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v1 [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/798b44d89ab850ae8d39978709157e25/661e/2180/ tmptb7ypjhr/evolverMammals.hal [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] First of 1 commands in parallel batch: set -eo pipefail && (time -p hal2maf evolve rMammals.hal stdout --refGenome simHuman_chr6 --refSequence simHuman.chr6 --start 0 --length 601863 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:11.255281: Running the command: "bash -c set -eo pipefail && cat / tmp/798b44d89ab850ae8d39978709157e25/661e/2180/tmptb7ypjhr/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Failed job accessed files: [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/no-job/file-29fe0c5ae8f24b63900e07 e680ef14f7/evolverMammals.hal' to path '/tmp/798b44d89ab850ae8d39978709157e25/661e/2180/tmptb7ypjhr/evolverMammals.hal' Traceback (most recent call last): File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/worker.py", line 407, in workerScript job._runner(jobGraph=None, jobStore=jobStore, fileStore=fileStore, defer=defer) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2406, in _runner returnValues = self._run(jobGraph=None, fileStore=fileStore) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2324, in _run return self.run(fileStore) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2547, in run rValue = userFunction(*((self,) + tuple(self._args)), **self._kwargs) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/cactus/maf/cactus_hal2maf.py", line 341, in hal2maf_batch cactus_call(parameters=parallel_cmd, work_dir=work_dir) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/cactus/shared/common.py", line 
814, in cactus_call raise RuntimeError("{}Command {} exited {}: {}".format(sigill_msg, call, process.returncode, out)) RuntimeError: Command ['bash', '-c', "set -eo pipefail && cat /tmp/798b44d89ab850ae8d39978709157e25/661e/2180/tmptb7ypjhr/hal2maf_cmds.tx t | parallel -j 36 '{}'"] exited 127: stdout=None, stderr=bash: line 1: parallel: command not found
[2023-01-12T16:37:11+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host gene2
<========= [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'hal2maf_batch' kind-hal2maf _batch/instance-7ultgxw6 v2 with ID kind-hal2maf_batch/instance-7ultgxw6 to 1 [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.job] We have increased the disk of the failed job 'hal2maf_batch' kind-hal2maf_batch/instance-7 ultgxw6 v2 to the default of 2147483648 bytes [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v3 with job batch syste m ID: 3 and cores: 36, disk: 2.0 Gi, and memory: 2.0 Gi [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/798b44d89ab850ae8d39978709157e25/7876/worker_log.txt [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/798b44d89ab850ae8d39978709157e25/7876/f3ce/tmp3memp j03/evolverMammals.hal [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] First of 1 commands in parallel batch: set -eo pipefail && (time -p hal2maf evolverMammals .hal stdout --refGenome simHuman_chr6 --refSequence simHuman.chr6 --start 0 --length 601863 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip
0.maf.gz [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:11.745208: Running the command: "bash -c set -eo pipefail && cat /tmp/798b 44d89ab850ae8d39978709157e25/7876/f3ce/tmp3mempj03/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T16:37:11+0800] [Thread-1 (daddy)] [E] [toil.batchSystems.singleMachine] Got exit code 1 (indicating failure) from job _toil_worker ha l2maf_batch file:/media/zyh/disk2/cactus/jobstore kind-hal2maf_batch/instance-7ultgxw6. [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] Job failed with exit value 1: 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v3 Exit reason: None [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] The job seems to have left a log file, indicating failure: 'hal2maf_batch' kind-hal2maf _batch/instance-7ultgxw6 v5 [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] Log from job "kind-hal2maf_batch/instance-7ultgxw6" follows: =========> [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] ---TOIL WORKER OUTPUT LOG--- [2023-01-12T16:37:11+0800] [MainThread] [I] [toil] Running Toil version 5.7.1-b5cae9634820d76cb6c13b2a6312895122017d54 on host gene2. [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Working on job 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v4 [2023-01-12T16:37:11+0800] [MainThread] [I] [toil.worker] Loaded body Job('hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v4) from d escription 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v4 [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/798b44d89ab850ae8d39978709157e25/7876/f3ce/ tmp3mempj03/evolverMammals.hal [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] First of 1 commands in parallel batch: set -eo pipefail && (time -p hal2maf evolve rMammals.hal stdout --refGenome simHuman_chr6 --refSequence simHuman.chr6 --start 0 --length 601863 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T16:37:11+0800] [MainThread] [I] [toil-rt] 2023-01-12 16:37:11.745208: Running the command: "bash -c set -eo pipefail && cat / tmp/798b44d89ab850ae8d39978709157e25/7876/f3ce/tmp3mempj03/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Failed job accessed files: [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/no-job/file-29fe0c5ae8f24b63900e07 e680ef14f7/evolverMammals.hal' to path '/tmp/798b44d89ab850ae8d39978709157e25/7876/f3ce/tmp3mempj03/evolverMammals.hal' Traceback (most recent call last): File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/worker.py", line 407, in workerScript job._runner(jobGraph=None, jobStore=jobStore, fileStore=fileStore, defer=defer) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2406, in _runner returnValues = self._run(jobGraph=None, fileStore=fileStore) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2324, in _run return self.run(fileStore) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/toil/job.py", line 2547, in run rValue = userFunction(*((self,) + tuple(self._args)), **self._kwargs) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/cactus/maf/cactus_hal2maf.py", line 341, in hal2maf_batch cactus_call(parameters=parallel_cmd, work_dir=work_dir) File "/media/zyh/disk2/cactus/cactus_env/lib/python3.10/site-packages/cactus/shared/common.py", line 
814, in cactus_call raise RuntimeError("{}Command {} exited {}: {}".format(sigill_msg, call, process.returncode, out)) RuntimeError: Command ['bash', '-c', "set -eo pipefail && cat /tmp/798b44d89ab850ae8d39978709157e25/7876/f3ce/tmp3mempj03/hal2maf_cmds.tx t | parallel -j 36 '{}'"] exited 127: stdout=None, stderr=bash: line 1: parallel: command not found
[2023-01-12T16:37:11+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host gene2
<========= [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'hal2maf_batch' kind-hal2maf _batch/instance-7ultgxw6 v5 with ID kind-hal2maf_batch/instance-7ultgxw6 to 0 [2023-01-12T16:37:11+0800] [MainThread] [W] [toil.leader] Job 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v6 is completely failed [2023-01-12T16:37:18+0800] [MainThread] [I] [toil.leader] Finished toil run with 3 failed jobs. [2023-01-12T16:37:18+0800] [MainThread] [I] [toil.leader] Failed jobs at end of the run: 'hal2maf_all' kind-hal2maf_all/instance-tw12pdsg v3 'hal 2maf_ranges' kind-hal2maf_workflow/instance-wvjibpe2 v2 'hal2maf_batch' kind-hal2maf_batch/instance-7ultgxw6 v6
Workflow Progress 100%|████████████████████████████████████████████████████████████████████████████| 4/4 (2 failures) [00:08<00:00, 0.50 jobs/s]
[2023-01-12T16:37:18+0800] [MainThread] [I] [toil.realtimeLogger] Stopping real-time logging server.
[2023-01-12T16:37:19+0800] [MainThread] [I] [toil.realtimeLogger] Joining real-time logging server thread.
Traceback (most recent call last):
File "/media/zyh/disk2/cactus/cactus_env/bin/cactus-hal2maf", line 8, in
And when I use --maxBlockLen:
cactus-hal2maf: error: unrecognized arguments: --maxBlockLen 50000
Good point. cactus-hal2maf depends on GNU parallel being installed on your system. For example, on Ubuntu you can install it with sudo apt install parallel. It's included in the Cactus Docker images but not in the binary release. I'll see about adding it in the next version, but for now you will need to install it yourself in order to use this tool.
It's an annoying dependency to have. It's only there for giant HAL files, in order to create large multiprocess Toil jobs and avoid repeated downloads of terabyte-scale data (Toil's caching system doesn't have an interface to handle this). A more portable way may be to do it directly in Python with something like a thread pool.
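A minimal sketch of that thread-pool idea, assuming the batch is just a text file of shell commands like the hal2maf_cmds.txt visible in the logs above; this is illustrative only, and the file name and job count are placeholders, not Cactus's actual code:

# Illustrative sketch: run a batch of shell commands (one per line, as in
# hal2maf_cmds.txt) with a Python thread pool instead of GNU parallel.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_cmd(cmd):
    # Each command is a full shell pipeline, so execute it through bash.
    return subprocess.run(['bash', '-c', cmd], capture_output=True, text=True)

def run_batch(cmd_file, jobs=36):
    with open(cmd_file) as f:
        cmds = [line.strip() for line in f if line.strip()]
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        results = list(pool.map(run_cmd, cmds))
    failed = [r for r in results if r.returncode != 0]
    for r in failed:
        # Surface each failed command's stderr instead of hiding it.
        sys.stderr.write(r.stderr)
    return len(failed) == 0

if __name__ == '__main__':
    ok = run_batch('hal2maf_cmds.txt', jobs=36)
    sys.exit(0 if ok else 1)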
That's useful. Has --maxBlockLen been replaced by the --chunkSize parameter?
cactus-hal2maf: error: unrecognized arguments: --maxBlockLen 5000000
--maxBlockLen is a hal2maf option, not a cactus-hal2maf option. cactus-hal2maf makes the longest blocks it can (block length is limited by the data). It will (in many cases) make longer blocks than hal2maf --maxBlockLen 1000000000000000000000000.
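If it helps to verify what you're getting, here is a small, illustrative Python snippet (not part of Cactus) that reports the longest block in a MAF, measured as the size field of the first 's' row of each 'a' block; the input file name is whatever MAF you produced:

# Illustrative only: report the longest block in a (possibly gzipped) MAF,
# measured as the 'size' field of the first 's' row in each 'a' block.
import gzip
import sys

def longest_block(path):
    opener = gzip.open if path.endswith('.gz') else open
    longest = 0
    expect_ref_row = False
    with opener(path, 'rt') as f:
        for line in f:
            if line.startswith('a'):
                expect_ref_row = True            # next 's' line is the reference row
            elif line.startswith('s') and expect_ref_row:
                # MAF 's' line fields: s src start size strand srcSize text
                longest = max(longest, int(line.split()[3]))
                expect_ref_row = False
    return longest

if __name__ == '__main__':
    print(longest_block(sys.argv[1]))

Run it as, for example, python3 longest_block.py evolverMammals.maf.gz (the script name is arbitrary).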
I have succeeded with some examples, but some have failed, and I still get errors like before:
[2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Setting batchCores to 36 [2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Enabling realtime logging in Toil [2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Command: /media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/bin/cactus-hal2maf ./js3 /media/zhf/ext1/new_five/ancestor.hal /media/zhf/ext1/new_five/ancestor.maf.gz --refGenome N0 --chunkSize 10000000 --noAncestors --onlyOrthologs [2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Commit: 47f9079cc31a5533ffb76f038480fdec1b6f7c4f [2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Using default batch count of 1 [2023-01-12T22:35:52+0800] [MainThread] [I] [toil.statsAndLogging] Importing /media/zhf/ext1/new_five/ancestor.hal [2023-01-12T22:35:56+0800] [MainThread] [I] [toil.job] Saving graph of 1 jobs, 1 new [2023-01-12T22:35:56+0800] [MainThread] [I] [toil.job] Processing job 'hal2maf_workflow' kind-hal2maf_workflow/instance-ocfe9vzs v0 [2023-01-12T22:35:56+0800] [MainThread] [I] [toil] Running Toil version 5.8.0-79792b70098c4c18d1d2c2832b72085893f878d1 on host zhf-Precision-5820-Tower. [2023-01-12T22:35:56+0800] [MainThread] [I] [toil.realtimeLogger] Starting real-time logging. [2023-01-12T22:35:56+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_workflow' kind-hal2maf_workflow/instance-ocfe9vzs v1 with job batch system ID: 0 and disk: 2.0 Gi, m emory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False [2023-01-12T22:35:57+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/ffaf/worker_log.txt [2023-01-12T22:35:57+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/ffaf/4b61/tmp9wljx3iz/ancestor.hal [2023-01-12T22:35:57+0800] [MainThread] [I] [toil-rt] Computing range information [2023-01-12T22:35:57+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:57.681113: Running the command: "halStats /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/ffaf/4b61/tmp9wljx3iz/ances tor.hal --sequenceStats N0" [2023-01-12T22:35:57+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:57.691468: Successfully ran: "halStats /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/ffaf/4b61/tmp9wljx3iz/ancestor .hal --sequenceStats N0" in 0.0055 seconds [2023-01-12T22:35:57+0800] [Thread-4 ] [W] [toil.statsAndLogging] Got message from job at time 01-12-2023 22:35:57: Job used more disk than requested. For CWL, consider increasi ng the outdirMin requirement, otherwise, consider increasing the disk requirement. Job 'hal2maf_ranges' kind-hal2maf_workflow/instance-ocfe9vzs v1 used 100.00% disk (439.6 MiB [4 60976128B] used, 439.6 MiB [460965407B] requested). 
[2023-01-12T22:35:57+0800] [MainThread] [I] [toil.leader] 0 jobs are running, 0 jobs are issued and waiting to run [2023-01-12T22:35:57+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_all' kind-hal2mafall/instance-ja7njf8 v1 with job batch system ID: 1 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False [2023-01-12T22:35:58+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/8b09/worker_log.txt [2023-01-12T22:35:58+0800] [MainThread] [I] [toil-rt] Setting batchSize to 13 [2023-01-12T22:35:58+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v1 with job batch system ID: 2 and disk: 1.1 Gi, memory: 2.0 Gi, cores: 36, accelerators: [], preemptable: False [2023-01-12T22:35:58+0800] [MainThread] [I] [toil.worker] Redirecting logging to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa/worker_log.txt [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa/7480/tmpqlbsc9r4/ancestor.hal [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] First of 13 commands in parallel batch: set -eo pipefail && (time -p hal2maf ancestor.hal stdout --refGenome N0 --refSequen ce N0refChr0 --start 0 --length 10000000 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:59.019668: Running the command: "bash -c set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa /7480/tmpqlbsc9r4/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T22:35:59+0800] [Thread-1 ] [E] [toil.batchSystems.singleMachine] Got exit code 1 (indicating failure) from job _toil_worker hal2maf_batch file:/media/zhf/ext1/cactus -bin-v2.4.0/js3 kind-hal2maf_batch/instance-6x1txvkf. [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.leader] Job failed with exit value 1: 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v1 Exit reason: None [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.leader] The job seems to have left a log file, indicating failure: 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v2 [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.leader] Log from job "kind-hal2maf_batch/instance-6x1txvkf" follows: =========> [2023-01-12T22:35:58+0800] [MainThread] [I] [toil.worker] ---TOIL WORKER OUTPUT LOG--- [2023-01-12T22:35:58+0800] [MainThread] [I] [toil] Running Toil version 5.8.0-79792b70098c4c18d1d2c2832b72085893f878d1 on host zhf-Precision-5820-Tower. 
[2023-01-12T22:35:58+0800] [MainThread] [I] [toil.worker] Working on job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v1 [2023-01-12T22:35:59+0800] [MainThread] [I] [toil.worker] Loaded body Job('hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v1) from description 'hal2maf_batch' kind-h al2maf_batch/instance-6x1txvkf v1 [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa/7480/tmpqlbsc9r4/ancestor.hal [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] First of 13 commands in parallel batch: set -eo pipefail && (time -p hal2maf ancestor.hal stdout --refGenome N0 --r efSequence N0refChr0 --start 0 --length 10000000 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:59.019668: Running the command: "bash -c set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a 1ee/c8aa/7480/tmpqlbsc9r4/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Failed job accessed files: [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/no-job/file-e60e03ca98684aaf972207cd23f3956b/ancestor.hal' to path '/tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa/7480/tmpqlbsc9r4/ancestor.hal' Traceback (most recent call last): File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/worker.py", line 403, in workerScript job._runner(jobGraph=None, jobStore=jobStore, fileStore=fileStore, defer=defer) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2727, in _runner returnValues = self._run(jobGraph=None, fileStore=fileStore) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2644, in _run return self.run(fileStore) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2875, in run rValue = userFunction(*((self,) + tuple(self._args)), *self._kwargs) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/cactus/maf/cactus_hal2maf.py", line 341, in hal2maf_batch cactus_call(parameters=parallel_cmd, work_dir=work_dir) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/cactus/shared/common.py", line 824, in cactus_call raise RuntimeError("{}Command {} exited {}: {}".format(sigill_msg, call, process.returncode, out)) RuntimeError: Command ['bash', '-c', "set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/c8aa/7480/tmpqlbsc9r4/hal2maf_cmds.txt | parallel -j 36 '{}'"] exited 13: stdout=None, stderr= [2023-01-12T22:35:59+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host zhf-Precision-5820-Tower <========= [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v2 with ID kind-hal2maf_batch/instance-6x1txvkf to 1 [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.job] We have increased the disk of the failed job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v2 to the default of 2147 483648 bytes [2023-01-12T22:35:59+0800] [MainThread] [I] [toil.leader] Issued job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v3 with job batch system ID: 3 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 36, accelerators: [], preemptable: False [2023-01-12T22:35:59+0800] 
[MainThread] [I] [toil.worker] Redirecting logging to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df/worker_log.txt [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df/dc6d/tmpjeuoqfmb/ancestor.hal [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] First of 13 commands in parallel batch: set -eo pipefail && (time -p hal2maf ancestor.hal stdout --refGenome N0 --refSequen ce N0refChr0 --start 0 --length 10000000 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:59.832985: Running the command: "bash -c set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df /dc6d/tmpjeuoqfmb/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T22:36:00+0800] [Thread-1 ] [E] [toil.batchSystems.singleMachine] Got exit code 1 (indicating failure) from job _toil_worker hal2maf_batch file:/media/zhf/ext1/cactus -bin-v2.4.0/js3 kind-hal2maf_batch/instance-6x1txvkf. [2023-01-12T22:36:00+0800] [MainThread] [W] [toil.leader] Job failed with exit value 1: 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v3 Exit reason: None [2023-01-12T22:36:00+0800] [MainThread] [W] [toil.leader] The job seems to have left a log file, indicating failure: 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v5 [2023-01-12T22:36:00+0800] [MainThread] [W] [toil.leader] Log from job "kind-hal2maf_batch/instance-6x1txvkf" follows: =========> [2023-01-12T22:35:59+0800] [MainThread] [I] [toil.worker] ---TOIL WORKER OUTPUT LOG--- [2023-01-12T22:35:59+0800] [MainThread] [I] [toil] Running Toil version 5.8.0-79792b70098c4c18d1d2c2832b72085893f878d1 on host zhf-Precision-5820-Tower. [2023-01-12T22:35:59+0800] [MainThread] [I] [toil.worker] Working on job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v4 [2023-01-12T22:35:59+0800] [MainThread] [I] [toil.worker] Loaded body Job('hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v4) from description 'hal2maf_batch' kind-h al2maf_batch/instance-6x1txvkf v4 [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] Reading HAL file from job store to /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df/dc6d/tmpjeuoqfmb/ancestor.hal [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] First of 13 commands in parallel batch: set -eo pipefail && (time -p hal2maf ancestor.hal stdout --refGenome N0 --r efSequence N0refChr0 --start 0 --length 10000000 --onlyOrthologs --noAncestors) 2> 0.h2m.time | bgzip > 0.maf.gz [2023-01-12T22:35:59+0800] [MainThread] [I] [toil-rt] 2023-01-12 22:35:59.832985: Running the command: "bash -c set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a 1ee/77df/dc6d/tmpjeuoqfmb/hal2maf_cmds.txt | parallel -j 36 '{}'" [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Failed job accessed files: [2023-01-12T22:35:59+0800] [MainThread] [W] [toil.fileStores.abstractFileStore] Downloaded file 'files/no-job/file-e60e03ca98684aaf972207cd23f3956b/ancestor.hal' to path '/tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df/dc6d/tmpjeuoqfmb/ancestor.hal' Traceback (most recent call last): File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/worker.py", line 403, in workerScript job._runner(jobGraph=None, jobStore=jobStore, fileStore=fileStore, defer=defer) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2727, in _runner returnValues = self._run(jobGraph=None, fileStore=fileStore) File 
"/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2644, in _run return self.run(fileStore) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/toil/job.py", line 2875, in run rValue = userFunction(((self,) + tuple(self._args)), **self._kwargs) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/cactus/maf/cactus_hal2maf.py", line 341, in hal2maf_batch cactus_call(parameters=parallel_cmd, work_dir=work_dir) File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/cactus/shared/common.py", line 824, in cactus_call raise RuntimeError("{}Command {} exited {}: {}".format(sigill_msg, call, process.returncode, out)) RuntimeError: Command ['bash', '-c', "set -eo pipefail && cat /tmp/fc2b53cc30295c7095020b1fa5e2a1ee/77df/dc6d/tmpjeuoqfmb/hal2maf_cmds.txt | parallel -j 36 '{}'"] exited 13: stdout=None, stderr= [2023-01-12T22:35:59+0800] [MainThread] [E] [toil.worker] Exiting the worker because of a failed job on host zhf-Precision-5820-Tower <========= [2023-01-12T22:36:00+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v5 with ID kind-hal2maf_batch/instance-6x1txvkf to 0 [2023-01-12T22:36:00+0800] [MainThread] [W] [toil.leader] Job 'hal2maf_batch' kind-hal2maf_batch/instance-6x1txvkf v6 is completely failed [2023-01-12T22:36:07+0800] [MainThread] [I] [toil.leader] Finished toil run with 3 failed jobs. [2023-01-12T22:36:07+0800] [MainThread] [I] [toil.leader] Failed jobs at end of the run: 'hal2maf_all' kind-hal2mafall/instance-ja7njf8 v3 'hal2maf_batch' kind-hal2maf_batch/in stance-6x1txvkf v6 'hal2maf_ranges' kind-hal2maf_workflow/instance-ocfe9vzs v2
Workflow Progress 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 (2 failures) [00:09<00:00, 0.43 jobs/s]
[2023-01-12T22:36:07+0800] [MainThread] [I] [toil.realtimeLogger] Stopping real-time logging server.
[2023-01-12T22:36:08+0800] [MainThread] [I] [toil.realtimeLogger] Joining real-time logging server thread.
Traceback (most recent call last):
File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/bin/cactus-hal2maf", line 8, in
I should add that the MAF block length is often drastically shortened by the presence of duplications. We have a filter to address this (greedily remove block-breaking duplications): https://github.com/ComparativeGenomicsToolkit/taffy/pull/15, but it's not yet released.
Can you help me see why this is failing? It works on the example data, but I can't convert my own HAL file. This is the error:
Workflow Progress 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 (2 failures) [00:45<00:00, 0.09 jobs/s]
[2023-01-12T22:44:21+0800] [MainThread] [I] [toil.realtimeLogger] Stopping real-time logging server.
[2023-01-12T22:44:22+0800] [MainThread] [I] [toil.realtimeLogger] Joining real-time logging server thread.
Traceback (most recent call last):
File "/media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/bin/cactus-hal2maf", line 8, in
It's probably running out of memory. The log isn't very helpful: the tool uses stderr to log the time and memory consumption, but in the event of a fatal crash it should print all the stderr output somehow. I will see about fixing this.
OK, the logging should be fixed by https://github.com/ComparativeGenomicsToolkit/cactus/pull/901/commits/c7db27c315b35203d9cee56449e686d4c470aeb8, which will dump all the stderr to the logs in the event of a crash. You can apply it as a patch to your Cactus by finding cactus_hal2maf.py in your Cactus virtualenv directory and replacing it with the new version from https://github.com/ComparativeGenomicsToolkit/cactus/blob/c7db27c315b35203d9cee56449e686d4c470aeb8/src/cactus/maf/cactus_hal2maf.py
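Based on the site-packages path visible in your traceback above, applying that patch might look something like the following two commands; treat them as a sketch, since the exact path depends on your install, and the second URL is just the raw form of the file linked above:
find /media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4 -name cactus_hal2maf.py
curl -L -o /media/zhf/ext1/cactus-bin-v2.4.0/cactus_env4/lib/python3.8/site-packages/cactus/maf/cactus_hal2maf.py https://raw.githubusercontent.com/ComparativeGenomicsToolkit/cactus/c7db27c315b35203d9cee56449e686d4c470aeb8/src/cactus/maf/cactus_hal2maf.py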
I tried this method, but the maximum block length in the generated MAF file is only a bit over 1,000 bp. I have seen a similar problem before (#674), and my alignment blocks should be much longer, possibly more than 10 Mb. How should I solve this?
Using cactus-hal2maf I did not get very long alignment blocks; the longest was a bit over 1,000 bp and did not reach 2,000 bp. But using hal2maf the blocks reached more than 10 Mb. That is surprising.
These are my commands:
cactus-hal2maf ./js5 /media/zhf/ext1/new_five/ancestor.hal /media/zhf/ext1/new_five/ancestor_max_gap.hal.maf.gz --refGenome N0 --chunkSize 100000000 --maxRefGap 100000000 --onlyOrthologs
hal2maf /media/zhf/ext1/new_five/ancestor.hal /media/zhf/ext1/new_five/ancestor__haf2maf.hal.maf --refGenome N0 --maxBlockLen 100000000 --maxRefGap 100000000 --onlyOrthologs
I'm sorry these tools aren't living up to your expectations. A couple of final comments:
cactus-hal2maf takes steps to reduce the number of small, spurious rearrangements found. The resulting MAFs will have far fewer blocks than when #674 was written.
--maxRefGap does not work well and should be avoided.
I use Cactus to align the maize genome, but the longest alignment block is only about 1,000 bp. What parameters should I modify to increase the length of the alignment blocks?