ComparativeGenomicsToolkit / cactus

Official home of genome aligner based upon notion of Cactus graphs

Hi, I had the following problem while testing four crops. #1280

Open bei7777 opened 7 months ago

bei7777 commented 7 months ago

[2024-01-29T17:16:05+0800] [MainThread] [I] [toil] Running Toil version 5.12.0-6d5a5b83b649cd8adf34a5cfe89e7690c95189d3 on host appa1.
[2024-01-29T17:16:05+0800] [MainThread] [I] [toil.realtimeLogger] Starting real-time logging.
[2024-01-29T17:16:07+0800] [MainThread] [I] [toil.leader] Issued job 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v1 with job batch system ID: 1 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
/bin/sh: _toil_worker: command not found
[2024-01-29T17:16:07+0800] [Thread-1 ] [E] [toil.batchSystems.singleMachine] Got exit code 127 (indicating failure) from job _toil_worker progressive_workflow file:/asnas/zhaowm_bigd/kanghl/work/beishaoqi/cactustest/js kind-progressive_workflow/instance-tom5x7f1.
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v1
Exit reason: None
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v1
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v1 with ID kind-progressive_workflow/instance-tom5x7f1 to 1
[2024-01-29T17:16:07+0800] [MainThread] [I] [toil.leader] 0 jobs are running, 0 jobs are issued and waiting to run
[2024-01-29T17:16:07+0800] [MainThread] [I] [toil.leader] Issued job 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v2 with job batch system ID: 2 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptible: False
/bin/sh: _toil_worker: command not found
[2024-01-29T17:16:07+0800] [Thread-1 ] [E] [toil.batchSystems.singleMachine] Got exit code 127 (indicating failure) from job _toil_worker progressive_workflow file:/asnas/zhaowm_bigd/kanghl/work/beishaoqi/cactustest/js kind-progressive_workflow/instance-tom5x7f1.
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v2
Exit reason: None
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v2
[2024-01-29T17:16:07+0800] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v2 with ID kind-progressive_workflow/instance-tom5x7f1 to 0
[2024-01-29T17:16:08+0800] [MainThread] [W] [toil.leader] Job 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v3 is completely failed
[2024-01-29T17:16:12+0800] [MainThread] [I] [toil.leader] Finished toil run with 1 failed jobs.
[2024-01-29T17:16:12+0800] [MainThread] [I] [toil.leader] Failed jobs at end of the run: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v3

Workflow Progress 100%|████████████████████████████████████████████████████████████████████████| 2/2 (2 failures) [00:03<00:00, 0.58 jobs/s]
[2024-01-29T17:16:12+0800] [MainThread] [I] [toil.realtimeLogger] Stopping real-time logging server.
[2024-01-29T17:16:12+0800] [MainThread] [I] [toil.realtimeLogger] Joining real-time logging server thread.
Traceback (most recent call last):
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/bin/cactus", line 8, in <module>
    sys.exit(main())
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/cactus/progressive/cactus_progressive.py", line 436, in main
    hal_id = toil.start(Job.wrapJobFn(progressive_workflow, options, config_node, mc_tree, og_map, input_seq_id_map))
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/common.py", line 1064, in start
    return self._runMainLoop(rootJobDescription)
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/common.py", line 1539, in _runMainLoop
    return Leader(config=self.config,
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/leader.py", line 289, in run
    raise FailedJobsException(self.jobStore, failed_jobs, exit_code=self.recommended_fail_exit_code)
toil.exceptions.FailedJobsException: The job store '/asnas/zhaowm_bigd/kanghl/work/beishaoqi/cactustest/js' contains 1 failed jobs: 'progressive_workflow' kind-progressive_workflow/instance-tom5x7f1 v3
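
For reference, exit code 127 is the shell's "command not found" status: /bin/sh cannot find the _toil_worker executable that Toil uses to launch each job, so every attempt fails before any work starts. A quick diagnostic sketch, assuming the virtualenv at the path shown in the traceback above:

source /xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/bin/activate
command -v cactus          # should resolve inside the venv's bin directory
command -v _toil_worker    # if this prints nothing, the venv bin directory is not on PATH for spawned jobs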

bei7777 commented 7 months ago

[kanghl@appa1 mashtree]$ /xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/bin/cactus ./js ./cactus_input_test33.txt ./output.hal --logDebug
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.statsAndLogging] Suppressing the following loggers: {'urllib3', 'docker', 'asyncio', 'botocore', 'dill', 'pkg_resources', 'bcdocs', 'setuptools', 'concurrent', 'charset_normalizer', 'boto', 'boto3', 'requests', 'cactus', 'sonLib', 'websocket'}
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.statsAndLogging] Root logger is at level 'DEBUG', 'toil' logger at level 'DEBUG'.
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] Total machine size: 12 cores
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] CPU quota and period available from cgroups v1
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] CPU quota: -1 period: 100000
[2024-01-29T18:50:05+0800] [MainThread] [I] [toil.statsAndLogging] Enabling realtime logging in Toil
[2024-01-29T18:50:05+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Command: /xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/bin/cactus ./js ./cactus_input_test33.txt ./output.hal --logDebug
[2024-01-29T18:50:05+0800] [MainThread] [I] [toil.statsAndLogging] Cactus Commit: 48410bd321f1a814cc6fe5fb0a948e0f507728e7
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.statsAndLogging] Suppressing the following loggers: {'urllib3', 'docker', 'asyncio', 'botocore', 'dill', 'pkg_resources', 'bcdocs', 'setuptools', 'concurrent', 'charset_normalizer', 'boto', 'boto3', 'requests', 'cactus', 'sonLib', 'websocket'}
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.statsAndLogging] Root logger is at level 'DEBUG', 'toil' logger at level 'DEBUG'.
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] Total machine size: 12 cores
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] CPU quota and period available from cgroups v1
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.lib.threading] CPU quota: -1 period: 100000
[2024-01-29T18:50:05+0800] [MainThread] [D] [toil.jobStores.fileJobStore] Path to job store directory is '/asnas/zhaowm_bigd/kanghl/work/beishaoqi/mashtree/js'.
Traceback (most recent call last):
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/jobStores/fileJobStore.py", line 113, in initialize
    os.mkdir(self.jobStoreDir)
FileExistsError: [Errno 17] File exists: '/asnas/zhaowm_bigd/kanghl/work/beishaoqi/mashtree/js'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/bin/cactus", line 8, in <module>
    sys.exit(main())
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/cactus/progressive/cactus_progressive.py", line 394, in main
    with Toil(options) as toil:
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/common.py", line 971, in __enter__
    jobStore.initialize(config)
  File "/xtdisk/zhaowm_bigd/kanghl/software/Cactus/cactus-bin-v2.7.0/venv-cactus-v2.7.0/lib/python3.8/site-packages/toil/jobStores/fileJobStore.py", line 116, in initialize
    raise JobStoreExistsException(self.jobStoreDir)
toil.jobStores.abstractJobStore.JobStoreExistsException: The job store '/asnas/zhaowm_bigd/kanghl/work/beishaoqi/mashtree/js' already exists. Use --restart to resume the workflow, or remove the job store with 'toil clean' to start the workflow from scratch.
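
As the exception message says, the job store left behind by the earlier failed run has to be dealt with before rerunning. A minimal sketch of the two options, reusing the command from the log above (note that --restart will only get further once the underlying _toil_worker problem is fixed):

# resume the previous run
cactus ./js ./cactus_input_test33.txt ./output.hal --restart

# or discard the old job store and start from scratch
toil clean ./js
cactus ./js ./cactus_input_test33.txt ./output.hal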

bei7777 commented 7 months ago

When testing with the sample file evolverMammals.txt, I encountered the same issue.

bei7777 commented 7 months ago

(attached screenshot failed to upload)

glennhickey commented 7 months ago

When you see this error message

/bin/sh: _toil_worker: command not found

it means Cactus was not installed properly. Try using the docker image or installing the release exactly as described in the instructions: https://github.com/ComparativeGenomicsToolkit/cactus/releases/tag/v2.7.1
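
Roughly, the binary-release route is: unpack the tarball, create a virtualenv inside it, activate that venv (the release notes also show how to add the tarball's own bin/ and lib/ directories to PATH/PYTHONPATH in the activate script), and pip-install the package into it, so that cactus and _toil_worker both end up on PATH. The sketch below assumes v2.7.1 and is only a paraphrase; follow the linked instructions for the exact, current commands:

wget https://github.com/ComparativeGenomicsToolkit/cactus/releases/download/v2.7.1/cactus-bin-v2.7.1.tar.gz
tar -xzf cactus-bin-v2.7.1.tar.gz
cd cactus-bin-v2.7.1
virtualenv -p python3 venv-cactus-v2.7.1
source venv-cactus-v2.7.1/bin/activate
python3 -m pip install -U setuptools pip wheel
python3 -m pip install -U .
command -v _toil_worker    # with the venv active, this should resolve inside venv-cactus-v2.7.1/bin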

bei7777 commented 7 months ago

When you see this error message

/bin/sh: _toil_worker: command not found

it means Cactus was not installed properly. Try using the docker image or installing the release exactly as described in the instructions: https://github.com/ComparativeGenomicsToolkit/cactus/releases/tag/v2.7.1

Thank you very much for answering my question; I will try reinstalling.