This is coming from here, which seems like it's trying to handle an interface change between Singularity 2.3 and 3:
# --size is deprecated starting in 2.4, but is needed for 2.3 support. Keeping it in for now.
try:
    subprocess.check_call(["singularity", "pull", "--size", "2000", "--name", os.path.basename(imgPath),
                           "docker://" + getDockerImage()])
except subprocess.CalledProcessError:
    # Call failed, try without --size, required for singularity 3+
    subprocess.check_call(["singularity", "pull", "--name", os.path.basename(imgPath),
                           "docker://" + getDockerImage()])
Looking at this code, I think the error in your log about "pull" is expected (if terribly confusing). It tries to run with the old interface, fails, then runs with the new interface. And, since there's no subsequent error, presumably it succeeds (hence the INFO: Using cached SIF image line).
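If you want to confirm that fallback path on its own, you can run the same pull by hand. This is just a sketch: the image tag below is the commit hash from your log, so adjust it if your installation reports a different one, and the --name flag simply mirrors what the Cactus code passes.
singularity pull --name cactus.img \
    docker://quay.io/comparative-genomics-toolkit/cactus:47f9079cc31a5533ffb76f038480fdec1b6f7c4f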
So I think it's failing for a reason that's not directly related to the above. Do you have any log messages that you didn't share? Also, there's this warning:
[2023-01-10T10:03:36+0000] [MainThread] [W] [toil.common] Batch system does not support auto-deployment. The user script ModuleDescriptor(dirPath='/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages', name='cactus.refmap.cactus_minigraph', fromVirtualEnv=True) will have to be present at the same location on every worker.
which wants to make sure that all your worker nodes have access to a shared filesystem where your Cactus installation (the Python virtualenv), work directory, and job store live. Is this the case in your setup?
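One quick way to check (a sketch, assuming you can run short interactive jobs with srun on the hmemq partition from your submission script) is to ask a compute node whether it sees the same paths and tools as the login node:
srun --partition=hmemq bash -c 'ls -d /gpfs01/home/mbzec/cactus/cactus_env && command -v singularity'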
Hi, thanks for the fast response! I'm not sure why worker nodes wouldn't have access to the cactus installation, but I don't know how to check that. Those are all the log messages that came out in the error file.
I had to install everything (including virtualenv) inside a conda environment due to lack of permissions, so I wonder if the nested virtual environments might be causing a problem.
For reference, here is the full submission script I used, in case it helps:
#!/bin/bash
#SBATCH --job-name=cactus_minigraph
#SBATCH --partition=hmemq
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=500g
#SBATCH --time=168:00:00
#SBATCH --output=./%x.out
#SBATCH --error=./%x.err
source $HOME/.bash_profile
source ~/builds/conda-local/pangenome/envs/etc/profile.d/conda.sh
conda activate cactus_env
source ~/cactus/cactus_env/bin/activate
cactus-minigraph --binariesMode singularity --batchSystem slurm ./jobstore arenosa_seqfile.txt arenosa.sv.gfa.gz --reference Aare
I've seen people have issues with Cactus inside conda before, so I think it could be something in the environment. Unfortunately, as someone without either conda or a slurm cluster handy, I don't think I can say much more. Sometimes running with --logDebug can print something helpful from Toil.
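For example, the same command from your submission script with the flag appended (assuming nothing else in your setup changes):
cactus-minigraph --binariesMode singularity --batchSystem slurm ./jobstore arenosa_seqfile.txt arenosa.sv.gfa.gz --reference Aare --logDebug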
This seems to be a very similar problem (conda + slurm) to the issue right below: #894
I have since managed to get virtualenv installed outside of conda; however, I've now run into an installation issue.
When I run this line:
python3 -m pip install -U -r ./toil-requirement.txt
I get the following error:
ERROR: Could not find a version that satisfies the requirement toil[aws]==5.8.0 (from versions: 3.0.6a1.dev377, 3.0.6, 3.0.7.dev2, 3.0.7a1.dev1, 3.0.7, 3.0.8a1.dev4, 3.1.0.dev1, 3.1.0a1.dev22, 3.1.0a1.dev23, 3.1.0a1.dev24, 3.1.0a1.dev25, 3.1.0a1.dev29, 3.1.0a1.dev30, 3.1.0a1.dev31, 3.1.0a1.dev34, 3.1.0a1.dev35, 3.1.0a1.dev37, 3.1.0a1.dev38, 3.1.0a1.dev39, 3.1.0a1.dev40, 3.1.0a1.dev41, 3.1.0a1.dev42, 3.1.0a1.dev44, 3.1.0a1.dev45, 3.1.0a1.dev48, 3.1.0a1.dev50, 3.1.0a1.dev51, 3.1.0a1.dev52, 3.1.0b1.dev53, 3.1.0b1.dev54, 3.1.0b1.dev55, 3.1.0b1.dev56, 3.1.0b1.dev57, 3.1.0b1.dev58, 3.1.0b1.dev59, 3.1.0b1.dev60, 3.1.0b1.dev61, 3.1.0b1.dev62, 3.1.0b1.dev63, 3.1.0b1.dev64, 3.1.0b1.dev65, 3.1.0b1.dev66, 3.1.0b1.dev67, 3.1.0b1.dev68, 3.1.0b1.dev69, 3.1.0b1.dev70, 3.1.0b1.dev71, 3.1.0b1.dev73, 3.1.0b1.dev74, 3.1.0b1.dev75, 3.1.0b1.dev76, 3.1.0b1.dev78, 3.1.0b1.dev79, 3.1.0b1.dev80, 3.1.0b1.dev81, 3.1.0b1.dev82, 3.1.0, 3.1.1a2.dev6, 3.1.1a2.dev7, 3.1.1, 3.1.2a1.dev12, 3.1.2a1.dev13, 3.1.2a1.dev14, 3.1.2a1.dev15, 3.1.2a1.dev16, 3.1.2a1.dev17, 3.1.2a1.dev18, 3.1.2a1.dev20, 3.1.2, 3.1.3a1.dev23, 3.1.3a1.dev24, 3.1.3, 3.1.4a1.dev27, 3.1.4a1.dev28, 3.1.4a1.dev29, 3.1.4a1.dev30, 3.1.4, 3.1.5a1.dev32, 3.1.5, 3.1.6, 3.1.7a1.dev3, 3.2.0a2.dev87, 3.2.0a2.dev88, 3.2.0a2.dev89, 3.2.0a2.dev90, 3.2.0a2.dev93, 3.2.0a2.dev94, 3.2.0a2.dev95, 3.2.0a2.dev96, 3.2.0a2.dev97, 3.2.0a2.dev98, 3.2.0a2.dev99, 3.2.0a2.dev100, 3.2.0a2.dev101, 3.2.0a2.dev102, 3.2.0a2.dev103, 3.2.0a2.dev104, 3.2.0a2.dev105, 3.2.0a2.dev106, 3.2.0a2.dev107, 3.2.0a2.dev110, 3.2.0a2.dev111, 3.2.0a2.dev112, 3.2.0a2.dev113, 3.2.0a2.dev114, 3.2.0a2.dev115, 3.2.0a2.dev116, 3.2.0a2.dev117, 3.2.0a2.dev118, 3.2.0a2.dev119, 3.2.0a2.dev120, 3.2.0a2.dev122, 3.2.0a2.dev123, 3.2.0a2.dev124, 3.2.0a2.dev125, 3.2.0a2.dev126, 3.2.0a2.dev127, 3.2.0a2.dev128, 3.2.0a2.dev133, 3.2.0a2.dev134, 3.2.0a2.dev135, 3.2.0a2.dev137, 3.2.0a2.dev138, 3.2.0a2.dev139, 3.2.0a2.dev140, 3.2.0a2.dev141, 3.2.0a2.dev142, 3.2.0a2.dev143, 3.2.0a2.dev144, 3.2.0a2.dev145, 3.2.0a2.dev146, 3.2.0a2.dev147, 3.2.0a2.dev149, 3.2.0a2.dev150, 3.2.0a2.dev151, 3.2.0a2.dev152, 3.2.0a2.dev156, 3.2.0a2.dev157, 3.2.0a2.dev170, 3.2.0a2.dev172, 3.2.0a2.dev173, 3.2.0a2.dev175, 3.2.0a2.dev176, 3.2.0a2.dev177, 3.2.0a2.dev178, 3.2.0a2.dev180, 3.2.0a2.dev182, 3.2.0a2.dev183, 3.2.0a2.dev184, 3.2.0a2.dev185, 3.2.0a2.dev188, 3.2.0a2.dev189, 3.2.0a2.dev190, 3.2.0a2.dev191, 3.2.0a2.dev192, 3.2.0a2.dev193, 3.2.0a2.dev194, 3.2.0a2.dev195, 3.2.0a2.dev196, 3.2.0a2.dev198, 3.2.0, 3.2.1a1.dev3, 3.2.1, 3.2.2a1.dev9, 3.2.2a1.dev10, 3.3.0a1.dev199, 3.3.0a1.dev200, 3.3.0a1.dev202, 3.3.0a1.dev204, 3.3.0a1.dev205, 3.3.0a1.dev206, 3.3.0a1.dev207, 3.3.0a1.dev208, 3.3.0a1.dev209, 3.3.0a1.dev210, 3.3.0a1.dev211, 3.3.0a1.dev212, 3.3.0a1.dev213, 3.3.0a1.dev214, 3.3.0a1.dev215, 3.3.0, 3.3.1a1.dev4, 3.3.1, 3.3.2a1.dev7, 3.3.3a1.dev11, 3.3.3, 3.3.4, 3.3.5a1.dev15, 3.3.5a1.dev16, 3.3.5, 3.3.6a1.dev18, 3.4.0a1.dev216, 3.4.0a1.dev217, 3.4.0a1.dev218, 3.4.0a1.dev219, 3.4.0a1.dev227, 3.4.0a1.dev228, 3.5.0a1.dev229, 3.5.0a1.dev230, 3.5.0a1.dev231, 3.5.0a1.dev232, 3.5.0a1.dev233, 3.5.0a1.dev234, 3.5.0a1.dev235, 3.5.0a1.dev236, 3.5.0a1.dev237, 3.5.0a1.dev241, 3.5.0a1.dev242, 3.5.0a1.dev243, 3.5.0a1.dev244, 3.5.0a1.dev245, 3.5.0a1.dev246, 3.5.0a1.dev247, 3.5.0a1.dev249, 3.5.0a1.dev250, 3.5.0a1.dev251, 3.5.0a1.dev252, 3.5.0a1.dev253, 3.5.0a1.dev254, 3.5.0a1.dev255, 3.5.0a1.dev256, 3.5.0a1.dev257, 3.5.0a1.dev259, 3.5.0a1.dev260, 3.5.0a1.dev261, 3.5.0a1.dev262, 3.5.0a1.dev263, 3.5.0a1.dev264, 3.5.0a1.dev265, 3.5.0a1.dev266, 3.5.0a1.dev267, 
3.5.0a1.dev268, 3.5.0a1.dev269, 3.5.0a1.dev270, 3.5.0a1.dev271, 3.5.0a1.dev272, 3.5.0a1.dev273, 3.5.0a1.dev274, 3.5.0a1.dev275, 3.5.0a1.dev276, 3.5.0a1.dev277, 3.5.0a1.dev278, 3.5.0a1.dev279, 3.5.0a1.dev281, 3.5.0a1.dev282, 3.5.0a1.dev283, 3.5.0a1.dev284, 3.5.0a1.dev285, 3.5.0a1.dev288, 3.5.0a1.dev289, 3.5.0a1.dev290, 3.5.0a1.dev291, 3.5.0a1.dev292, 3.5.0a1.dev294, 3.5.0a1.dev295, 3.5.0a1.dev296, 3.5.0a1.dev298, 3.5.0a1.dev299, 3.5.0a1.dev300, 3.5.0a1.dev301, 3.5.0a1.dev302, 3.5.0a1.dev303, 3.5.0a1.dev304, 3.5.0a1.dev305, 3.5.0a1.dev306, 3.5.0a1.dev307, 3.5.0a1.dev308, 3.5.0a1.dev309, 3.5.0a1.dev310, 3.5.0a1.dev311, 3.5.0a1.dev312, 3.5.0a1.dev313, 3.5.0a1.dev314, 3.5.0a1.dev315, 3.5.0a1.dev316, 3.5.0a1.dev317, 3.5.0a1.dev318, 3.5.0a1.dev319, 3.5.0a1.dev320, 3.5.0a1.dev321, 3.5.0a1.dev322, 3.5.0a1.dev323, 3.5.0a1.dev324, 3.5.0a1.dev325, 3.5.0a1.dev326, 3.5.0, 3.5.1a1.dev6, 3.5.1a1.dev7, 3.5.1, 3.5.2a1.dev12, 3.5.2a1.dev14, 3.5.2a1.dev15, 3.5.2, 3.5.3a1.dev17, 3.6.0a1.dev327, 3.6.0a1.dev328, 3.6.0a1.dev329, 3.6.0a1.dev330, 3.6.0a1.dev332, 3.6.0a1.dev333, 3.6.0a1.dev334, 3.6.0a1.dev335, 3.6.0a1.dev336, 3.6.0a1.dev337, 3.6.0a1.dev338, 3.6.0a1.dev339, 3.6.0a1.dev340, 3.6.0a1.dev341, 3.6.0a1.dev342, 3.6.0, 3.6.1a1.dev3, 3.7.0a1.dev344, 3.7.0a1.dev345, 3.7.0a1.dev346, 3.7.0a1.dev347, 3.7.0a1.dev348, 3.7.0a1.dev349, 3.7.0a1.dev350, 3.7.0a1.dev351, 3.7.0a1.dev352, 3.7.0a1.dev353, 3.7.0a1.dev355, 3.7.0a1.dev356, 3.7.0a1.dev357, 3.7.0a1.dev358, 3.7.0a1.dev359, 3.7.0a1.dev360, 3.7.0a1.dev361, 3.7.0a1.dev362, 3.7.0a1.dev363, 3.7.0a1.dev364, 3.7.0a1.dev365, 3.7.0a1.dev366, 3.7.0a1.dev367, 3.7.0a1.dev368, 3.7.0a1.dev369, 3.7.0a1.dev370, 3.7.0a1.dev371, 3.7.0a1.dev372, 3.7.0a1.dev373, 3.7.0a1.dev374, 3.7.0a1.dev375, 3.7.0a1.dev377, 3.7.0a1.dev378, 3.7.0a1.dev379, 3.7.0a1.dev380, 3.7.0a1.dev381, 3.7.0a1.dev382, 3.7.0a1.dev383, 3.7.0a1.dev384, 3.7.0a1.dev385, 3.7.0a1.dev386, 3.7.0a1.dev387, 3.7.0a1.dev388, 3.7.0a1.dev389, 3.7.0a1.dev390, 3.7.0a1.dev391, 3.7.0a1.dev392, 3.7.0, 3.7.1a1.dev2, 3.8.0a1.dev383, 3.8.0a1.dev385, 3.8.0a1.dev386, 3.8.0a1.dev387, 3.8.0a1.dev388, 3.8.0a1.dev389, 3.8.0a1.dev390, 3.8.0a1.dev391, 3.8.0a1.dev392, 3.8.0a1.dev393, 3.8.0a1.dev395, 3.8.0a1.dev396, 3.8.0a1.dev397, 3.8.0, 3.9.0a1.dev398, 3.9.0a1.dev399, 3.9.0a1.dev402, 3.9.0a1.dev403, 3.9.0a1.dev404, 3.9.0a1.dev405, 3.9.0a1.dev408, 3.9.0a1.dev409, 3.9.0a1.dev410, 3.9.0a1.dev411, 3.9.0a1.dev412, 3.9.0a1.dev413, 3.9.1a1.dev3, 3.9.1, 3.10.0a1.dev421, 3.10.0a1.dev422, 3.10.0a1.dev424, 3.10.0a1.dev426, 3.10.0a1.dev427, 3.10.0a1.dev428, 3.10.0a1.dev429, 3.10.0a1.dev431, 3.10.0a1.dev437, 3.10.0a1.dev438, 3.10.0a1.dev440, 3.10.0a1.dev441, 3.10.0a1.dev442, 3.10.0a1.dev443, 3.10.0a1.dev444, 3.10.0a1.dev445, 3.10.0, 3.10.1, 3.11.0a1, 3.11.0, 3.12.0, 3.13.0, 3.14.0, 3.15.0, 3.16.0a1.dev12345, 3.16.0, 3.17.0, 3.18.0, 3.19.0, 3.20.0, 3.21.0, 3.22.0, 3.22.1a1, 3.22.1a2, 3.22.1a3, 3.22.1a4, 3.22.1a5, 3.23.1, 3.24.0, 4.0.0, 4.1.0, 4.2.0, 5.0.0, 5.1.0, 5.2.0, 5.3.0, 5.4.0, 5.5.0, 5.6.0a1, 5.6.0, 5.7.0a1)
But I see that the toil requirement was updated last week. Could this be solved by installing an earlier version instead, or will that cause more problems down the line?
Many thanks!
Goodness. This is a new one. Toil 5.8.0, though quite new, is definitely on PyPI: https://pypi.org/project/toil/. It looks like your pip isn't even finding the release before that, 5.7.1. You may try upgrading pip (python3 -m pip install --upgrade pip), but other than that you may have to talk to your sysadmin.
Another idea would be to install Toil from GitHub: pip install git+https://github.com/DataBiosphere/toil.git@releases/5.8.0
I don't think Cactus v2.4.0 will work with any other version of Toil besides 5.8.0.
Cactus v2.3.1 should work with Toil 5.6.0
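Putting those two suggestions together, something like this should work (a sketch, assuming your Cactus virtualenv is active so both installs land in it):
python3 -m pip install --upgrade pip
python3 -m pip install git+https://github.com/DataBiosphere/toil.git@releases/5.8.0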
Thank you for this; the pip install from GitHub worked when combined with a newer version of Python.
Unfortunately, I'm having problems with Singularity again now. I have access to version 3.4.2, which according to the README should work, but I'm getting an error similar to the one I posted above, and this time the fallback doesn't seem to solve it (there's no INFO: Using cached SIF image line like before).
The error is as follows:
/gpfs01/software/easybuild-uon/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/site-packages/paramiko/transport.py:236: CryptographyDeprecationWarning: Blowfish has been deprecated
"class": algorithms.Blowfish,
[2023-01-16T14:23:17+0000] [MainThread] [I] [toil.statsAndLogging] Enabling realtime logging in Toil
[2023-01-16T14:23:17+0000] [MainThread] [I] [toil.statsAndLogging] Cactus Command: /gpfs01/home/mbzec/.local/bin/cactus-minigraph --binariesMode singularity --batchSystem slurm ./jobstore arenosa_seqfile.txt arenosa.sv.gfa.gz --reference Aare
[2023-01-16T14:23:17+0000] [MainThread] [I] [toil.statsAndLogging] Cactus Commit: 0d276dfb50ca3e2989fa8973d91867f9cf7a14db
Error for command "pull": unknown flag: --size
Options for pull command:
--arch string architecture to pull from library (default "amd64")
--dir string download images to the specific directory
--disable-cache dont use cached images/blobs and dont create them
--docker-login login to a Docker Repository interactively
-F, --force overwrite an image file if it exists
-h, --help help for pull
--library string download images from the provided library
(default "https://library.sylabs.io")
--no-cleanup do NOT clean up bundle after failed build, can be
helpul for debugging
--nohttps do NOT use HTTPS with the docker:// transport
(useful for local docker registries without a
certificate)
Run 'singularity pull --help' for more detailed usage information.
INFO:    Converting OCI blobs to SIF format
FATAL:   While making image from oci registry: while building SIF from layers: unable to create new build: while searching for mksquashfs: exec: "mksquashfs": executable file not found in $PATH
Traceback (most recent call last):
File "/gpfs01/home/mbzec/.local/lib/python3.10/site-packages/cactus/shared/common.py", line 430, in importSingularityImage
subprocess.check_call(["singularity", "pull", "--size", "2000", "--name", os.path.basename(imgPath),
File "/gpfs01/software/easybuild-uon/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['singularity', 'pull', '--size', '2000', '--name', 'cactus.img', 'docker://quay.io/comparative-genomics-toolkit/cactus:0d276dfb50ca3e2989fa8973d91867f9cf7a14db']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/gpfs01/home/mbzec/.local/bin/cactus-minigraph", line 8, in <module>
sys.exit(main())
File "/gpfs01/home/mbzec/.local/lib/python3.10/site-packages/cactus/refmap/cactus_minigraph.py", line 75, in main
importSingularityImage(options)
File "/gpfs01/home/mbzec/.local/lib/python3.10/site-packages/cactus/shared/common.py", line 434, in importSingularityImage
subprocess.check_call(["singularity", "pull", "--name", os.path.basename(imgPath),
File "/gpfs01/software/easybuild-uon/software/Python/3.10.4-GCCcore-11.3.0/lib/python3.10/subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['singularity', 'pull', '--name', 'cactus.img', 'docker://quay.io/comparative-genomics-toolkit/cactus:0d276dfb50ca3e2989fa8973d91867f9cf7a14db']' returned non-zero exit status 255.
Thanks for your help and patience with this!
I think the important error this time is exec: "mksquashfs": executable file not found in $PATH. It seems that mksquashfs is some kind of prerequisite for singularity build that you need to have installed in order for it to work? See https://docs.sylabs.io/guides/3.4/user-guide/quick_start.html#install-system-dependencies
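A quick way to check is to see whether the binary is on your PATH at all; mksquashfs usually ships in the squashfs-tools package on most Linux distributions, which typically needs a sysadmin to install system-wide:
command -v mksquashfs || echo "mksquashfs is not on PATH"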
Just to update: my sysadmin was able to successfully install Cactus. I think you were right; it was those dependencies, which I didn't have permission to install. Thank you for all your help with this!
Hi,
I'm trying to use the minigraph-cactus pipeline to build a pangenome graph using 21 fasta files (1 haploid genome + 10 diploid genomes), all of the same species. I'm running the program on my institution's HPC (Slurm), so I don't have the option of using Docker.
I'm running the following command:
cactus-minigraph --binariesMode singularity --batchSystem slurm ./jobstore arenosa_seqfile.txt arenosa.sv.gfa.gz --reference Aare
and the job is failing with the following:
[2023-01-10T10:03:28+0000] [MainThread] [I] [toil.statsAndLogging] Enabling realtime logging in Toil
[2023-01-10T10:03:28+0000] [MainThread] [I] [toil.statsAndLogging] Cactus Command: /gpfs01/home/mbzec/cactus/cactus_env/bin/cactus-minigraph --binariesMode singularity --batchSystem slurm ./jobstore arenosa_seqfile.txt arenosa.sv.gfa.gz --reference Aare
[2023-01-10T10:03:28+0000] [MainThread] [I] [toil.statsAndLogging] Cactus Commit: 47f9079cc31a5533ffb76f038480fdec1b6f7c4f
Error for command "pull": unknown flag: --size
Options for pull command:
-F, --force           overwrite an image file if it exists
-h, --help            help for pull
    --library string  download images from the provided library
    --no-cleanup      do NOT clean up bundle after failed build, can be helpful for debugging
    --nohttps         do NOT use HTTPS with the docker:// transport (useful for local docker registries without a certificate)
Run 'singularity --help' for more detailed usage information.
INFO: Using cached SIF image
[2023-01-10T10:03:31+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ref/01_arenosa/Arabidopsis_arenosa_genome.softmasked.fna
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/SUB_3185_alt_all/SUB_3185_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/SUB_3185.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/VEL_3171.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/VEL_3171_alt_all/VEL_3171_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/KAM_3176.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/BAL_3189.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/KAM_3176_alt_all/KAM_3176_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/BAL_3189_alt_all/BAL_3189_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/VLA_3164.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/VLA_3164_alt_all/VLA_3164_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/GUL_3169.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/GUL_3169_alt_all/GUL_3169_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/BUD_3161.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/BUD_3161_alt_all/BUD_3161_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/ING_3178.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/ZID_3157.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/BDO_3180.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/ZID_3157_alt_all/ZID_3157_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/BDO_3180_alt_all/BDO_3180_alt_all.fa
[2023-01-10T10:03:35+0000] [MainThread] [I] [toil.statsAndLogging] Importing file:///gpfs01/home/mbzec/arenosa/ON_data/denovo_assembly/working_assemblies/alt_Haps/ING_3178_alt_all/ING_3178_alt_all.fa
[2023-01-10T10:03:36+0000] [MainThread] [W] [toil.common] Batch system does not support auto-deployment. The user script ModuleDescriptor(dirPath='/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages', name='cactus.refmap.cactus_minigraph', fromVirtualEnv=True) will have to be present at the same location on every worker.
[2023-01-10T10:03:36+0000] [MainThread] [I] [toil.job] Saving graph of 1 jobs, 1 new
[2023-01-10T10:03:36+0000] [MainThread] [I] [toil.job] Processing job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v0
[2023-01-10T10:03:36+0000] [MainThread] [I] [toil] Running Toil version 5.8.0-79792b70098c4c18d1d2c2832b72085893f878d1 on host hmem001.int.augusta.nottingham.ac.uk.
[2023-01-10T10:03:36+0000] [MainThread] [I] [toil.realtimeLogger] Starting real-time logging.
[2023-01-10T10:03:36+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v1 with job batch system ID: 0 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:03:38+0000] [MainThread] [I] [toil.leader] 0 jobs are running, 1 jobs are issued and waiting to run
[2023-01-10T10:04:12+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v1 Exit reason: None
[2023-01-10T10:04:12+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v1
[2023-01-10T10:04:12+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v1 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 5
[2023-01-10T10:04:13+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v2 with job batch system ID: 1 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:04:48+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v2 Exit reason: None
[2023-01-10T10:04:48+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v2
[2023-01-10T10:04:48+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v2 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 4
[2023-01-10T10:04:49+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v3 with job batch system ID: 2 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:05:24+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v3 Exit reason: None
[2023-01-10T10:05:24+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v3
[2023-01-10T10:05:24+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v3 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 3
[2023-01-10T10:05:25+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v4 with job batch system ID: 3 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:06:01+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v4 Exit reason: None
[2023-01-10T10:06:01+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v4
[2023-01-10T10:06:01+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v4 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 2
[2023-01-10T10:06:01+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v5 with job batch system ID: 4 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:06:37+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v5 Exit reason: None
[2023-01-10T10:06:37+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v5
[2023-01-10T10:06:37+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v5 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 1
[2023-01-10T10:06:37+0000] [MainThread] [I] [toil.leader] Issued job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v6 with job batch system ID: 5 and disk: 2.0 Gi, memory: 2.0 Gi, cores: 1, accelerators: [], preemptable: False
[2023-01-10T10:08:25+0000] [MainThread] [W] [toil.leader] Job failed with exit value 127: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v6 Exit reason: None
[2023-01-10T10:08:25+0000] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v6
[2023-01-10T10:08:25+0000] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v6 with ID kind-minigraph_construct_workflow/instance-x1lwuzeq to 0
[2023-01-10T10:08:25+0000] [MainThread] [W] [toil.leader] Job 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v7 is completely failed
[2023-01-10T10:08:30+0000] [MainThread] [I] [toil.leader] Finished toil run with 1 failed jobs.
[2023-01-10T10:08:30+0000] [MainThread] [I] [toil.leader] Failed jobs at end of the run: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v7
[2023-01-10T10:08:30+0000] [MainThread] [I] [toil.realtimeLogger] Stopping real-time logging server.
[2023-01-10T10:08:30+0000] [MainThread] [I] [toil.realtimeLogger] Joining real-time logging server thread.
Traceback (most recent call last):
File "/gpfs01/home/mbzec/cactus/cactus_env/bin/cactus-minigraph", line 8, in <module>
sys.exit(main())
File "/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages/cactus/refmap/cactus_minigraph.py", line 122, in main
gfa_id = toil.start(Job.wrapJobFn(minigraph_construct_workflow, config_node, input_seq_id_map, input_seq_order, options.outputGFA))
File "/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages/toil/common.py", line 1017, in start
return self._runMainLoop(rootJobDescription)
File "/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages/toil/common.py", line 1461, in _runMainLoop
jobCache=self._jobCache).run()
File "/gpfs01/home/mbzec/cactus/cactus_env/lib/python3.10/site-packages/toil/leader.py", line 330, in run
raise FailedJobsException(self.jobStore, failed_jobs, exit_code=self.recommended_fail_exit_code)
toil.leader.FailedJobsException: The job store '/gpfs01/home/mbzec/arenosa/ON_data/graph_building/minigraph_cactus/build/jobstore' contains 1 failed jobs: 'minigraph_construct_workflow' kind-minigraph_construct_workflow/instance-x1lwuzeq v7
I'd be grateful for any pointers!