There's a request in my lab to implement this tool: https://github.com/mrvollger/StainedGlass. I wrote a simple wrapper for it, but I was wondering about the number of cores to run the tool with. In the example they use 24 cores, but can we do that on our server? How many cores should we use?
The slurm.usegalaxy.no node has 20 cores (196 GB memory), nrec2.usegalaxy.no has 32 cores (125 GB memory), and the dynamic ECC nodes (eccN.usegalaxy.no) each have 32 cores (250 GB memory). You can specify the minimum CPU and memory requirements for individual tools in the tool_destinations.yaml configuration file, and Slurm will then run the job on an appropriate node when resources become available.
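For illustration, a minimal sketch of what such an entry might look like, assuming the sorting_hat-style schema where entries are keyed by the short tool ID with cores and mem fields (the key and the numbers below are placeholder assumptions, not the actual configuration):

```yaml
# Hypothetical tool_destinations.yaml entry; key and field names assume
# the sorting_hat-style schema, and the numbers are placeholders.
stained_glass:
  cores: 16
  mem: 64   # GB
```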
Thanks @kjetilkl for the information!
There are two requirement packages, snakemake>=6.0 and mamba, but Galaxy failed to build the container. Snakemake alone works. I don't know why; it works on my local test instance. Does anyone have any idea?
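For reference, the wrapper's requirements section presumably looks something like the sketch below (the pinned versions are placeholders; Galaxy requirement tags take exact versions rather than ranges, and the mulled container name is derived from exactly these entries):

```xml
<!-- Sketch of the wrapper's <requirements> block; versions are illustrative. -->
<requirements>
    <requirement type="package" version="6.0">snakemake</requirement>
    <requirement type="package" version="0.24.0">mamba</requirement>
</requirements>
```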
I don't know why Galaxy was unable to build the container either, but I tried to build it manually with Planemo (following the instructions in our FAQ). First I tried it on test.usegalaxy.no, but that failed because it ran out of memory, so I had to build it on a different server instead. I made a Singularity image from the Docker image and copied it into the container directory on usegalaxy.no. The image is named "stainedglass.sif", but I created a symlink to this file called mulled-v2-84dd685daec01e9ca9c3029b89f189838f5858e5:3b476f2bf8af2977e4680a3c4c8e112454c32a7b-0, since this is the filename that Galaxy expects based on the requirements in the tool wrapper. (If you change the requirements, we will have to do this again.)
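For anyone following along, the manual build roughly follows these steps (a sketch based on the FAQ recipe; the local image tag and the container directory path are assumptions):

```shell
# Build the mulled Docker image from the tool wrapper's requirements.
planemo mull stainedglass.xml

# Convert the local Docker image into a Singularity image.
# <local-image:tag> is whatever tag the build produced (hypothetical placeholder).
singularity build stainedglass.sif docker-daemon://<local-image:tag>

# In Galaxy's container directory, link the image to the filename Galaxy
# expects, which is derived from the wrapper's requirements:
ln -s stainedglass.sif mulled-v2-84dd685daec01e9ca9c3029b89f189838f5858e5:3b476f2bf8af2977e4680a3c4c8e112454c32a7b-0
```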
P.S.: I'm not sure whether you have to restart Galaxy after changing the /srv/galaxy/dynamic_rules/usegalaxy/tool_destinations.yaml configuration file mentioned above, but you probably do.
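If a restart is needed and Galaxy is managed by systemd on this server (an assumption; the unit name is hypothetical), it would be something like:

```shell
# Assumes a systemd-managed Galaxy service named "galaxy".
sudo systemctl restart galaxy
```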
Excellent, thanks @kjetilkl, and thanks for the very good documentation as well. I haven't made changes to the tool_destinations.yaml file yet; I will do that once the tool works.
In the file /srv/galaxy/dynamic_rules/usegalaxy/sorting_hat.py, the function get_tool_id seems to handle only tools from toolshed.g2.bx.psu.edu, where the tool_id contains five '/' characters (for example toolshed.g2.bx.psu.edu/repos/devteam/bwa/bwa_mem/0.7.12.1). For this tool on galaxy-ntnu.bioinfo.no, the tool_id is galaxy-ntnu.bioinfo.no/toolshed_nels/repos/hien/stainedglass/stained_glass/v0.4, which contains six '/' characters. So maybe that is why the cores and memory specified for this tool in tool_destinations.yaml were not applied properly. Can you fix that? Thank you.
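The difference is easy to see by counting the separators in the two IDs above:

```python
>>> "toolshed.g2.bx.psu.edu/repos/devteam/bwa/bwa_mem/0.7.12.1".count("/")
5
>>> "galaxy-ntnu.bioinfo.no/toolshed_nels/repos/hien/stainedglass/stained_glass/v0.4".count("/")
6
```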
I have added another if-test to get_tool_id to process IDs with six forward slashes. Can you check if it works now?
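For the record, a minimal sketch of what the amended function might look like; the actual sorting_hat code isn't shown in this thread, so this assumes get_tool_id extracts the short tool name from the full toolshed ID:

```python
def get_tool_id(tool_id):
    # Full toolshed IDs look like either
    #   <server>/repos/<owner>/<repo>/<name>/<version>          (5 slashes), or
    #   <server>/<shed>/repos/<owner>/<repo>/<name>/<version>   (6 slashes);
    # in both cases the short tool name is the second-to-last field.
    if tool_id.count("/") in (5, 6):
        return tool_id.split("/")[-2]
    # Built-in tools have plain IDs with no slashes; return them unchanged.
    return tool_id
```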
Working now. Thanks for your help!
Great! Excellent catch @tothuhien! I will include it in the playbook also.