sunta3iouxos closed this issue 5 months ago
I got word from my HPC people to:
use the local job RAMDISK for temporary files. It's sufficient to enlarge the per-core memory settings (cluster.memory) to compensate for the additional memory usage. For example, in the snakePipes SLURM job template: tmpdir="/dev/shm/${SLURM_JOB_USER}.${SLURM_JOB_ID}". Those environment variables are set in the job environment, not before. I don't remember whether you can use the above setting directly in snakePipes or whether the dollar signs have to be escaped; you'll have to test it.
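The advice above can be sketched as a job-script prologue. This is an assumed layout, not the actual snakePipes template; the point is that ${SLURM_JOB_USER} and ${SLURM_JOB_ID} only exist inside the running job, so the dollar signs must survive any earlier interpolation (escape them as \$ if the template itself is expanded by a shell first).

```shell
#!/usr/bin/env bash
# Hypothetical SLURM job prologue: build a per-job tempdir on the
# node-local RAMDISK. SLURM_JOB_USER and SLURM_JOB_ID are set by
# SLURM in the job environment, not at submission time.
tmpdir="/dev/shm/${SLURM_JOB_USER}.${SLURM_JOB_ID}"
mkdir -p "$tmpdir"
export TMPDIR="$tmpdir"
# free the RAMDISK again when the job exits
trap 'rm -rf "$tmpdir"' EXIT
```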
How do I set this up properly?
Hi,
you can configure your snakePipes installation to use /dev/shm
by snakePipes config --tempDir /dev/shm
. snakePipes will then create temporary folders on that volume, naming them "snakepipes" plus a random alphanumeric suffix. The name of the random temporary folder is determined beforehand and passed to the main Snakemake process, i.e. before any jobs are submitted to SLURM.
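A minimal sketch of that naming scheme (not snakePipes' actual code): the main process creates a random "snakepipes"-prefixed folder under the configured tempDir before anything is submitted, which is why job-level SLURM variables can never appear in the path.

```shell
# Assumed illustration of the behaviour described above.
tempdir="/dev/shm"
# mktemp replaces the XXXXXX with a random alphanumeric string
jobtmp="$(mktemp -d "${tempdir}/snakepipesXXXXXX")"
echo "$jobtmp"   # e.g. /dev/shm/snakepipesAb3dE9
rm -rf "$jobtmp"
```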
Using the SLURM 'user' and 'job ID' variables will therefore not work in this case.
This setup is meant for executing the main process on a login node, which then submits rule-based jobs to the cluster.
Is that how you run snakePipes? Or do you submit your main process as a cluster job as well?
Hi there, while running the pipeline I get the following:
Shouldn't the tmpdir be as dictated in defaults.yaml?
Also, I have set the global TMPDIR environment variable as:
When I run the script with the --local option, the tmp directory is properly recognised.