-
In our HPC environment I am not allowed to run anything on the login nodes (I get kicked out, banned...), so I always have to run Snakemake with `sbatch` or inside an interactive session via `srun`.
Theref…
visze updated
5 hours ago
-
## Summary
It looks like BABS currently submits a separate job for each subject/session, but an array job would be nice. Proposed by @mattcieslak.
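For reference, a minimal sketch of what an array-based submission could look like (the `subjects.txt` file, its layout, and the processing command are hypothetical placeholders, not BABS internals):

```sh
#!/bin/bash
# Submit with: sbatch --array=0-99 process_subject.sh
# Slurm sets SLURM_ARRAY_TASK_ID (0..99) in each array task; here it is
# mapped to a line of subjects.txt (one subject ID per line -- a
# hypothetical layout, not how BABS stores its subject list).
subject=$(sed -n "$((SLURM_ARRAY_TASK_ID + 1))p" subjects.txt)
echo "processing ${subject} as array task ${SLURM_ARRAY_TASK_ID}"
```

A single `sbatch` call then covers every subject, and `scancel <jobid>` cancels the whole array at once.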
tsalo updated
2 weeks ago
-
A confluence of two things:
- Øystein mentioned during the meeting in Åre that he thought a Slurm job ID might not always be an integer - it might be `nn.m` where `m` is the step, for a job `nn` wi…
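That is indeed the case for job steps, which Slurm reports as `jobid.step` (e.g. `12345.0`, `12345.batch`); code that assumes an integer job ID can normalize first, e.g.:

```sh
# Strip the step suffix (everything from the first dot onward) to recover
# the integer job ID; plain integer job IDs pass through unchanged.
raw_id="12345.batch"
job_id="${raw_id%%.*}"
echo "$job_id"   # 12345
```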
-
`sbatch` has a handy `--comment` that allows adding metadata to jobs:
```sh
sbatch -h | grep comment
--comment=name arbitrary comment
```
I'd like to use this to connect my slu…
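For example (the run identifier below is a made-up placeholder), the comment can carry metadata that is later read back with `scontrol` or, if the site stores comments in accounting, `sacct`:

```sh
# Hypothetical run identifier to embed as job metadata:
run_id="wf-2024-0042"
comment="run=${run_id}"

# Submit with the comment attached (requires a Slurm cluster):
#   sbatch --comment="${comment}" job.sh
# Read it back while the job is known to the controller:
#   scontrol show job <jobid> | grep -o 'Comment=[^ ]*'
# Or from accounting, if the site records job comments:
#   sacct -j <jobid> --format=JobID,Comment
echo "$comment"
```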
-
When using a local executor, the running logs appear immediately in the console the workflow was launched from, but with Slurm one has to fish for the log files.
This can be made easier by automatically…
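A common manual workaround, pending such a feature: capture the job ID at submission and tail the corresponding log (the path assumes Slurm's default `slurm-<jobid>.out` pattern):

```sh
# sbatch confirms with "Submitted batch job <id>"; the ID is the 4th field.
# (sbatch --parsable would print the bare ID instead.)
out="Submitted batch job 12345"     # stand-in for: out=$(sbatch job.sh)
job_id=$(echo "$out" | awk '{print $4}')

# Then follow the default log file once the job starts writing it:
#   tail -F "slurm-${job_id}.out"
echo "$job_id"
```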
-
If I have more than 1k tasks, datatrove splits them into multiple job arrays of 1k each.
The first job array of 1k runs fine; the subsequent ones all fail:
```
0: 2024-07-04 23:59:34.496 | ERROR |…
```
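For context, the index arithmetic behind such chunking (a generic sketch, not datatrove's actual code): each 1k-sized array recovers a task's global rank from the chunk offset plus its array task ID.

```sh
# Chunk c covers global ranks [c*1000, c*1000+999]; within chunk c,
# global rank = c*1000 + SLURM_ARRAY_TASK_ID. (Generic sketch only.)
chunk=2                     # third 1k-sized job array (0-based)
SLURM_ARRAY_TASK_ID=7       # normally set by Slurm inside the array job
global_rank=$((chunk * 1000 + SLURM_ARRAY_TASK_ID))
echo "$global_rank"   # 2007
```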
-
I have an issue where multiple highly parallel `downloadcmd` jobs collide on temporary files in the `~/NDA/nda-tools/downloadcmd/packages/${PACKAGE_ID}/.download-progress` folder. I'm thinking this issue…
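One possible stopgap (a sketch of a generic workaround, not a `downloadcmd` feature) is to serialize access to the shared progress folder with `flock`, so only one job touches it at a time:

```sh
# Serialize concurrent jobs around the shared progress folder.
# The lock file path is arbitrary; downloadcmd itself knows nothing about it.
lockfile="/tmp/nda-download.lock"
flock "$lockfile" echo "holding the lock"   # stand-in for the real downloadcmd call
```

This trades parallelism for safety at that folder; jobs queue up on the lock instead of clobbering each other's progress files.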
-
Not an issue but doc clarification related to https://intertwin-eu.github.io/interLink/docs/tutorial-admins/deploy-interlink#requirements-1
For security and testing reasons, I would like to put the…
-
We notice that in the `Jobs` -> `Active jobs` tab there are duplicate jobs per cluster, as both have the same Slurm configuration and Slurm is configured with a single cluster:
```sh
$ _cpu1r
$ sa…
```
-
## Bug report
I am noticing a problem with the Nextflow-SLURM interface when processing a large number of jobs. For example, I am running a workflow and have spawned ~10,000 tasks (50 samples * 150 …
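For what it's worth, Nextflow has throttling knobs that can reduce pressure on the Slurm controller; a sketch of an illustrative `nextflow.config` (the values are assumptions, not tuned recommendations):

```sh
# Write an illustrative nextflow.config that throttles Slurm submissions:
# executor.queueSize caps how many tasks are queued at once, and
# executor.submitRateLimit paces the sbatch calls.
cat > nextflow.config <<'EOF'
process.executor = 'slurm'
executor {
    queueSize = 100             // illustrative cap, not a recommendation
    submitRateLimit = '50/2min' // at most 50 submissions per 2 minutes
}
EOF
grep -c 'queueSize' nextflow.config   # 1
```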