-
The most commonly used job scheduler for Singularity or bioinformatics analysis jobs is Slurm. Here are its benefits:
- load balancing across multiple nodes
- limiting RAM and process size per job (see the sketch after this list)
- …
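As a rough illustration of the resource-limiting point, here is a minimal sbatch script sketch; the CPU, memory, and time values and the container/command names are placeholders, not recommendations:
```sh
#!/bin/bash
#SBATCH --job-name=bioinfo-example   # arbitrary example name
#SBATCH --nodes=1                    # run on a single node
#SBATCH --cpus-per-task=4            # cap the CPUs the job may use
#SBATCH --mem=16G                    # cap the RAM the job may use
#SBATCH --time=02:00:00              # wall-clock limit

# Example payload: run an analysis inside a Singularity container
# (image path and command are placeholders).
singularity exec my_pipeline.sif my_analysis --threads "$SLURM_CPUS_PER_TASK"
```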
-
Add support for SLURM job scheduler.
-
Can we run Dask Jobqueue outside the SLURM system (e.g. on SRC) and have workers submitted to SLURM? Dask Jobqueue uses `sbatch`/`scancel` to manage jobs; can one provide custom commands that invo…
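One workaround sometimes used when the submitting host cannot reach the Slurm controller directly is a thin wrapper script placed ahead of the real `sbatch` on `PATH` that forwards the call to a login node over SSH. This is only a sketch under two assumptions: the host name `slurm-login` is a placeholder, and the generated batch script sits on a filesystem the login node can also see; it says nothing about Dask Jobqueue's own configuration options:
```sh
#!/bin/bash
# Hypothetical wrapper named "sbatch", found on PATH before the real binary.
# Forwards the submission to a host that can talk to the Slurm controller,
# assuming the batch script path is valid on that host as well.
exec ssh slurm-login sbatch "$@"
```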
-
`sbatch` has a handy `--comment` option that allows adding metadata to jobs:
```sh
sbatch -h | grep comment
--comment=name arbitrary comment
```
I'd like to use this to connect my slu…
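For context, a minimal sketch of setting a comment at submission time and reading it back afterwards; the job script name and comment string are placeholders, and how long the comment remains visible can depend on site accounting configuration:
```sh
# Attach an arbitrary metadata string to the job at submission time.
jobid=$(sbatch --parsable --comment="pipeline-run=2024-05-01" job.sh)

# Read the comment back while the scheduler still knows about the job.
scontrol show job "$jobid" | grep -i comment
```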
-
The job scheduler script for Slurm in the documentation uses a for loop to submit jobs:
```sh
#!/bin/bash
for i in {1..{n_jobs}}
do
    sbatch script_redis_worker.sh
done
```
However this is n…
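For reference, a common alternative to a submit loop for launching many identical workers is a Slurm job array; a minimal sketch, where the array size of 10 is an arbitrary placeholder:
```sh
# Submit 10 copies of the worker script as a single job array.
# Each task can inspect $SLURM_ARRAY_TASK_ID if it needs a distinct index.
sbatch --array=1-10 script_redis_worker.sh
```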
-
Pav2 is the only test harness I've found that allows me to specify a number of nodes and execute all subsequent jobs on them (thank you). This is achieved as follows:
modes/share.yaml
```yaml
sch…
-
Dear Developer,
My name is Yun. I am asking this on behalf of a user who recently came to us for help running this pipeline in our SCC cluster environment, which uses SGE rather than SLURM a…
-
Users have been asking how SkyPilot should interact with Slurm clusters.
We should think about how to handle the Slurm case, i.e. whether to treat it only as a job scheduler or as a way to start new …
-
### Your name
SU Yi
### Your affiliation
Fudan University
### What happened? What did you expect to happen?
The spin-up simulation stopped almost immediately after the log prints "* B e g i n T…
-
Proposal:
- At the end of every data segment, save the training checkpoint file/directory name to a file, informing the scheduler (e.g., Slurm or LSF) where to find the checkpoint (a minimal sketch follows this list).
- Save the model …
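To make the first bullet concrete, here is a minimal sketch of the handoff, assuming a hypothetical training command and file names (`train_segment`, `last_checkpoint.txt`); it illustrates the idea only and is not the project's actual interface:
```sh
#!/bin/bash
#SBATCH --job-name=train-segment     # placeholder job name

CKPT_FILE=last_checkpoint.txt        # where the previous segment recorded its checkpoint

# Resume from the recorded checkpoint if one exists, otherwise start fresh.
if [ -f "$CKPT_FILE" ]; then
    RESUME_ARG="--resume $(cat "$CKPT_FILE")"
else
    RESUME_ARG=""
fi

# Hypothetical training command; it is expected to write the name of the
# checkpoint it produced for this segment back into $CKPT_FILE on exit.
# $RESUME_ARG is left unquoted on purpose so the flag and path split into
# separate arguments.
train_segment $RESUME_ARG --checkpoint-record "$CKPT_FILE"
```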