Open pantaray opened 3 months ago
An additional bug emerged here as well: `job_directives_skip` removes any line from the generated sbatch script that contains the specified string, e.g.,

```python
cluster = SLURMCluster(cores=32, memory="4000MB", processes=4, queue="E880",
                       job_cpu=56, job_directives_skip=['--mem'])
print(cluster.job_script())
```

```
#!/usr/bin/env bash

#SBATCH -J dask-worker
#SBATCH -p E880
#SBATCH -n 1
#SBATCH --cpus-per-task=56
#SBATCH -t 00:30:00
```
In contrast, skipping the string 't' removes every directive containing that letter, including `--cpus-per-task` and the time limit `-t`:

```python
cluster = SLURMCluster(cores=32, memory="4000MB", processes=4, queue="E880",
                       job_cpu=56, job_directives_skip=['t'])
print(cluster.job_script())
```

```
#!/usr/bin/env bash

#SBATCH -J dask-worker
#SBATCH -p E880
#SBATCH -n 1
#SBATCH --mem=4G
```
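The behavior above can be reproduced with a minimal sketch of the skip logic (a hypothetical reimplementation for illustration, not the actual dask-jobqueue code): header lines are dropped by plain substring matching, so a short skip string like 't' removes unrelated directives.

```python
# Hypothetical reimplementation of the skip behavior observed above.
header = [
    "#SBATCH -J dask-worker",
    "#SBATCH -p E880",
    "#SBATCH -n 1",
    "#SBATCH --cpus-per-task=56",
    "#SBATCH --mem=4G",
    "#SBATCH -t 00:30:00",
]

def skip_directives(lines, skip):
    # Drop any line that contains any of the skip strings *anywhere*,
    # mirroring the substring matching described in this issue.
    return [line for line in lines if not any(s in line for s in skip)]

# Skipping '--mem' removes only the memory directive, as intended:
print(skip_directives(header, ["--mem"]))
# Skipping 't' also removes --cpus-per-task and -t, reproducing the bug:
print(skip_directives(header, ["t"]))
```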
**Describe the problem**

Allocating a distributed computing client with custom CPU/memory settings in the E880 partition does not actually allocate the specified resources.
**Steps To Reproduce**

This produces `sbatch` scripts missing the CPU spec and thus using default core allocations.

**Additional Information**

Changing the underlying `SLURMCluster` call fixes the problem:
A possible fix in `esi_cluster_setup` could be the following change of line 203:
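Independent of whatever change is made in `esi_cluster_setup`, the skip bug itself could be avoided by matching whole directive flags instead of arbitrary substrings. A hypothetical sketch of that idea (not the actual library code):

```python
import re

# Extract the directive flag (e.g. "--mem", "-t") from an #SBATCH line.
SBATCH_FLAG = re.compile(r"^#SBATCH\s+(--?[A-Za-z][\w-]*)")

def skip_directives_exact(lines, skip):
    """Drop a header line only when its directive flag is listed in `skip`."""
    kept = []
    for line in lines:
        m = SBATCH_FLAG.match(line)
        if m and m.group(1) in skip:
            continue
        kept.append(line)
    return kept

header = [
    "#SBATCH --cpus-per-task=56",
    "#SBATCH -t 00:30:00",
]
# Drops only the -t line; --cpus-per-task survives despite containing "t".
print(skip_directives_exact(header, ["-t"]))
```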