stackhpc / slurm-k8s-cluster

A Slurm cluster for Kubernetes

Auto define slurmd memory and CPU configuration #27

Closed: sjpb closed this 1 year ago

sjpb commented 1 year ago

This PR enables the CPU and memory configuration of a node to be automatically defined on slurmd pod startup, rather than having to be modified in slurm.conf.

Some background

Non-cloud/non-autoscaling Slurm daemons can be started in two modes:

Dynamic Normal nodes do not need to be defined before slurmd startup; they automatically report their actual memory and CPU configuration to slurmctld when they register. However, the slurm.conf setting SlurmctldParameters=cloud_reg_addrs cannot be used in this mode. With stable pod hostnames (= slurmd NodeNames), this means that if a pod update changes the IP of a particular pod/Slurm node, slurmctld loses communication with it, as the stored IP is never updated.

Dynamic Future nodes do need to be defined before slurmd startup (with State=FUTURE), but cloud_reg_addrs works with them. This is the approach used in both the current main branch and this PR. In the main branch the nodes are defined in slurm.conf, but the default node definition gives each node 1 CPU and 1 MB of memory, so these values must be adjusted by hand in slurm.conf to match the k8s worker node configuration.
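
As a rough sketch of what these two modes look like in practice (the commands, node name and config lines below are illustrative, assume Slurm's dynamic node support from 22.05 onwards, and are not taken from this repo):

```
# Dynamic "normal": no NodeName entry needed; slurmd registers itself and its
# detected CPUs/memory with slurmctld at startup, but cloud_reg_addrs cannot
# be used, so a changed pod IP is not picked up.
slurmd -Z

# Dynamic "future": the node must already be defined in slurm.conf, e.g.
#
#   SlurmctldParameters=cloud_reg_addrs
#   NodeName=slurm-cluster-slurmd-0 State=FUTURE CPUs=1 RealMemory=1
#
# where CPUs=1 / RealMemory=1 (MB) are the placeholder values that main has to
# hand-edit to match the real k8s workers. slurmd then starts against that entry.
slurmd -D -N slurm-cluster-slurmd-0
```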

This PR instead uses scontrol create node on pod startup to create Slurm nodes dynamically with the actual memory, CPU, etc. of the node. The expectation was that no NodeName= definitions would then be required in slurm.conf at all; however, slurmd appears to segfault on startup in that configuration. Therefore a default node definition is kept in slurm.conf, and on pod startup that definition is deleted and recreated with the actual pod configuration.
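
A minimal sketch of the kind of pod startup logic this describes (this is not the PR's actual entrypoint script; the node name detection, the scontrol arguments and the final slurmd invocation are assumptions):

```
#!/usr/bin/env bash
# Illustrative slurmd pod startup: replace the placeholder node definition
# with one matching the pod's real resources, then start slurmd.
set -euo pipefail

NODE_NAME="$(hostname -s)"   # stable pod hostname, used as the Slurm NodeName
CPUS="$(nproc)"
REAL_MEMORY_MB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))

# Delete the default 1 CPU / 1 MB definition left in slurm.conf, then recreate
# the node with the actual CPU/memory so slurmctld schedules real resources.
scontrol delete nodename="${NODE_NAME}" || true
scontrol create nodename="${NODE_NAME}" state=future \
    cpus="${CPUS}" realmemory="${REAL_MEMORY_MB}"

exec slurmd -D -N "${NODE_NAME}"
```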

Testing

This can be tested by launching a job that requires more than one CPU (in Slurm terms) per node, e.g. on the login node, as the rocky user, in ~:

```
srun -N1 --ntasks-per-node=2 /usr/lib64/openmpi/bin/mpitests-IMB-MPI1 pingpong
```
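
To confirm that the recreated nodes registered with the pod's real resources rather than the 1 CPU / 1 MB placeholders, something like the following can be run from the login node (illustrative commands using standard Slurm tooling):

```
# Show the CPUs, memory and state each node registered with
sinfo -N -o '%N %c %m %T'
# Or inspect the full node records
scontrol show node | grep -E 'NodeName|CPUTot|RealMemory'
```
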
sjpb commented 1 year ago

@sd109 I tested this: after a clean install I ran a job OK, changed the slurmd args to -vv and upgraded (to force slurmd pod deletion/recreation), ran a job OK, changed them again, and ran a job OK.

sjpb commented 1 year ago

Closed, as https://github.com/stackhpc/slurm-k8s-cluster/pull/35 appears to work