Closed sjpb closed 1 year ago
@sd109 I tested this; after a clean install I ran a job OK, changed the slurmd args to `-vv` and upgraded (to force slurmd deletion/recreation), ran a job OK, changed again, ran a job OK.
Closed, as https://github.com/stackhpc/slurm-k8s-cluster/pull/35 appears to work
This PR enables the CPU and memory configuration of a node to be automatically defined on slurmd pod startup, rather than having to be modified in slurm.conf.
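In outline, "automatically defined on slurmd pod startup" could look like the following sketch. All variable names and `scontrol` arguments here are illustrative assumptions, not this PR's actual code, and the commands are echoed rather than executed:

```shell
#!/bin/sh
# Hypothetical sketch of the "define on startup" logic; variable names and
# scontrol arguments are illustrative, not this PR's exact code.
NODE_NAME="${NODE_NAME:-$(uname -n)}"  # stable pod hostname == Slurm NodeName
CPUS="$(nproc)"                        # CPUs actually visible to the pod
# MemTotal in /proc/meminfo is in kB; Slurm's RealMemory is in MB
REAL_MEM="$(awk '/^MemTotal:/ {print int($2/1024)}' /proc/meminfo)"
# Replace the placeholder slurm.conf definition with the real resources
# (echoed here rather than run, since this is only a sketch):
echo "scontrol delete nodename=${NODE_NAME}"
echo "scontrol create nodename=${NODE_NAME} state=future cpus=${CPUS} realmemory=${REAL_MEM}"
```

On a real cluster these commands would run against slurmctld; the point is only that the CPU/memory values come from the pod itself rather than from slurm.conf.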
## Some background
Non-cloud/non-autoscaling Slurm daemons can be started in two modes:

- `slurmd -F` ("Dynamic Future" nodes)
- `slurmd -Z` ("Dynamic Normal" nodes)
Dynamic Normal nodes do not need to be defined before slurmd startup: they automatically pass their actual memory and CPU configuration to slurmctld on registration. However, the slurm.conf setting `SlurmctldParameters=cloud_reg_addrs` cannot be used in this mode. With stable pod hostnames (= slurmd NodeNames), this means that if a pod update changes the IP of a particular pod/Slurm node, slurmctld loses communication with it, because the stored IP is never updated.
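For comparison, the Dynamic Future approach relies on slurm.conf entries along these lines (the node names and the exact parameter list are illustrative assumptions, not the chart's actual configuration):

```
# Allows slurmctld to update a node's address when it re-registers
SlurmctldParameters=cloud_reg_addrs
# Placeholder node definitions; Slurm's defaults give 1x CPU and 1 MB memory
NodeName=slurm-compute-[0-1] State=FUTURE
```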
Dynamic Future nodes do need to be defined before slurmd startup (with `State=FUTURE`), but `cloud_reg_addrs` works. This is the approach used in both the current `main` branch and this PR. In the `main` branch, the nodes are defined in slurm.conf. However, the default node definition results in 1x CPU and 1 MB memory, so these must be manually adjusted in slurm.conf to match the k8s worker node configurations.

This PR instead uses
`scontrol create node` to create Slurm nodes dynamically on pod startup, with the memory/CPU/etc. configuration of the actual node. It was expected that this would mean no `NodeName=` definitions would be required in slurm.conf. However, it appears `slurmd` segfaults on startup in that configuration. Therefore, a default configuration is left in slurm.conf, and on pod startup the node definition is deleted and recreated with the actual pod configuration.

## Testing
This can be tested by launching a job which requires more than 1x CPU (in Slurm terms) per node, e.g. (on the login node, as `rocky`, in `~`):
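A minimal illustrative batch script along these lines (submitted with `sbatch`) would exercise the multi-CPU allocation; all option values are assumptions rather than the exact command used in the original testing:

```shell
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
# This only starts if each node's definition exposes more than 1 CPU
srun hostname
```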