Closed: percyfal closed this issue 2 years ago
Yes, I can reproduce this in 21.08, but 20.11.8 appears to be working. Something related to networking changed in Slurm between those releases, and the slurm.conf is probably missing something to accommodate it. I saw errors about NodeHostName and NodeAddr being the same for all four nodes.
Help is appreciated 😄
So I actually found a solution: removing the redundant NodeHostName entries from slurm.conf seems to work:
NodeName=c1 NodeHostName=slurmctl NodeAddr=127.0.0.1 RealMemory=1000
NodeName=c2 NodeAddr=127.0.0.1 RealMemory=1000
NodeName=c3 NodeAddr=127.0.0.1 RealMemory=1000 Gres=gpu:titanxp:1
NodeName=c4 NodeAddr=127.0.0.1 RealMemory=1000 Gres=gpu:titanxp:1
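After editing slurm.conf, the change can be sanity-checked from inside the container. A minimal check, assuming the image runs its daemons under supervisord (node definition changes generally need a slurmctld restart, not just a reconfigure):

supervisorctl restart slurmctld   # assumption: slurmctld is supervised by supervisord in this image
sinfo -N -l                       # all four nodes c1-c4 should now report an idle state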
Not that I could find anything in the docs suggesting this solution; I basically reacted to these errors in the logs:
grep Duplicated /var/log/slurm/slurmctld.log
[2022-02-17T08:33:22.239] error: Duplicated NodeHostName slurmctl in the config file
[2022-02-17T08:33:22.239] error: Duplicated NodeHostName slurmctl in the config file
[2022-02-17T08:33:22.239] error: Duplicated NodeHostName slurmctl in the config file
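For comparison, these errors imply that the failing config declared NodeHostName=slurmctl on every node line, i.e. something like this (inferred from the log above, not copied from the repo):

NodeName=c1 NodeHostName=slurmctl NodeAddr=127.0.0.1 RealMemory=1000
NodeName=c2 NodeHostName=slurmctl NodeAddr=127.0.0.1 RealMemory=1000
NodeName=c3 NodeHostName=slurmctl NodeAddr=127.0.0.1 RealMemory=1000 Gres=gpu:titanxp:1
NodeName=c4 NodeHostName=slurmctl NodeAddr=127.0.0.1 RealMemory=1000 Gres=gpu:titanxp:1

Since 21.08, slurmctld evidently rejects the repeated NodeHostName as a duplicate, which would explain why only the first node came up.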
I'm currently running the test on my fork (see changeset). I'll submit a PR on success.
Fixed in #38
Hi,
I'm using docker-centos7-slurm to test a workflow manager. It has been a while since updating, but when trying out the most recent version, I notice that only one node (c1) is up in the container. I am currently testing this in my fork (see PR #1). Briefly, I parametrized test_job_can_run to pass the partition to the --partition option. The normal partition works as expected, but the debug partition fails; a sketch of the parametrization follows.
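A minimal sketch of that parametrization, assuming a subprocess-based harness and a container named slurmctl (the real suite's fixtures and helper names may differ):

import subprocess

import pytest

CONTAINER = "slurmctl"  # assumed container name


@pytest.mark.parametrize("partition", ["normal", "debug"])
def test_job_can_run(partition):
    # Submit a trivial job to the given partition and wait for it to finish;
    # the timeout turns a job stuck in PENDING into a test error.
    result = subprocess.run(
        ["docker", "exec", CONTAINER, "sbatch", "--wait",
         f"--partition={partition}", "--wrap", "uname -a"],
        capture_output=True, text=True, timeout=60,
    )
    assert result.returncode == 0, result.stderr

When the debug job never starts, running squeue --format="%.8i %.10P %.10T %R" inside the container prints the pending reason in the last column.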
If one enters the latest image, running sinfo confirms that only node c1 is up. See the GitHub Actions results, where I added some print statements to see what was going on (never mind that the test actually passed; I was simply looking at the erroneous slurm output file). I consistently get the feedback that the required nodes are not available; it would seem node c1 is the only node available to sbatch. Are you able to reproduce this?
Cheers,
Per