Closed: yulingao closed this issue 1 month ago
Hi, I did it. I added a `c3` service to docker-compose.yml:
```yaml
  c3:
    image: slurm-docker-cluster:19.05.1
    command: ["slurmd"]
    hostname: c3
    container_name: c3
    volumes:
      - etc_munge:/etc/munge
      - etc_slurm:/etc/slurm
      - slurm_jobdir:/data
      - var_log_slurm:/var/log/slurm
    expose:
      - "6818"
    depends_on:
      - "slurmctld"
```
In slurm.conf, I replaced

```
NodeName=c[1-2] RealMemory=1000 State=UNKNOWN
```

with

```
NodeName=c[1-3] RealMemory=1000 State=UNKNOWN
```

and

```
PartitionName=normal Default=yes Nodes=c[1-2] Priority=50 DefMemPerCPU=500 Shared=NO MaxNodes=1 MaxTime=5-00:00:00 DefaultTime=5-00:00:00 State=UP
```

with

```
PartitionName=normal Default=yes Nodes=c[1-3] Priority=50 DefMemPerCPU=500 Shared=NO MaxNodes=1 MaxTime=5-00:00:00 DefaultTime=5-00:00:00 State=UP
```
Then I ran:

```bash
docker-compose stop
docker build -t slurm-docker-cluster:19.05.1 .
```
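For reference, a minimal sketch of the full rebuild-and-restart cycle, assuming the stock `slurmctld` service name from this repository's docker-compose.yml; the final `scontrol` step is only needed if the new node comes up as DOWN:

```bash
# Stop the running cluster and rebuild the image with the updated slurm.conf
docker-compose stop
docker build -t slurm-docker-cluster:19.05.1 .

# Recreate the containers, including the new c3 service
docker-compose up -d

# If c3 is reported as DOWN by sinfo, ask the controller to resume it
docker-compose exec slurmctld scontrol update nodename=c3 state=resume
```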
Rebuilding alone will change nothing, since no change was made to the Dockerfile. You therefore need to modify the Dockerfile slightly (e.g. by adding `RUN echo 'test'`) just before the COPY of slurm.conf.
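As an illustration only, a sketch of that Dockerfile tweak, assuming slurm.conf is installed with a plain COPY instruction (the destination path shown is an assumption; use whatever path the repository's Dockerfile already has):

```dockerfile
# ... earlier build steps unchanged ...

# Dummy layer added only to invalidate the build cache; change the string
# whenever slurm.conf changes so the COPY below is re-executed.
RUN echo 'rebuild for 3-node config'

# Assumed location of the config copy in this repository's Dockerfile.
COPY slurm.conf /etc/slurm/slurm.conf
```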
You may need to run `docker-compose build`, or, if there is caching, you can add `--no-cache` to that command.
> You may need to run `docker-compose build`, or, if there is caching, you can add `--no-cache` to that command.
This does not work here, since the build happens through `docker build` and not `docker-compose`. It could work if a `build:` instruction were set in the docker-compose.yml file instead of `image:`.
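For illustration, a hedged sketch of how a service could carry a `build:` section instead of relying only on `image:`, so that `docker-compose build --no-cache` actually rebuilds it; the service name mirrors the snippet above and the build context is an assumption:

```yaml
  c3:
    build:
      context: .                           # build from the repository's Dockerfile
    image: slurm-docker-cluster:19.05.1    # tag the built image so other services can reuse it
    command: ["slurmd"]
    hostname: c3
    container_name: c3
```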
I believe there was a fork of this project that very neatly managed to dynamically add multiple nodes. If I come across it again, I'll post it here. If folks find it useful, perhaps it could be contributed here as well.
I have tried many times, but I still have a 2-node Slurm cluster.