woodshop2300 opened this issue 3 years ago
I have experienced similar behavior, but in my case it was related to my test setup.
If no workers are running and you start a bunch of job executions in parallel, only one worker gets started and all jobs wait for that same executor.
I suspect this behavior is related to how Jenkins schedules jobs, because when the jobs are not all scheduled at once, a new worker is started as needed.
I have been working my way through getting this all up and running. I have things at the point where Jenkins can and does spawn my Docker build slave when I queue a build. When the build is done, the slave is torn down after a while. So far, so good.
I next added parameters to my Jenkins build so I could queue a bunch of those builds without Jenkins pruning them as duplicates (they would otherwise be identical), and queued up 20 builds.
My builds are very simple: an Alpine-based Docker image with OpenJDK 8, and the build command is `sleep 60`, so this is all pure testing; there is not even a version-control checkout involved in the process.

What happens is the log snippet above: one build slave gets spawned, slowly churns through all 20 builds, and is then destroyed. Based on the other issues here, NomadProvisioningStrategy.java, and my own expectations, I would think multiple build slaves should have been spawned.
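For reference, the job is roughly equivalent to a minimal parameterized declarative pipeline like this. The `nomad` label, parameter name, and image name here are illustrative stand-ins, not my exact configuration:

```groovy
// Hypothetical reproduction of the test job -- names are illustrative.
pipeline {
    agent {
        docker {
            image 'openjdk:8-jdk-alpine'  // assumed Alpine + OpenJDK 8 image
            label 'nomad'                 // assumed label matched by the Nomad cloud
        }
    }
    parameters {
        // Dummy parameter so 20 queued builds are not collapsed as duplicates
        string(name: 'RUN_ID', defaultValue: '0', description: 'uniquifier')
    }
    stages {
        stage('Build') {
            steps {
                sh 'sleep 60'  // the entire "build"; no SCM checkout involved
            }
        }
    }
}
```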
Reading the log snippet, the "Excess workload" log line suggests that reading the queue depth is somehow broken, or that it is otherwise not reporting that there are 19 other builds still waiting.
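For context on what that log line refers to: Jenkins core hands a cloud's provisioning strategy an "excess workload" figure, roughly the queued builds for a label minus the capacity that is already online or being provisioned. A toy sketch of that arithmetic (this is my own illustration, not the plugin's actual code):

```java
public class ExcessWorkload {
    // Toy model of the figure a provisioning strategy sees:
    // queued builds minus capacity that is online or already planned.
    static int excessWorkload(int queuedBuilds, int onlineExecutors, int plannedExecutors) {
        return Math.max(0, queuedBuilds - onlineExecutors - plannedExecutors);
    }

    public static void main(String[] args) {
        // 20 identical parameterized builds queued, no workers up yet:
        System.out.println(excessWorkload(20, 0, 0)); // 20 pending -> should provision more
        // After one single-executor worker comes up and takes a build:
        System.out.println(excessWorkload(19, 1, 0)); // 18 still pending
    }
}
```

If the strategy were seeing these numbers correctly, a second pass over a 19-deep queue should still trigger provisioning; the behavior in the log looks as if it only ever sees a workload of 1.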
Jenkins Nomad plugin version 0.7.1
Jenkins Version 2.263.1