Hi guys,
I just found some abnormal behaviour:
I am running 3 Nomad nodes in client mode, each running a RabbitMQ container with static ports. For my test (and probably for production as well), my `max_parallel` (in the `update` stanza) is equal to `count` (in the `group` stanza).

When I update the job file and apply it, Nomad tries to start 3 new containers before stopping the old RabbitMQ instances. Unfortunately, because I only have 3 nodes, the static ports are already busy: the new containers cannot start, but the old instances are killed anyway.
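For reference, here is a minimal sketch of the kind of job spec I mean (the image, port number, and names are illustrative, not my exact file):

```hcl
job "rabbitmq" {
  datacenters = ["dc1"]
  type        = "service"

  # Rolling update: max_parallel matches count, so all 3 allocations
  # are replaced at the same time
  update {
    max_parallel = 3
    stagger      = "30s"
  }

  group "rabbit" {
    count = 3

    task "rabbitmq" {
      driver = "docker"

      config {
        image = "rabbitmq:3"
        port_map {
          amqp = 5672
        }
      }

      resources {
        network {
          mbits = 10
          # Static port: only one RabbitMQ instance fits per node
          port "amqp" {
            static = 5672
          }
        }
      }
    }
  }
}
```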
Could you implement a retry, or handle this special case? It would be a pity to have to start 3 more VMs just for a rolling upgrade!