Closed: RobertFloor closed this issue 1 year ago
Very well reported as usual; I've not been able to reproduce the scenario in a molecule test on containers (yet...), but I will still release the bugfix as soon as it is merged. Can I ask which nfs4 mount parameters you have in use?
Hi,
My name is Ivo; I'm also a member of the same team Robert works on.
Here are our nfs4 mount parameters:
```yaml
- name: Mount NFS volume
  become: true
  ansible.posix.mount:
    src: amqtest2.file.core.windows.net:/amqtest2/amqdata
    path: /data/amq-broker/shared
    opts: rw,sync,hard,intr
    state: mounted
    fstype: nfs
```
SUMMARY
We destroy our Azure VM in the evening and rebuild it in the morning. The AMQ Broker starts faster than the NFS mount we use, so the broker fails to become HA correctly: it acquires a lock on the directory before NFS has mounted it, which results in two active brokers. A solution could be the approach described here, making the amq-broker service dependent on the NFS mount: https://unix.stackexchange.com/questions/246935/set-systemd-service-to-execute-after-fstab-mount. For that we would need to add an additional variable in the systemd template. Would it be possible to implement this in the code?
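For reference, a minimal sketch of the systemd dependency described above, using a drop-in file rather than editing the generated unit directly. The service name `amq-broker.service` and the drop-in filename are assumptions; the mount path matches the `ansible.posix.mount` task above:

```ini
# /etc/systemd/system/amq-broker.service.d/nfs-dependency.conf
# Hypothetical drop-in; adjust the service name to match the unit the
# collection's systemd template actually generates.
[Unit]
# Order the broker after the mount unit for the shared store and require it,
# so the broker is not started (and is stopped) when the mount is absent.
RequiresMountsFor=/data/amq-broker/shared
```

After placing the drop-in, a `systemctl daemon-reload` is needed for systemd to pick it up. `RequiresMountsFor=` implicitly adds both `Requires=` and `After=` on the corresponding `.mount` unit, which is exactly the ordering the issue asks for.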
ISSUE TYPE
ANSIBLE VERSION
COLLECTION VERSION
STEPS TO REPRODUCE
EXPECTED RESULTS
One of the two brokers should become active and the other one should remain passive. For that, the amq-broker service should start only after the NFS mount is in place.
ACTUAL RESULTS