Closed raphaelpoumarede closed 7 months ago
FYI I ran the command below to override the "reboot for all nodes" behavior.
sed -i '/- hosts: all/c\- hosts: k8s' ~/viya4-iac-k8s/playbooks/kubernetes-install.yaml
and with the following topology, I was able to run the tool from the jumphost without being disconnected/interrupted.
For information, I tested the change several times with success. If you agree that the Jump and NFS hosts don't need to be rebooted, could you please consider this change for a future release? Thanks!
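For reference, the sed command above rewrites the play header in playbooks/kubernetes-install.yaml so the play targets only the k8s inventory group instead of every host. A before/after sketch (the role name and surrounding keys are illustrative assumptions, not the file's actual contents):

```yaml
# Before: the play runs on every inventory host, including the
# jump and NFS servers, so the reboot task also reboots them.
- hosts: all
  become: true
  roles:
    - kubernetes/common   # assumed role containing the reboot task

# After the sed edit: only hosts in the "k8s" inventory group are
# targeted, so a controller co-located on the jump host survives.
- hosts: k8s
  become: true
  roles:
    - kubernetes/common
```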
This requires PM review
In one of the playbooks (roles/kubernetes/common/tasks/main.yaml) there is a task that reboots all the nodes, in order to force the OS to pick up some low-level system changes (like the grub configuration change).
But if you are using the jumphost-provisioned machine as your Ansible controller, you get disconnected in the middle of the playbook execution, since Ansible forces the machine from which you run the playbook to reboot. The rest of the playbook tasks then cannot be executed.
The reboot task should only run on the K8s nodes and K8s control plane nodes; I don't see why the NFS or Jump server nodes should be rebooted. If we don't reboot the NFS and jump servers, it leaves the possibility to co-locate them on a node and use that node as the Ansible controller to run the tool (instead of having to provide yet another "bastion" machine, outside the provisioned hosts, to run the playbook).
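An alternative to retargeting the whole play would be to scope just the reboot task itself. A hedged sketch only — the group name `k8s` and the task layout in roles/kubernetes/common/tasks/main.yaml are assumptions, not the repo's actual contents:

```yaml
# Hypothetical: reboot only hosts in the "k8s" inventory group,
# leaving the jump/NFS hosts (and a co-located Ansible controller)
# untouched.
- name: Reboot kubernetes nodes to pick up low-level changes (e.g. grub)
  ansible.builtin.reboot:
    reboot_timeout: 600
  when: inventory_hostname in groups['k8s'] | default([])
```

Either approach (retargeting the play or conditioning the task) avoids rebooting the controller mid-run; conditioning the task keeps the rest of the play applying to all hosts if that is still desired.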