lentzi90 opened 3 years ago
Add the missing prereqs to ck8s-devbox instead to achieve this. Installing prereqs in many different tools may bite back in the long term.
It would be nice to know the reasoning behind the `apply-ssh` command. If I were going to update SSH keys on the hosts, I would use the already existing solution used to initially inject the keys. With the extra playbook in this repository we will have two sources of truth for which keys should be on the hosts.

For example, in the case of Exoscale I would add the extra SSH keys in the Terraform config, apply, and rerun cloud-init.
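For illustration, the Terraform-plus-cloud-init path described above boils down to user-data along these lines (a sketch only; the actual key material and user-data layout depend on the Terraform config):

```yaml
#cloud-config
# Sketch of user-data that injects extra SSH keys; cloud-init's
# ssh module applies these to the default user's authorized_keys.
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3Nza... alice@example.com   # placeholder key
  - ssh-ed25519 AAAAC3Nza... bob@example.com     # placeholder key
```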
@Ajarmar could you give some more info here?
Doesn't a change in cloud-init force recreation of the vm?
Also, some tf modules only allow you to specify a single key (I haven't looked at Exoscale to see what magic it does): https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/variables.tf#L81
> Doesn't a change in cloud-init force recreation of the vm?

Nope! `cloud-init clean && cloud-init init`

There are probably more ways to run just the required tasks, rather than the whole cloud-init procedure, as well.
> Also, some tf modules only allow you to specify a single key (I haven't looked at Exoscale to see what magic it does): https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/variables.tf#L81

Good catch! One needs to be aware that new machines added will always require our special SSH playbook on top of Kubespray in that case. As this is not crucial, let's leave it as is. But in the future we should probably push for proper support upstream.
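As a sketch of what "proper support upstream" could look like, the single `public_key_path` variable in the linked module could become a list (the variable name here is hypothetical, not an actual upstream proposal):

```hcl
# Hypothetical upstream change: accept a list of public keys
# instead of a single public_key_path.
variable "ssh_public_keys" {
  description = "Public keys to inject on all nodes"
  type        = list(string)
  default     = []
}
```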
> Nope! `cloud-init clean && cloud-init init`
>
> There are probably more ways to run just the required tasks, rather than the whole cloud-init procedure, as well.

Are there any Terraform modules that implement this? I don't think I've ever seen a module that doesn't recreate the VM when changing user-data.
> > Nope! `cloud-init clean && cloud-init init`
> >
> > There are probably more ways to run just the required tasks, rather than the whole cloud-init procedure, as well.
>
> Are there any Terraform modules that implement this? I don't think I've ever seen a module that doesn't recreate the VM when changing user-data.

~I just did this on Exoscale which just updated the user-data in-place. Not sure about the rest.~

Nvm, updating in-place did not mean not rebooting. :sweat: Definitely more user-friendly to update the SSH keys without rebooting then. Just need to be aware that new machines require the extra SSH playbook as well.
> It would be nice to know the reasoning behind the `apply-ssh` command. If I were going to update SSH keys on the hosts, I would use the already existing solution used to initially inject the keys. With the extra playbook in this repository we will have two sources of truth for which keys should be on the hosts.
>
> For example, in the case of Exoscale I would add the extra SSH keys in the Terraform config, apply, and rerun cloud-init.

Please find the rationale for why we chose to manage SSH keys from Ansible and not Terraform here: https://compliantkubernetes.io/adr/0005-use-individual-ssh-keys/#decision-outcome
**Describe the bug**
The `apply-ssh` command should be used to manage authorized SSH keys, but it doesn't work out of the box. There is no venv for it (like there is for the `apply` command), so you must have Ansible installed locally. It also requires the `ansible.posix.authorized_key` module, which is not included in Ansible by default. Finally, since it executes Ansible from the `playbooks` folder instead of from `kubespray`, it doesn't use the `ansible.cfg` that comes with Kubespray. This means that host key checking is used, which will usually not work without first adding the fingerprints for all Nodes.

**To Reproduce**
Steps to reproduce the behavior:
1. Add SSH keys in `group_vars/all/ck8s-ssh-keys.yaml` under `ck8s_ssh_pub_keys_list`
2. Run `./bin/ck8s-kubespray apply-ssh <prefix>`
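For reference, the missing `ansible.posix.authorized_key` dependency mentioned above is a module from the `ansible.posix` collection. A hedged sketch of a task using it (the user name and loop variable here are assumptions for illustration, not necessarily what the repo's playbook does):

```yaml
# Sketch only: illustrates the ansible.posix.authorized_key module
# that apply-ssh depends on. Install the collection first with:
#   ansible-galaxy collection install ansible.posix
- name: Ensure the configured SSH public keys are authorized
  ansible.posix.authorized_key:
    user: ubuntu                          # assumed login user
    key: "{{ item }}"
    state: present
  loop: "{{ ck8s_ssh_pub_keys_list }}"    # list from ck8s-ssh-keys.yaml
```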
**Expected behavior**
`apply-ssh` should make sure all dependencies are installed, OR the required steps should be clearly documented. Additionally, the same (or a similar) `ansible.cfg` should be used as for Kubespray. I think it makes sense to use the same config since we are interacting with the same cluster.
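The host-key-checking behavior in question could be mirrored with a minimal config along these lines (a sketch; Kubespray's actual `ansible.cfg` contains more settings than this):

```ini
# Hypothetical minimal ansible.cfg for the playbooks folder,
# mirroring the one setting that matters for this bug.
[defaults]
# Skip SSH host key checking so fresh nodes can be reached
# without pre-populating known_hosts.
host_key_checking = False
```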
**Additional context**
We may not always want a venv. See this PR. Make it optional.