elastisys / compliantkubernetes-kubespray


Missing docs/venv for apply-ssh #77

Open lentzi90 opened 3 years ago

lentzi90 commented 3 years ago

Describe the bug The apply-ssh command should be used to manage authorized SSH keys, but it doesn't work out of the box. There is no venv for it (unlike for the apply command), so you must have ansible installed locally. It also requires the ansible.posix.authorized_key module, which is not included in ansible by default. Finally, since it executes ansible from the playbooks folder instead of from kubespray, it doesn't use the ansible.cfg that comes with kubespray. This means host key checking is enabled, which will usually not work without first adding the host key fingerprints of all nodes.
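As a stopgap, something along these lines should get apply-ssh running manually; this is a hedged sketch, assuming a plain Python venv is acceptable and that host key checking can be disabled via the environment (paths are illustrative, not taken from the repo):

```sh
# Create a dedicated venv and install Ansible plus the collection that
# provides the ansible.posix.authorized_key module (harmless if already bundled).
python3 -m venv .venv-ssh
source .venv-ssh/bin/activate
pip install ansible
ansible-galaxy collection install ansible.posix

# Work around the missing ansible.cfg by disabling host key checking
# for this shell only, so unknown node fingerprints don't block the run.
export ANSIBLE_HOST_KEY_CHECKING=False
```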

To Reproduce Steps to reproduce the behavior:

  1. Create a new cluster using compliantkubernetes-kubespray
  2. Make sure you have some authorized keys listed in group_vars/all/ck8s-ssh-keys.yaml under ck8s_ssh_pub_keys_list (see the sketch after this list)
  3. Run ./bin/ck8s-kubespray apply-ssh <prefix>
  4. See error
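For reference, a minimal sketch of the keys file from step 2, assuming ck8s_ssh_pub_keys_list is a flat list of public-key strings (the key material and comments are placeholders; only the path and variable name come from the issue, so check the repo's defaults for the exact structure):

```sh
cat > group_vars/all/ck8s-ssh-keys.yaml <<'EOF'
ck8s_ssh_pub_keys_list:
  - "ssh-ed25519 AAAAC3Nza... alice@example.com"
  - "ssh-rsa AAAAB3Nza... bob@example.com"
EOF
```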

Expected behavior apply-ssh should ensure all dependencies are installed, or the required steps should be clearly documented. Additionally, the same (or a similar) ansible.cfg should be used as for kubespray. I think it makes sense to use the same config since we are interacting with the same cluster.
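One possible way to reuse kubespray's configuration today, sketched under the assumption that kubespray lives in a kubespray/ directory with its ansible.cfg at the root and that the wrapper's ansible-playbook invocation honours ANSIBLE_CONFIG:

```sh
# Point Ansible at the config shipped with kubespray so the same settings
# (including host key checking behaviour) apply to the SSH playbook too.
export ANSIBLE_CONFIG=kubespray/ansible.cfg
./bin/ck8s-kubespray apply-ssh <prefix>
```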


Additional context We may not always want a venv (see this PR), so make it optional.

tordsson commented 3 years ago

Add the missing prerequisites to ck8s-devbox instead to achieve this. Installing prerequisites in many different tools may bite back in the long term.

simonklb commented 3 years ago

It would be nice to know the reasoning behind the apply-ssh command. If I were going to update SSH keys on the hosts, I would use the already existing solution that was used to initially inject the keys. With the extra playbook in this repository we will have two sources of truth for which keys should be on the hosts.

For example, in the case of Exoscale I would add the extra SSH keys in the Terraform config, apply, and rerun cloud-init.

@Ajarmar could you give some more info here?

OlleLarsson commented 3 years ago

Doesn't a change to the cloud-init user-data force recreation of the VM?

OlleLarsson commented 3 years ago

Also, some Terraform modules only allow you to specify a single key (I haven't looked at Exoscale to see what magic it does): https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/variables.tf#L81

simonklb commented 3 years ago

Doesn't a change to the cloud-init user-data force recreation of the VM?

Nope! cloud-init clean && cloud-init init. There are probably more targeted ways to run just the required tasks rather than the whole cloud-init procedure as well.
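For concreteness, a hedged sketch of rerunning cloud-init on an already provisioned node over SSH (the user and node address are placeholders; this reruns the whole cloud-init procedure rather than only the SSH-key step):

```sh
# Re-run cloud-init on an existing node so updated user-data
# (e.g. new SSH keys) is applied without recreating the VM.
ssh ubuntu@<node-ip> 'sudo cloud-init clean && sudo cloud-init init'
```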

Also, some Terraform modules only allow you to specify a single key (I haven't looked at Exoscale to see what magic it does): https://github.com/kubernetes-sigs/kubespray/blob/master/contrib/terraform/openstack/variables.tf#L81

Good catch! One needs to be aware that newly added machines will always require our special SSH playbook on top of Kubespray in that case.

As this is not crucial let's leave it as is. But in the future we should probably push for proper support upstream.

Xartos commented 3 years ago

Nope! cloud-init clean && cloud-init init. There are probably more targeted ways to run just the required tasks rather than the whole cloud-init procedure as well.

Are there any Terraform modules that implement this? I don't think I've ever seen a module that doesn't recreate the VM when the user-data changes.

simonklb commented 3 years ago

Nope! cloud-init clean && cloud-init init. There are probably more targeted ways to run just the required tasks rather than the whole cloud-init procedure as well.

Are there any Terraform modules that implement this? I don't think I've ever seen a module that doesn't recreate the VM when the user-data changes.

~I just did this on Exoscale, which just updated the user-data in place. Not sure about the rest.~

Nvm, the in-place update did not mean no reboot. :sweat: Updating the SSH keys without rebooting is definitely more user-friendly then. Just need to be aware that new machines require the extra SSH playbook as well.

cristiklein commented 3 years ago

It would be nice to know the reasoning behind the apply-ssh command. If I were going to update SSH keys on the hosts, I would use the already existing solution that was used to initially inject the keys. With the extra playbook in this repository we will have two sources of truth for which keys should be on the hosts.

For example, in the case of Exoscale I would add the extra SSH keys in the Terraform config, apply, and rerun cloud-init.

Please find the rationale for why we chose to manage SSH keys from Ansible rather than Terraform here: https://compliantkubernetes.io/adr/0005-use-individual-ssh-keys/#decision-outcome