Closed — Vad1mo closed this 1 week ago
Hey @Vad1mo, thanks for getting in touch! And a great question too! Unfortunately, I don't have a great answer :(
Using a customised machine image is the safest approach - you can provision default credentials at image-build time, and use a post-provisioning tool like Ansible to update the credentials file in /etc/kubernetes/registry whenever those credentials are rotated.
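A minimal Ansible sketch of that post-provisioning step might look like the following. The inventory group, file paths, and handler name are assumptions for illustration, not something from this thread:

```yaml
# Hypothetical playbook: push rotated registry credentials to every node
# and restart the kubelet so it picks up the new file.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Update kubelet registry credentials file
      ansible.builtin.copy:
        src: files/registry-credentials.yaml   # assumed local source file
        dest: /etc/kubernetes/registry
        owner: root
        mode: "0600"
      notify: restart kubelet
  handlers:
    - name: restart kubelet
      ansible.builtin.systemd:
        name: kubelet
        state: restarted
```

Running this from your rotation pipeline keeps the image immutable while still letting credentials change over time.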
A less secure option would be to perform the setup on each node as part of your cloud-init or user-data scripts; however, this would expose the secrets you're placing into the registry file I previously mentioned - hence some of the other options I raised, like using AWS Secrets Manager, AWS KMS, SOPS, or Vault to provide the secrets (to name but a few options!).
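One way to avoid embedding the secret directly in user-data is to fetch it at boot, e.g. from AWS Secrets Manager. A hedged cloud-init sketch (the secret name and file paths are assumptions, and the node needs an IAM role permitting `secretsmanager:GetSecretValue`):

```yaml
#cloud-config
# Hypothetical: fetch registry credentials at boot instead of
# placing them verbatim in user-data.
write_files:
  - path: /usr/local/bin/fetch-registry-creds.sh
    permissions: "0700"
    content: |
      #!/bin/sh
      # Secret id "harbor/registry-credentials" is an assumed name.
      aws secretsmanager get-secret-value \
        --secret-id harbor/registry-credentials \
        --query SecretString --output text \
        > /etc/kubernetes/registry
      chmod 0600 /etc/kubernetes/registry
runcmd:
  - [ /usr/local/bin/fetch-registry-creds.sh ]
```

The user-data then contains only a secret *reference*, not the secret itself.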
I've never come across privileged DaemonSets used for this before; from the little reading I've done, I can see that it could be a great idea, but I would be reluctant to change the running kubelet's setup from a DaemonSet that lays down the required files ... if that's even possible!
Thank you for the insights Jon
I am thinking here in the context of Harbor, which is often accessed across different k8s offerings. So I would like to give users an easy, cloud-provider-agnostic way to install and use the binary on each node.
Maybe I need to check whether cloud-init or user-data is widely supported across the various k8s offerings.
The Kubelet Credential Provider is neat if you are a cloud provider who is instantiating nodes that already contain the binary at `/usr/local/bin/image-credential-provider`.
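For context, the kubelet discovers such a binary through a `CredentialProviderConfig` file passed via its `--image-credential-provider-config` flag, with the binary living in the directory given by `--image-credential-provider-bin-dir`. A minimal sketch, where the registry hostname and cache duration are assumptions:

```yaml
# Hypothetical kubelet credential provider config.
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: image-credential-provider     # must match the binary's filename
    matchImages:
      - "harbor.example.com"            # assumed registry hostname
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1
```

This is exactly why node access matters: both the binary and this config file must be present on each node before the kubelet starts.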
Imagining the wild west of managed k8s offerings out there, it is not always possible to copy a file to each node.
What would be the most user-friendly option to install the binary on each node?
So far, the only option that comes to my mind is a "privileged" DaemonSet that installs the binary on each node.
Are there any other options?
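A privileged DaemonSet along those lines might be sketched as below; it copies the binary onto the node via a `hostPath` mount from an init container. The image name and paths are assumptions, and (per the earlier caveat) this does not by itself reconfigure or restart the kubelet:

```yaml
# Hypothetical installer DaemonSet: places the binary on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: credential-provider-installer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: credential-provider-installer
  template:
    metadata:
      labels:
        app: credential-provider-installer
    spec:
      initContainers:
        - name: install
          image: example.com/image-credential-provider:latest  # assumed image
          command:
            - cp
            - /image-credential-provider
            - /host-bin/image-credential-provider
          securityContext:
            privileged: true
          volumeMounts:
            - name: host-bin
              mountPath: /host-bin
      containers:
        - name: pause               # keeps the pod running after install
          image: registry.k8s.io/pause:3.9
      volumes:
        - name: host-bin
          hostPath:
            path: /usr/local/bin
            type: Directory
```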