jahkeup opened 5 years ago
Support for static node labels was added in bottlerocket-os/bottlerocket#366, so this should be possible to do with the API or another on-box process.

It might also be feasible to have a job scheduled onto new nodes to "query" each one for the updater interface and set the appropriate label (#4). That might be a tad unusual though, and needs further thought + investigation :thinking:
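For reference, a minimal sketch of what the static label could look like in Bottlerocket user data, using the `settings.kubernetes.node-labels` setting introduced in bottlerocket-os/bottlerocket#366 (the label key and value here are illustrative, taken from the discussion below):

```toml
# Illustrative sketch: bake the label into the node's user data so kubelet
# registers with it at boot.
[settings.kubernetes.node-labels]
"bottlerocket.aws/updater-interface-version" = "2.0.0"
```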
The nodegroup can apply labels to its instances automatically, so there's no need to set them by hand.
Example eksctl config:

```yaml
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: bottlerocket2
  region: us-west-2
  version: '1.17'

nodeGroups:
  - name: ng-bottlerocket2
    labels: { bottlerocket.aws/updater-interface-version: 2.0.0 }
    instanceType: m5.large
    desiredCapacity: 3
    amiFamily: Bottlerocket
```
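With a config like that saved to a file, the cluster and labeled nodegroup can be created in one shot (standard eksctl usage; the filename is arbitrary):

```sh
eksctl create cluster -f cluster.yaml
```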
> The nodegroup can apply labels to its instances automatically, so there's no need to set them by hand.

I personally use the above method as well for short-lived clusters! It's very handy.

There's a drawback to doing it this way: when an interface version bump is needed, you'd have to replace your nodes (after updating the template's user data) or update settings via the API on each node. That said, the per-nodegroup label works well and is convenient.
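For the in-place route, something along these lines should work on each node (a sketch, assuming apiclient's JSON input form, where keys are relative to `settings`; the exact syntax may vary by Bottlerocket version):

```sh
# Sketch: bump the label on a running node via the Bottlerocket API,
# instead of replacing the node.
apiclient set --json '{"kubernetes": {"node-labels": {"bottlerocket.aws/updater-interface-version": "2.0.0"}}}'
```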
This issue is focused on the host "advertising" the interface version that's appropriate for it, rather than setting the value globally (by way of a nodegroup-wide label). We'd need to propagate this data from the OS and add it to the kubelet's configured labels. The Bottlerocket image would have the interface version label value built in, and it would be correct for any given build. This would eliminate the need to hand-edit or otherwise update your nodes' updater-interface-version altogether!
I'm not sure there's a great way to add labels to only Bottlerocket nodes from the controller's perspective. Bottlerocket doesn't really expose any metadata to the Kubernetes API that we could reliably use to determine whether a node is a Bottlerocket node or something else.

For example, on one of my Bottlerocket nodes:
```console
❯ k describe nodes ip-192-168-141-233.us-west-2.compute.internal | rg bottle
Container Runtime Version:  containerd://1.6.6+bottlerocket
```
The only metadata that is even remotely Bottlerocket-related is a `+bottlerocket` build-time version flag on containerd. And I don't think relying on containerd's version is a great idea, since users may build Bottlerocket with a different container runtime in the future.
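To illustrate how thin that signal is, here's one way to survey it across a cluster (standard kubectl, nothing Bottlerocket-specific):

```sh
# Lists each node's container runtime; Bottlerocket nodes currently show a
# "+bottlerocket" suffix, but that's an implementation detail, not a contract.
kubectl get nodes -o custom-columns=NAME:.metadata.name,RUNTIME:.status.nodeInfo.containerRuntimeVersion
```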
> We'd need to propagate this data from the OS and add it to the kubelet's configured labels. The Bottlerocket image would have the interface version label value built in, and it would be correct for any given build.

I think this is an interesting solution and wouldn't require too much work on the Bottlerocket side. When building the Kubernetes variants, we'd need to set the label in the kubelet build.
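As a sketch of what that could mean concretely (the flag is real; how the image build wires it in is the open question), kubelet already accepts extra labels at registration time:

```sh
# Sketch: the image build could bake the version into kubelet's startup
# arguments so every node registers with the correct label.
# (Other kubelet flags omitted for brevity.)
kubelet --node-labels=bottlerocket.aws/updater-interface-version=2.0.0
```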
> wouldn't require too much work on the Bottlerocket side

It looks like there isn't a lot to work with. I wonder if providerID is the right place to identify ourselves as a Bottlerocket node: https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration
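For context, providerID on an AWS node is set by the cloud provider and typically encodes only the instance's location and ID, so there may not be room for OS identity in it:

```sh
# providerID identifies the cloud instance, not the OS; overloading it to mark
# Bottlerocket nodes would likely conflict with the cloud provider's format.
kubectl get node ip-192-168-141-233.us-west-2.compute.internal -o jsonpath='{.spec.providerID}'
# => aws:///us-west-2a/i-0123456789abcdef0   (instance ID is illustrative)
```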
What I'd like:

The update operator should automatically be eligible for scheduling onto Bottlerocket hosts in a Kubernetes cluster.

The suggested deployment uses a label to identify Bottlerocket hosts and schedule onto them (i.e., the bottlerocket.aws/platform-version label; the name may change: #4). Instead of requiring administrators to set the label, it could be set (or determined) automatically for Bottlerocket nodes, eliminating the manual step.
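For illustration, this is the kind of selector the suggested deployment relies on today (a sketch of a pod spec fragment; the label name and value are placeholders pending #4):

```yaml
# Sketch: the operator's pods only land on nodes carrying the label, which is
# why it must exist on every Bottlerocket node. Name/value are placeholders.
spec:
  template:
    spec:
      nodeSelector:
        bottlerocket.aws/platform-version: "1.0.0"
```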