Open Xavina opened 5 months ago
I've managed to do that: it comes down to adding a Kubernetes label at the 'Node Group' configuration level:
Or adding the 'labels' property in the template used to configure the managed node groups in the '01_create_eks_cluster.sh' script:
####################################################
# EKSCTL ClusterConfig generation
####################################################
function generate_cluster_config() {
  local lt_id=$1
  cat <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: $CONFIG_EKS_CLUSTER_NAME
  region: $CONFIG_REGION
  version: "$CONFIG_K8S_VERSION"
managedNodeGroups:
  - name: $CONFIG_EKS_WORKER_NODE_NAME
    labels: {aws-nitro-enclaves-k8s-dp: enabled}
    launchTemplate:
      id: $lt_id
      version: "1"
    desiredCapacity: $CONFIG_EKS_WORKER_NODE_CAPACITY
EOF
}
We can then remove the labeling code from the '02_enable_device_plugin.sh' script.
Thanks!
Hi,
I need some guidance; I am pretty new to EKS / Kubernetes. I have used the tooling to create the cluster, the node group, and a single node, and to deploy the pod, and everything is working okay.
Now I am testing scaling, that is, adding more nodes to the node group. The nodes in the Enclave node group must be labeled with:
"aws-nitro-enclaves-k8s-dp": "enabled"
However, new nodes don't get this label, since it is only applied when the tooling runs. I would like the label to be applied automatically whenever a new node joins the node group.
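For context, the per-node labeling the tooling performs presumably amounts to something like the following (a sketch against a live cluster; NODE_NAME is a placeholder, and relabeling by hand like this is only a stopgap, not automation):

```shell
# Show each node and whether it carries the device-plugin label.
kubectl get nodes -L aws-nitro-enclaves-k8s-dp

# Manually label a node that joined without it (NODE_NAME is a placeholder).
kubectl label node NODE_NAME aws-nitro-enclaves-k8s-dp=enabled
```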
Any advice on how to do this?
Many thanks, Xavi