Hello, I am new to EKS. I am following this link to create worker nodes, using a combination of On-Demand and Spot Instances.

[ec2-user@ip-192-168-100-253 ec2-spot-eks-solution]$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-101-67.eu-central-1.compute.internal     Ready    <none>   13m   v1.13.8-eks-cd3eb0
ip-192-168-103-103.eu-central-1.compute.internal    Ready    <none>   13m   v1.13.8-eks-cd3eb0
ip-192-168-103-70.eu-central-1.compute.internal     Ready    <none>   13m   v1.13.8-eks-cd3eb0
While using the Cluster Autoscaler, I am getting the errors below:
E0828 16:27:42.353452 1 static_autoscaler.go:168] Failed to update node registry: Unable to get first autoscaling.Group for [REDACTED]
I0828 16:27:42.895068 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
I0828 16:27:44.905532 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
I0828 16:27:46.915096 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
I0828 16:27:48.924381 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
I0828 16:27:50.934511 1 leaderelection.go:199] successfully renewed lease kube-system/cluster-autoscaler
I0828 16:27:52.353611 1 static_autoscaler.go:114] Starting main loop
E0828 16:27:52.450797 1 static_autoscaler.go:168] Failed to update node registry: Unable to get first autoscaling.Group for [REDACTED]
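For context, the Cluster Autoscaler is pointed at the node group through the flags on its container command in the Deployment. The snippet below is only a sketch of a typical AWS setup; the ASG name and min/max bounds are placeholders, not my real values:

command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=info
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --nodes=1:10:my-spot-node-group   # min:max:ASG-name (placeholder, not my real ASG)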
Here is the cluster-autoscaler policy, which is attached to the NodeInstanceRole:
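For reference, the minimal permissions recommended in the Cluster Autoscaler AWS documentation are shown below; my policy is modeled on this, so treat it as a sketch rather than my exact document:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}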
Am I missing something?