Closed RafaelMoreira1180778 closed 2 years ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close
@RafaelMoreira1180778: Closing this issue.
@RafaelMoreira1180778 were you able to resolve this issue?
In the end, how did you manage to solve this issue?
Which component are you using?: Cluster-Autoscaler
What version of the component are you using?:
k8s.gcr.io/autoscaling/cluster-autoscaler:v1.23.0
What k8s version are you using (kubectl version)?:
kubectl version output
What environment is this in?: AWS EC2 with MicroK8s
What did you expect to happen?:
The Cluster-Autoscaler fetches my current ASG status and adjusts the desired state of the ASG according to the needs of the cluster. I also expect the CA to describe the tags and recognize that the ASG has the correct tags.
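As a rough illustration of the expected behavior (this is a hypothetical sketch, not Cluster-Autoscaler internals): the autoscaler should pick a new desired capacity for the ASG based on cluster need, clamped to the group's configured min and max sizes.

```python
# Illustrative sketch only: names and logic are assumptions, not CA code.
def adjust_desired_capacity(needed_nodes: int, min_size: int, max_size: int) -> int:
    """Return the desired capacity to request, clamped to the ASG bounds."""
    return max(min_size, min(needed_nodes, max_size))

# Example: the cluster needs 7 nodes but the ASG allows at most 5.
print(adjust_desired_capacity(7, min_size=1, max_size=5))  # prints 5
```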
What happened instead?:
Got the error:
Node should not be processed by cluster autoscaler (no node group config)
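For context, this message is logged for nodes that do not match any node group the CA knows about. With tag-based auto-discovery on AWS, the CA Deployment typically passes a flag along these lines (a sketch of the relevant container args; `<cluster-name>` is a placeholder for your cluster's name tag, and the surrounding manifest is omitted):

```yaml
# Fragment of the cluster-autoscaler container spec (illustrative):
command:
  - ./cluster-autoscaler
  - --cloud-provider=aws
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<cluster-name>
```

Note that a control-plane node that is not in any ASG (as described below) is expected to produce this log line, since it belongs to no discoverable node group.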
How to reproduce it (as minimally and precisely as possible):
Manifest Deployed:
AWS EC2 Instance IAM permissions:
CA Parameters reported by the pod logs:
Anything else we need to know?:
1 master node, not in any ASG. The cluster is formed by ASG EC2 instances.
The nodes inside the Kubernetes cluster have the correct providerID set.
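The providerID can be inspected with `kubectl get nodes -o jsonpath='{.items[*].spec.providerID}'`. As a self-contained illustration (an assumption about the expected shape, not CA code), AWS providerIDs follow the form `aws:///<availability-zone>/<instance-id>`:

```python
import re

# Illustrative validator for the AWS providerID format aws:///<az>/<instance-id>.
PROVIDER_ID_RE = re.compile(r"^aws:///[a-z0-9-]+/i-[0-9a-f]+$")

def is_valid_aws_provider_id(provider_id: str) -> bool:
    """Return True if the string looks like an AWS node providerID."""
    return bool(PROVIDER_ID_RE.match(provider_id))

print(is_valid_aws_provider_id("aws:///eu-west-1a/i-0123456789abcdef0"))  # prints True
```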
The EC2 instance where the CA is running (master node) can perform
aws sts get-caller-identity
and all the necessary commands to retrieve the required resource tags. The ASG tags are the following (they are set to propagate on launch, so these are also the tags of each EC2 instance inside the cluster, except for the master node):