Open dmitry-mightydevops opened 1 month ago
Possibly a duplicate of https://github.com/kubernetes-sigs/karpenter/issues/1645.
Responded here, but this is expected with spot-to-spot consolidation. The CloudProvider takes in all of the spot possibilities on initial launch and returns the instance type that it considers optimal at that point in time. On AWS, this means CreateFleet determines which instance type has the best cost/availability combination and launches that type for you. That may not be the cheapest instance type, which is what you are seeing here. In this case, though, the choice is close to the bottom of the price range.
Fleet most likely skipped those bottom six instance types because they had a much higher chance of being interrupted than the instance type you were placed in.
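If strict price ordering matters more than interruption risk, one option is to narrow the NodePool requirements so that Fleet only has cheaper candidates to pick from. A minimal sketch, assuming Karpenter v1 APIs; it reuses the NodePool name from this report, but the values are illustrative, not the reporter's actual config:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: celery-worker-import-export
spec:
  template:
    spec:
      requirements:
        # Cap the instance size so Fleet cannot hand back a 2xlarge
        # when an xlarge would fit the workload.
        - key: karpenter.k8s.aws/instance-size
          operator: In
          values: ["large", "xlarge"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
```

The trade-off is that narrowing the pool reduces spot diversity, which tends to increase the very interruption rate that Fleet's cost/availability weighting is protecting you from.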
/triage accepted
Description
Observed Behavior:
Karpenter scaled nodes in response to KEDA/HPA scaling. This eventually left a single beefy node with no load, and the node reports the Unconsolidatable reason.
This is my node: karpenter.k8s.aws/instance-size=2xlarge
Expected Behavior:
The node should be replaced with a cheaper one.
Reproduction Steps (Please include YAML):
class:
➜ kg ec2nodeclass.karpenter.k8s.aws/celery-worker-import-export -o yaml | k neat
pool:
➜ kg nodepool celery-worker-import-export -o yaml | k neat
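The part of the NodePool spec that governs this behavior is the disruption block. A minimal sketch, assuming Karpenter v1 APIs; this is illustrative rather than the reporter's actual config, and spot-to-spot replacement additionally requires the SpotToSpotConsolidation feature gate to be enabled on the controller:

```yaml
spec:
  disruption:
    # Allow Karpenter to replace an underutilized node with a cheaper one,
    # not just remove empty nodes.
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 30s   # hypothetical value; tune per workload
```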
When I deleted the node:
I got the following in my Karpenter logs:
So it allocated a more suitable t3.xlarge node instead of the t3.2xlarge I had before.
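A quick way to confirm what Karpenter landed on (a generic check, not part of the original report) is to list nodes with the standard instance-type and capacity-type labels:
➜ kubectl get nodes -L node.kubernetes.io/instance-type -L karpenter.sh/capacity-type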
Versions:
Kubernetes Version (kubectl version): AWS EKS 1.30