Closed: byndcivilization closed this issue 1 year ago.
Is your provisioner exactly as written above? It looks like you have invalid YAML with two colons on L5 of the provisioner you copied. The other thing is that the name you pasted for the Provisioner, default-private-workers, is different from the one the provisioning loop in Karpenter is launching with, default-private-worker. Can you grab the default-private-worker provisioner and see what your requirements are there?
Oof, yup: that extra colon was the issue. The name mismatch was a fat finger when trying to pull the Terraform templating out into something readable. I'm kind of shocked the tf kubectl provider let me apply that.
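For reference, a minimal sketch of what a valid requirements block on a Karpenter v0.26 (karpenter.sh/v1alpha5) Provisioner could look like when pinned to the t3 sizes named below; the providerRef and ttlSecondsAfterEmpty values are illustrative assumptions, not taken from the reporter's actual manifest. A stray second colon on any of these lines is enough to break the mapping, which is the problem identified above.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default-private-worker
spec:
  providerRef:
    name: default-private-worker     # assumed; must match the AWSNodeTemplate name
  requirements:
    # Constrain Karpenter to the four t3 sizes named under Expected Behavior.
    - key: node.kubernetes.io/instance-type
      operator: In
      values: ["t3.micro", "t3.small", "t3.medium", "t3.large"]
  ttlSecondsAfterEmpty: 30           # illustrative; not stated in the issue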
Version
Karpenter Version: v0.26.1
Kubernetes Version: v1.25.6-eks-48e63af
Expected Behavior
Should spin up a micro, small, medium, or large instance in the t3 family.
Actual Behavior
Spins up some flavor of c6a.
Steps to Reproduce the Problem
I'm running a relatively fresh cluster and trying to install Karpenter as the scaling engine. I have one default EKS managed node group for static resources like the Karpenter controllers. I'm trying to spin up a default Karpenter provisioner in private subnets for workloads. I'm provisioning all of this in Terraform and applying manifests using the kubectl provider v1.14; the manifests being applied are below. When I scale up the inflator demonstration deployment, Karpenter does provision the node, but it's always a c6a. Subnets and security groups are being respected.
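As context for the Provisioner and Node template sections below, a typical AWSNodeTemplate for private subnets on Karpenter v0.26 is sketched here; the name and discovery tag are placeholders, not the reporter's actual selectors.

apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default-private-worker
spec:
  subnetSelector:
    karpenter.sh/discovery: my-cluster      # placeholder tag selecting the private subnets
  securityGroupSelector:
    karpenter.sh/discovery: my-cluster      # placeholder tag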
Provisioner
Node template
Resource Specs and Logs
Karpenter logs
Community Note