aws / karpenter-provider-aws

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
https://karpenter.sh

Karpenter on Fargate Profile Results in Failed Readiness/Liveness Checks #6637

Closed kgochenour closed 3 weeks ago

kgochenour commented 3 months ago

Description

Observed Behavior: When a Fargate profile is created for our karpenter namespace, the pods restart and move from the managed node group to Fargate, but they then consistently fail their liveness and readiness checks. As soon as we remove the Fargate profile and the pods go back to managed nodes, Karpenter resumes working as expected.

Here are the pod event errors:

Events:
  Type     Reason          Age   From               Message
  ----     ------          ----  ----               -------
  Normal   LoggingEnabled  2m3s  fargate-scheduler  Successfully enabled logging for pod
  Normal   Scheduled       76s   fargate-scheduler  Successfully assigned karpenter/karpenter-9ff5b8df9-88f96 to fargate-ip-10-200-40-120.us-west-2.compute.internal
  Normal   Pulling         76s   kubelet            Pulling image "public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f"
  Normal   Pulled          72s   kubelet            Successfully pulled image "public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f" in 4.111s (4.111s including waiting). Image size: 46233780 bytes.
  Normal   Created         72s   kubelet            Created container controller
  Normal   Started         71s   kubelet            Started container controller
  Warning  Unhealthy       29s   kubelet            Readiness probe failed: Get "http://10.200.40.120:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy       2s    kubelet            Liveness probe failed: Get "http://10.200.40.120:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Karpenter also produces no logs during this time. When I set logLevel to DEBUG, this is the only log produced:

{"level":"DEBUG","time":"2024-07-31T19:31:42.515Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}

Expected Behavior: Karpenter pods would run without issue on Fargate as they would on Managed Nodes.

Reproduction Steps (Please include YAML):

Have Karpenter running successfully on a managed node group in the karpenter namespace, using the Helm chart values below:

---
priorityClassName: "system-cluster-critical"

replicas: 3

logLevel: debug

settings:
  clusterName: <removed>
  clusterEndpoint: <removed>
  interruptionQueue: <removed>

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: <removed>

podAnnotations:
  ad.datadoghq.com/controller.checks: |
    {
      "karpenter": {
        "init_config": {},
        "instances": [
          {
            "openmetrics_endpoint": "http://%%host%%:8000/metrics"
          }
        ]
      }
    }

controller:
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 1
      memory: 1Gi

Create Fargate Profile for karpenter namespace:

selector {
  namespace = "karpenter"
}
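(For reference, a roughly equivalent AWS CLI call; the cluster name, subnets, and role ARN below are placeholders, not values from this report:)

```sh
# Hypothetical equivalent of the Terraform selector above; all identifiers are placeholders.
aws eks create-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name karpenter \
  --pod-execution-role-arn arn:aws:iam::111122223333:role/KarpenterFargatePodExecutionRole \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --selectors namespace=karpenter
```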

Attach the Fargate profile IAM role with the following permissions:

Delete one pod that is currently running successfully in the karpenter namespace on a managed node.

The recreated pod fails with the errors noted above.

Versions:

njtran commented 3 months ago

How are you configuring pod-level permissions? Are you using Pod Identity or IRSA?

It seems odd to me that you'd see different health/liveness results on Fargate vs managed nodes, and that there are no logs for the Fargate run. Can you make sure you're grabbing the logs of the leader?
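For example, assuming the default karpenter-leader-election lease in the karpenter namespace, something like this should show which replica is currently leading:

```sh
# Which pod currently holds the leader lease?
kubectl get lease karpenter-leader-election -n karpenter \
  -o jsonpath='{.spec.holderIdentity}'
```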

kgochenour commented 3 months ago

@njtran

This is using IRSA.

The health checks don't change, and the logs are interesting. I had been testing with a follower, but when I try all three replicas they are still sad.

Pod Describe from a working pod running on the managed node group

Name:                 karpenter-678bcd88c7-7s5j8
Namespace:            karpenter
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      karpenter
Node:                 <removed> (managed node group node)
Start Time:           Mon, 05 Aug 2024 15:01:00 -0400
Labels:               app.kubernetes.io/instance=karpenter
                      app.kubernetes.io/name=karpenter
                      pod-template-hash=678bcd88c7
Annotations:          ad.datadoghq.com/controller.checks:
                        {
                          "karpenter": {
                            "init_config": {},
                            "instances": [
                              {
                                "openmetrics_endpoint": "http://%%host%%:8000/metrics"
                              }
                            ]
                          }
                        }
Status:               Running
IP:                   <removed>
IPs:
  IP:           <removed>
Controlled By:  ReplicaSet/karpenter-678bcd88c7
Containers:
  controller:
    Container ID:    containerd://f82c7d6393d88eba18694ab88e3cac9c87bb69047d7e7723603c61e56076893d
    Image:           public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f
    Image ID:        public.ecr.aws/karpenter/controller@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f
    Ports:           8000/TCP, 8081/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Mon, 05 Aug 2024 15:01:01 -0400
    Ready:           True
    Restart Count:   0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:      1
      memory:   1Gi
    Liveness:   http-get http://:http/healthz delay=30s timeout=30s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/readyz delay=5s timeout=30s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_MIN_VERSION:       1.19.0-0
      KARPENTER_SERVICE:            karpenter
      LOG_LEVEL:                    debug
      METRICS_PORT:                 8000
      HEALTH_PROBE_PORT:            8081
      SYSTEM_NAMESPACE:             karpenter (v1:metadata.namespace)
      MEMORY_LIMIT:                 1073741824 (limits.memory)
      FEATURE_GATES:                Drift=true,SpotToSpotConsolidation=false
      BATCH_MAX_DURATION:           10s
      BATCH_IDLE_DURATION:          1s
      ASSUME_ROLE_DURATION:         15m
      CLUSTER_NAME:                 <removed>
      CLUSTER_ENDPOINT:             <removed>
      VM_MEMORY_OVERHEAD_PERCENT:   0.075
      INTERRUPTION_QUEUE:           <removed>
      RESERVED_ENIS:                0
      AWS_STS_REGIONAL_ENDPOINTS:   regional
      AWS_DEFAULT_REGION:           us-west-2
      AWS_REGION:                   us-west-2
      AWS_ROLE_ARN:                 <removed>
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bp84c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  kube-api-access-bp84c:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Guaranteed
Node-Selectors:               kubernetes.io/os=linux
Tolerations:                  CriticalAddonsOnly op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=karpenter,app.kubernetes.io/name=karpenter
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  9m5s (x2 over 9m7s)  default-scheduler  0/9 nodes are available: 3 node(s) had untolerated taint {taskrabbit.io/core: }, 6 Insufficient cpu. preemption: 0/9 nodes are available: 3 Preemption is not helpful for scheduling, 6 Insufficient cpu.
  Normal   Scheduled         8m56s                default-scheduler  Successfully assigned karpenter/karpenter-678bcd88c7-7s5j8 to ip-10-200-188-252.us-west-2.compute.internal
  Normal   Pulled            8m55s                kubelet            Container image "public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f" already present on machine
  Normal   Created           8m55s                kubelet            Created container controller
  Normal   Started           8m55s                kubelet            Started container controller

Here are the logs from this managed node group pod:

{"level":"DEBUG","time":"2024-08-05T19:01:01.644Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}
{"level":"DEBUG","time":"2024-08-05T19:01:01.751Z","logger":"controller","message":"discovered region","commit":"490ef94","region":"us-west-2"}
{"level":"DEBUG","time":"2024-08-05T19:01:01.751Z","logger":"controller","message":"discovered cluster endpoint","commit":"490ef94","cluster-endpoint":"https://E4B1A9F4B91B7DF5010F1644F3FF2171.gr7.us-west-2.eks.amazonaws.com"}
{"level":"DEBUG","time":"2024-08-05T19:01:01.758Z","logger":"controller","message":"discovered kube dns","commit":"490ef94","kube-dns-ip":"172.20.0.10"}
{"level":"INFO","time":"2024-08-05T19:01:01.791Z","logger":"controller","message":"webhook disabled","commit":"490ef94"}
{"level":"INFO","time":"2024-08-05T19:01:01.791Z","logger":"controller.controller-runtime.metrics","message":"Starting metrics server","commit":"490ef94"}
{"level":"INFO","time":"2024-08-05T19:01:01.791Z","logger":"controller.controller-runtime.metrics","message":"Serving metrics server","commit":"490ef94","bindAddress":":8000","secure":false}
{"level":"INFO","time":"2024-08-05T19:01:01.791Z","logger":"controller","message":"starting server","commit":"490ef94","name":"health probe","addr":"[::]:8081"}
{"level":"INFO","time":"2024-08-05T19:01:01.892Z","logger":"controller","message":"attempting to acquire leader lease karpenter/karpenter-leader-election...","commit":"490ef94"}

Pod Describe from a sad pod on Fargate

Name:                 karpenter-678bcd88c7-7wxhs
Namespace:            karpenter
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      karpenter
Node:                 <removed> (Fargate node)
Start Time:           Mon, 05 Aug 2024 15:15:12 -0400
Labels:               app.kubernetes.io/instance=karpenter
                      app.kubernetes.io/name=karpenter
                      eks.amazonaws.com/fargate-profile=karpenter
                      pod-template-hash=678bcd88c7
Annotations:          CapacityProvisioned: 1vCPU 2GB
                      Logging: LoggingEnabled
                      ad.datadoghq.com/controller.checks:
                        {
                          "karpenter": {
                            "init_config": {},
                            "instances": [
                              {
                                "openmetrics_endpoint": "http://%%host%%:8000/metrics"
                              }
                            ]
                          }
                        }
Status:               Running
IP:                   <removed>
IPs:
  IP:           <removed>
Controlled By:  ReplicaSet/karpenter-678bcd88c7
Containers:
  controller:
    Container ID:    containerd://16e855ea91dc1deea3a2ec53d4f8c5231f0f3ca1feaeee32d60c50d97c7592c2
    Image:           public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f
    Image ID:        public.ecr.aws/karpenter/controller@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f
    Ports:           8000/TCP, 8081/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Mon, 05 Aug 2024 15:15:16 -0400
    Ready:           False
    Restart Count:   0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:      1
      memory:   1Gi
    Liveness:   http-get http://:http/healthz delay=30s timeout=30s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/readyz delay=5s timeout=30s period=10s #success=1 #failure=3
    Environment:
      KUBERNETES_MIN_VERSION:       1.19.0-0
      KARPENTER_SERVICE:            karpenter
      LOG_LEVEL:                    debug
      METRICS_PORT:                 8000
      HEALTH_PROBE_PORT:            8081
      SYSTEM_NAMESPACE:             karpenter (v1:metadata.namespace)
      MEMORY_LIMIT:                 1073741824 (limits.memory)
      FEATURE_GATES:                Drift=true,SpotToSpotConsolidation=false
      BATCH_MAX_DURATION:           10s
      BATCH_IDLE_DURATION:          1s
      ASSUME_ROLE_DURATION:         15m
      CLUSTER_NAME:                 <removed>
      CLUSTER_ENDPOINT:             <removed>
      VM_MEMORY_OVERHEAD_PERCENT:   0.075
      INTERRUPTION_QUEUE:           <removed>
      RESERVED_ENIS:                0
      AWS_STS_REGIONAL_ENDPOINTS:   regional
      AWS_DEFAULT_REGION:           us-west-2
      AWS_REGION:                   us-west-2
      AWS_ROLE_ARN:                 <removed>
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rw5l8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  kube-api-access-rw5l8:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Guaranteed
Node-Selectors:               kubernetes.io/os=linux
Tolerations:                  CriticalAddonsOnly op=Exists
                              node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  topology.kubernetes.io/zone:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=karpenter,app.kubernetes.io/name=karpenter
Events:
  Type     Reason          Age   From               Message
  ----     ------          ----  ----               -------
  Normal   LoggingEnabled  119s  fargate-scheduler  Successfully enabled logging for pod
  Normal   Scheduled       65s   fargate-scheduler  Successfully assigned karpenter/karpenter-678bcd88c7-7wxhs to fargate-ip-10-200-43-225.us-west-2.compute.internal
  Normal   Pulling         64s   kubelet            Pulling image "public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f"
  Normal   Pulled          61s   kubelet            Successfully pulled image "public.ecr.aws/karpenter/controller:0.37.0@sha256:157f478f5db1fe999f5e2d27badcc742bf51cc470508b3cebe78224d0947674f" in 3.585s (3.585s including waiting). Image size: 46233780 bytes.
  Normal   Created         61s   kubelet            Created container controller
  Normal   Started         61s   kubelet            Started container controller
  Warning  Unhealthy       19s   kubelet            Readiness probe failed: Get "http://10.200.43.225:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy       1s    kubelet            Liveness probe failed: Get "http://10.200.43.225:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Logs from the sad pod:

> kubectl logs karpenter-678bcd88c7-7wxhs -n karpenter

{"level":"DEBUG","time":"2024-08-05T19:17:16.661Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}

And if I go ahead and delete the other two pods still running on managed node groups (3 replicas; the pod above was a follower), there is no leader and they should elect one. All three pods then show only this log line alongside the failed checks:

> kubectl logs karpenter-678bcd88c7-hjxwn -n karpenter
{"level":"DEBUG","time":"2024-08-05T19:22:41.334Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}

> kubectl logs karpenter-678bcd88c7-khd7k -n karpenter
{"level":"DEBUG","time":"2024-08-05T19:23:37.071Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}

> kubectl logs karpenter-678bcd88c7-7wxhs -n karpenter
{"level":"DEBUG","time":"2024-08-05T19:23:16.601Z","logger":"controller","message":"discovered karpenter version","commit":"490ef94","version":"0.37.0"}

And the pod status:

> kubectl get pods -n karpenter
NAME                         READY   STATUS    RESTARTS      AGE
karpenter-678bcd88c7-7wxhs   0/1     Running   5 (37s ago)   11m
karpenter-678bcd88c7-hjxwn   0/1     Running   1 (72s ago)   4m6s
karpenter-678bcd88c7-khd7k   0/1     Running   2 (16s ago)   5m10s
zswanson commented 3 months ago

I'm having consistent liveness failures on managed node pools in us-east-1; I think it corresponds to the periodic DescribeInstanceTypes calls ... manually adjusting the liveness probe timeout from 30s to 60s resolves it completely, but there's no way to set this in the chart.
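For anyone wanting to try the same workaround, here's a sketch of the out-of-band patch I mean (container index 0 assumes the chart's single controller container, as shown in the pod describes above):

```sh
# Sketch: bump the liveness probe timeout on the deployed chart out-of-band.
# Note this will be reverted on the next helm upgrade.
kubectl patch deployment karpenter -n karpenter --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/timeoutSeconds","value":60}]'
```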

Update: this may have been caused by some sort of permissions/configuration issue on my part that unfortunately resulted in zero log output from the controller; I now have Karpenter operational after a few minor tweaks to my config and was able to revert to the probe values provided by the Helm chart. 🤷‍♂️

hahasheminejad commented 2 months ago

We faced a similar issue, which turned out to be a security group blocking DNS packets from reaching the CoreDNS running on standard EC2 instances in the cluster.

To diagnose the problem, I deployed a troubleshooting pod on the same Fargate profile using nicolaka/netshoot. Running dig sts.amazonaws.com resulted in a timeout. This was critical because STS is the first AWS service Karpenter contacts to obtain the temporary credentials it needs to interact with other AWS services like EC2.
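If it's useful to others, the test can be reproduced with a throwaway pod along these lines (pod name and dig timeout are arbitrary):

```sh
# Launch a one-off netshoot pod in the karpenter namespace (the Fargate profile's
# namespace selector schedules it onto Fargate) and test DNS for the STS endpoint.
kubectl run dns-test --rm -it --restart=Never \
  --image=nicolaka/netshoot -n karpenter \
  -- dig +time=3 sts.amazonaws.com
```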

After identifying the root cause, we decided to switch to the node/VPC default DNS by setting --set dnsPolicy=Default, which also provides additional benefits.
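For example (a sketch; the rest of our chart values are elided here):

```sh
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --reuse-values \
  --set dnsPolicy=Default
```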

See issues: #2186 and #4947 for DNS.

ronanbarrett commented 2 months ago

@hahasheminejad I have been performing the same debugging today using nicolaka/netshoot. The issue for me was connectivity between the Fargate pods in EKS and the CoreDNS pods on the EC2 node groups. I updated the security group to allow UDP 53 from the EKS-created security group that is applied to the ENIs attached to the EKS control plane, as well as to any managed workloads. That said, setting --set dnsPolicy=Default is much simpler 👍
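A sketch of that rule with the AWS CLI (both group IDs are placeholders):

```sh
# Allow DNS queries (UDP 53) from the EKS cluster security group to the SG
# in front of the CoreDNS pods on the EC2 node groups; IDs are placeholders.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0nodegroup0000000 \
  --protocol udp --port 53 \
  --source-group sg-0cluster000000000
```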

njtran commented 2 months ago

@kgochenour do any of the above answers help you? It sounds like leader election is your current area of focus, but I'd need a lot more information to understand how leader election might behave differently on Fargate vs a managed node group. I'd recommend reaching out in the Karpenter channel of the Kubernetes Slack to see if anyone has hit the same issue.

Blunderchips commented 1 month ago

Same issue here: setting --set dnsPolicy=Default had no effect. Even though I am running in debug mode, there is literally only a single log message.

{"level":"DEBUG","time":"2024-09-25T16:31:23.091Z","logger":"controller","caller":"operator/operator.go:149","message":"discovered karpenter version","commit":"b897114","version":"1.0.2"}

@njtran is there any specific debug information you are looking for that I can pass on? (:

cblkwell commented 3 weeks ago

I'm running into what appears to be a very similar issue here -- I'm running 0.36.5, and when I start things up, I get more log messages; DNS is working, because I can see it discover things in debug mode, but the pod still constantly fails its health checks.

Log messages:

```
{"level":"DEBUG","time":"2024-10-11T18:00:04.667Z","logger":"controller","message":"discovered karpenter version","commit":"487a6e0","version":"0.36.5"}
{"level":"DEBUG","time":"2024-10-11T18:00:05.230Z","logger":"controller","message":"discovered region","commit":"487a6e0","region":"us-east-1"}
{"level":"DEBUG","time":"2024-10-11T18:00:05.233Z","logger":"controller","message":"discovered cluster endpoint","commit":"487a6e0","cluster-endpoint":"https://REDACTED.us-east-1.eks.amazonaws.com"}
{"level":"DEBUG","time":"2024-10-11T18:00:05.238Z","logger":"controller","message":"discovered kube dns","commit":"487a6e0","kube-dns-ip":"10.100.0.10"}
{"level":"INFO","time":"2024-10-11T18:00:05.247Z","logger":"controller.controller-runtime.metrics","message":"Starting metrics server","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:00:05.249Z","logger":"controller.controller-runtime.metrics","message":"Serving metrics server","commit":"487a6e0","bindAddress":":8000","secure":false}
{"level":"INFO","time":"2024-10-11T18:00:05.250Z","logger":"controller","message":"starting server","commit":"487a6e0","name":"health probe","addr":"[::]:8081"}
{"level":"INFO","time":"2024-10-11T18:00:05.436Z","logger":"controller","message":"Starting informers...","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:00:06.631Z","logger":"controller","message":"attempting to acquire leader lease karpenter/karpenter-leader-election...","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.343Z","logger":"controller","message":"successfully acquired lease karpenter/karpenter-leader-election","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller.provisioner","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller.disruption.queue","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller.eviction-queue","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"provisioner.trigger.pod","controllerGroup":"","controllerKind":"Pod","source":"kind source: *v1.Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"provisioner.trigger.pod","controllerGroup":"","controllerKind":"Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.344Z","logger":"controller.disruption","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.345Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"provisioner.trigger.pod","controllerGroup":"","controllerKind":"Pod","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:14.345Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodepool.hash","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1beta1.NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.345Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodepool.hash","controllerGroup":"karpenter.sh","controllerKind":"NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"provisioner.trigger.node","controllerGroup":"","controllerKind":"Node","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"provisioner.trigger.node","controllerGroup":"","controllerKind":"Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"state.node","controllerGroup":"","controllerKind":"Node","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"state.node","controllerGroup":"","controllerKind":"Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"state.daemonset","controllerGroup":"apps","controllerKind":"DaemonSet","source":"kind source: *v1.DaemonSet"}
{"level":"INFO","time":"2024-10-11T18:01:14.346Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"state.daemonset","controllerGroup":"apps","controllerKind":"DaemonSet"}
{"level":"INFO","time":"2024-10-11T18:01:14.347Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"state.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1beta1.NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.347Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"state.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.348Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"state.pod","controllerGroup":"","controllerKind":"Pod","source":"kind source: *v1.Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.348Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"state.pod","controllerGroup":"","controllerKind":"Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.348Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"node.termination","controllerGroup":"","controllerKind":"Node","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"node.termination","controllerGroup":"","controllerKind":"Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"state.nodeclaim","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"state.nodeclaim","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"metrics.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1beta1.NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"metrics.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"metrics.pod","controllerGroup":"","controllerKind":"Pod","source":"kind source: *v1.Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.349Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"metrics.pod","controllerGroup":"","controllerKind":"Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.350Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodepool.counter","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1beta1.NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.350Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodepool.counter","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.350Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodepool.counter","controllerGroup":"karpenter.sh","controllerKind":"NodePool","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.350Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodepool.counter","controllerGroup":"karpenter.sh","controllerKind":"NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.351Z","logger":"controller.metrics.node","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.351Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.353Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.353Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.353Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.consistency","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.353Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.consistency","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.353Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclaim.consistency","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1.Node"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller.nodeclaim.garbagecollection","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"lease.garbagecollection","controllerGroup":"coordination.k8s.io","controllerKind":"Lease","source":"kind source: *v1.Lease"}
{"level":"INFO","time":"2024-10-11T18:01:14.354Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"lease.garbagecollection","controllerGroup":"coordination.k8s.io","controllerKind":"Lease"}
{"level":"INFO","time":"2024-10-11T18:01:14.355Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.355Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodePool"}
{"level":"INFO","time":"2024-10-11T18:01:14.355Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1.Pod"}
{"level":"INFO","time":"2024-10-11T18:01:14.355Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *unstructured.Unstructured"}
{"level":"INFO","time":"2024-10-11T18:01:14.355Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.356Z","logger":"controller.nodeclaim.garbagecollection","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.359Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclass","controllerGroup":"karpenter.k8s.aws","controllerKind":"EC2NodeClass","source":"kind source: *v1beta1.EC2NodeClass"}
{"level":"INFO","time":"2024-10-11T18:01:14.360Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclass","controllerGroup":"karpenter.k8s.aws","controllerKind":"EC2NodeClass","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.360Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclass","controllerGroup":"karpenter.k8s.aws","controllerKind":"EC2NodeClass"}
{"level":"INFO","time":"2024-10-11T18:01:14.360Z","logger":"controller.pricing","message":"starting controller","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.361Z","logger":"controller","message":"Starting EventSource","commit":"487a6e0","controller":"nodeclaim.tagging","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","source":"kind source: *v1beta1.NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.362Z","logger":"controller","message":"Starting Controller","commit":"487a6e0","controller":"nodeclaim.tagging","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim"}
{"level":"INFO","time":"2024-10-11T18:01:14.633Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"state.node","controllerGroup":"","controllerKind":"Node","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:14.633Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"provisioner.trigger.node","controllerGroup":"","controllerKind":"Node","worker count":10}
{"level":"DEBUG","time":"2024-10-11T18:01:14.646Z","logger":"controller.disruption","message":"waiting on cluster sync","commit":"487a6e0"}
{"level":"INFO","time":"2024-10-11T18:01:14.831Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodepool.hash","controllerGroup":"karpenter.sh","controllerKind":"NodePool","worker count":10}
{"level":"DEBUG","time":"2024-10-11T18:01:14.931Z","logger":"controller","message":"hydrated launch template cache","commit":"487a6e0","tag-key":"karpenter.k8s.aws/cluster","tag-value":"REDACTED","count":0}
{"level":"INFO","time":"2024-10-11T18:01:15.231Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"state.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.232Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"state.daemonset","controllerGroup":"apps","controllerKind":"DaemonSet","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.232Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"lease.garbagecollection","controllerGroup":"coordination.k8s.io","controllerKind":"Lease","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.232Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"state.pod","controllerGroup":"","controllerKind":"Pod","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.232Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"node.termination","controllerGroup":"","controllerKind":"Node","worker count":100}
{"level":"INFO","time":"2024-10-11T18:01:15.232Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"state.nodeclaim","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.234Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"metrics.nodepool","controllerGroup":"karpenter.sh","controllerKind":"NodePool","worker count":1}
{"level":"INFO","time":"2024-10-11T18:01:15.234Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"metrics.pod","controllerGroup":"","controllerKind":"Pod","worker count":1}
{"level":"INFO","time":"2024-10-11T18:01:15.234Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodepool.counter","controllerGroup":"karpenter.sh","controllerKind":"NodePool","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.234Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclaim.consistency","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.234Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclaim.termination","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":100}
{"level":"INFO","time":"2024-10-11T18:01:15.333Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclass","controllerGroup":"karpenter.k8s.aws","controllerKind":"EC2NodeClass","worker count":10}
{"level":"INFO","time":"2024-10-11T18:01:15.333Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclaim.tagging","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":1}
{"level":"INFO","time":"2024-10-11T18:01:15.333Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclaim.lifecycle","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":1000}
{"level":"INFO","time":"2024-10-11T18:01:15.334Z","logger":"controller","message":"Starting workers","commit":"487a6e0","controller":"nodeclaim.disruption","controllerGroup":"karpenter.sh","controllerKind":"NodeClaim","worker count":10}
{"level":"DEBUG","time":"2024-10-11T18:01:16.452Z","logger":"controller.nodeclass","message":"discovered subnets","commit":"487a6e0","ec2nodeclass":"ci-bottlerocket","subnets":["subnet-REDACTED (us-east-1a)","subnet-REDACTED (us-east-1c)","subnet-REDACTED (us-east-1b)"]}
{"level":"DEBUG","time":"2024-10-11T18:01:16.733Z","logger":"controller.disruption","message":"waiting on cluster sync","commit":"487a6e0"}
{"level":"DEBUG","time":"2024-10-11T18:01:16.857Z","logger":"controller.nodeclass","message":"discovered security groups","commit":"487a6e0","ec2nodeclass":"ci-bottlerocket","security-groups":["sg-REDACTED","sg-REDACTED"]}
{"level":"DEBUG","time":"2024-10-11T18:01:17.533Z","logger":"controller.nodeclass","message":"discovered amis","commit":"487a6e0","ec2nodeclass":"ci-bottlerocket","ids":"ami-0821c83dacd34f69e","count":1}
{"level":"DEBUG","time":"2024-10-11T18:01:17.634Z","logger":"controller.nodeclass","message":"discovered subnets","commit":"487a6e0","ec2nodeclass":"rl-bottlerocket","subnets":["subnet-REDACTED (us-east-1b)","subnet-REDACTED (us-east-1c)","subnet-REDACTED (us-east-1a)"]}
{"level":"DEBUG","time":"2024-10-11T18:01:17.636Z","logger":"controller.nodeclass","message":"discovered security groups","commit":"487a6e0","ec2nodeclass":"rl-bottlerocket","security-groups":["sg-REDACTED","sg-REDACTED"]}
{"level":"DEBUG","time":"2024-10-11T18:01:17.642Z","logger":"controller.nodeclass","message":"discovered security groups","commit":"487a6e0","ec2nodeclass":"system-bottlerocket","security-groups":["sg-REDACTED","sg-REDACTED"]}
```
kgochenour commented 3 weeks ago

To follow up from my original issue, the issue was in fact DNS for me. Using the fix noted in #2186 of setting dnsPolicy=Default got Karpenter working.

We ultimately moved away from using Fargate and instead force Karpenter onto a managed node group on EKS. This is because using Fargate broke all the Datadog integrations and logging we had for Karpenter. We could still get the information with Datadog's Fargate integration, but it changed too many metric and field names for us, so we went back to good ole EC2.
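For anyone doing the same, a sketch of pinning the chart to a managed node group via the chart's nodeSelector value (the node group name is a placeholder; any label unique to the MNG works):

```sh
# Hypothetical: schedule Karpenter pods only onto a specific managed node group
# using the eks.amazonaws.com/nodegroup label that EKS applies to MNG nodes.
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace karpenter \
  --reuse-values \
  --set 'nodeSelector.eks\.amazonaws\.com/nodegroup=karpenter-system'
```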

@cblkwell I'm going to close this issue as my issue was resolved. Your logs look different than mine as well, potentially pointing to another issue. Also try the Karpenter channel in the Kubernetes Slack. Bunch of smart folks there too.

Happy Karpentering.

cblkwell commented 3 weeks ago

Thanks for letting me know, I'll do that.