kumahq / kuma

🐻 The multi-zone service mesh for containers, Kubernetes and VMs. Built with Envoy. CNCF Sandbox Project.

[injector] Regression: `spec.initContainers[0].name: Required value` #11993

Open · voidlily opened 1 day ago

voidlily commented 1 day ago

What happened?

Warning  FailedCreate     77s                replicaset-controller  Error creating: Pod "lrtest-dashboard-6bbf859f6-p4ltc" is invalid: [spec.initContainers[0].name: Required value, spec.initContainers[0].image: Required value]

I don't know exactly why this is happening yet, but it started happening to me in 2.9.0. Previously I was on 2.8.3, where sidecar injection worked without issues. It happens regardless of whether I put the injection label on the namespace or on the Deployment.
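For reference, injection was enabled with the standard kuma.io/sidecar-injection label; a minimal sketch of both variants (the namespace name here is just illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: lrtest
  labels:
    kuma.io/sidecar-injection: enabled

or on the Deployment's pod template:

spec:
  template:
    metadata:
      labels:
        kuma.io/sidecar-injection: enabled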

Kuma values.yaml:

dataPlane:
  dnsLogging: true
controlPlane:
  autoscaling:
    enabled: true
    minReplicas: 2
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      memory: 512Mi
  tolerations:
    - key: ondemand
      operator: Equal
      value: "true"
      effect: NoSchedule
  egress:
    enabled: true
  envVars:
    KUMA_RUNTIME_KUBERNETES_INJECTOR_IGNORED_SERVICE_SELECTOR_LABELS: rollouts-pod-template-hash
    KUMA_RUNTIME_KUBERNETES_SKIP_MESH_OWNER_REFERENCE: true
    # for kuma CNI, because eks disables ipv6 by default
    KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_IP_FAMILY_MODE: ipv4
    KUMA_RUNTIME_KUBERNETES_INJECTOR_CONTAINER_PATCHES: dp-resources

# https://kuma.io/docs/2.8.x/production/dp-config/cni/
cni:
  enabled: true
  chained: true
  netDir: /etc/cni/net.d
  binDir: /opt/cni/bin
  confName: 10-aws.conflist
  resources:
    requests:
      cpu: 10m
      memory: 100Mi
    limits:
      memory: 100Mi
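To double-check what the control plane actually picked up, the rendered env vars can be inspected (this assumes the default kuma-system namespace and kuma-control-plane deployment name):

kubectl -n kuma-system get deployment kuma-control-plane \
  -o jsonpath='{.spec.template.spec.containers[0].env}'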

The dp-resources ContainerPatch referenced in the values.yaml above:

# https://docs.konghq.com/mesh/latest/introduction/kuma-requirements/
apiVersion: kuma.io/v1alpha1
kind: ContainerPatch
metadata:
  name: dp-resources
  namespace: kuma-system
spec:
  sidecarPatch:
    - op: add
      path: /resources/requests
      value: '{
        "cpu": "50m",
        "memory": "256Mi"
      }'
    - op: add
      path: /resources/limits
      value: '{
        "memory": "256Mi"
      }'
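The patch above is applied mesh-wide via KUMA_RUNTIME_KUBERNETES_INJECTOR_CONTAINER_PATCHES; as a sketch, it could instead be selected per workload with the kuma.io/container-patches annotation on the pod template:

spec:
  template:
    metadata:
      annotations:
        kuma.io/container-patches: dp-resources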

On 2.9.0 I also tried disabling this patch, but it didn't make a difference.

slavogiez commented 17 hours ago

Hi there, we have the same issue with the kong-dataplane deployment. No ContainerPatch on our side but the kong-dataplane already has another init container.

lahabana commented 17 hours ago

Could this be the issue fixed here already: https://github.com/kumahq/kuma/pull/11922?

voidlily commented 12 hours ago

> Hi there, we have the same issue with the kong-dataplane deployment. No ContainerPatch on our side but the kong-dataplane already has another init container.

Do you also run in CNI mode? That might be the culprit, and #11922 might be the fix after all.
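A quick, low-risk way to check is whether a Kuma CNI DaemonSet is installed at all, something like:

kubectl get daemonsets -A | grep -i cni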

lahabana commented 10 hours ago

@voidlily or @slavogiez, do you have a repro in a non-risky environment?

If so, I'd love it if you could confirm that the preview artifact 2.9.1-preview.v9beda2b29 indeed fixes your issue. Here's how to use these preview versions: https://kuma.io/docs/2.9.x/community/contribute-to-kuma/#testing-unreleased-versions
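One low-risk way to see what the injector actually produces, without letting a ReplicaSet create real pods, is a server-side dry-run in an injection-enabled namespace (assuming the webhook permits dry-run calls), roughly:

kubectl run injector-test --image=nginx -n <injection-enabled-namespace> \
  --dry-run=server -o yaml | grep -A 10 'initContainers:'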

voidlily commented 9 hours ago

Yeah, looks like that fixed it. It was easy enough to test by changing global.image.tag to that preview tag. Thanks for publishing preview images on Docker Hub!
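For anyone else wanting to verify, the upgrade was along these lines (release and chart names depend on your install):

helm upgrade kuma kuma/kuma -n kuma-system \
  --reuse-values \
  --set global.image.tag=2.9.1-preview.v9beda2b29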