pulumi / pulumi-eks

A Pulumi component for easily creating and managing an Amazon EKS Cluster
https://www.pulumi.com/registry/packages/eks/
Apache License 2.0

resource mapping not found for name: "eniconfigs.crd.k8s.amazonaws.com" #870

Open MayureshGharat opened 1 year ago

MayureshGharat commented 1 year ago

What happened?

I am using Pulumi to deploy an EKS cluster in AWS.

When I run pulumi up, I see the following error:

├─ eks:index:VpcCni  cache-work-eks-prod-vpc-cni  **creating failed**  1 error

Diagnostics:
  pulumi:pulumi:Stack (eks-eks-prod):
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
    error: resource mapping not found for name: "eniconfigs.crd.k8s.amazonaws.com" namespace: "" from "/var/folders/t6/zryv0n4s6wd5w39k8kqt8f880000gn/T/tmp-40147bhaoYlrisNwt.tmp": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first

    error: update failed

  eks:index:VpcCni (cache-workloads-eks-prod-vpc-cni):
    error: Command failed: kubectl apply -f /var/folders/t6/zryv0n4s6wd5w39k8kqt8f880000gn/T/tmp-40147bhaoYlrisNwt.tmp
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
    Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
    error: resource mapping not found for name: "eniconfigs.crd.k8s.amazonaws.com" namespace: "" from "/var/folders/t6/zryv0n4s6wd5w39k8kqt8f880000gn/T/tmp-40147bhaoYlrisNwt.tmp": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first

cat tmp-40147bhaoYlrisNwt.tmp

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aws-node
rules:
  - apiGroups:
      - crd.k8s.amazonaws.com
    resources:
      - eniconfigs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - pods
      - namespaces
    verbs:
      - list
      - watch
      - get
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - list
      - watch
      - get
      - update
  - apiGroups:
      - extensions
    resources:
      - '*'
    verbs:
      - list
      - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aws-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aws-node
subjects:
  - kind: ServiceAccount
    name: aws-node
    namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: aws-node
  namespace: kube-system
  labels:
    k8s-app: aws-node
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
  selector:
    matchLabels:
      k8s-app: aws-node
  template:
    metadata:
      labels:
        k8s-app: aws-node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
                  - key: eks.amazonaws.com/compute-type
                    operator: NotIn
                    values:
                      - fargate
      containers:
        - env:
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: WARM_ENI_TARGET
              value: '1'
            - name: AWS_VPC_K8S_CNI_LOGLEVEL
              value: DEBUG
            - name: AWS_VPC_K8S_CNI_LOG_FILE
              value: /host/var/log/aws-routed-eni/ipamd.log
            - name: AWS_VPC_K8S_CNI_VETHPREFIX
              value: eni
            - name: AWS_VPC_ENI_MTU
              value: '9001'
            - name: AWS_VPC_K8S_PLUGIN_LOG_LEVEL
              value: DEBUG
            - name: AWS_VPC_K8S_PLUGIN_LOG_FILE
              value: /var/log/aws-routed-eni/plugin.log
            - name: ENABLE_POD_ENI
              value: 'false'
            - name: AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER
              value: 'false'
            - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
              value: 'false'
            - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
              value: 'false'
          image: '602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.5'
          imagePullPolicy: Always
          livenessProbe:
            exec:
              command:
                - /app/grpc-health-probe
                - '-addr=:50051'
            initialDelaySeconds: 60
          name: aws-node
          ports:
            - containerPort: 61678
              name: metrics
          readinessProbe:
            exec:
              command:
                - /app/grpc-health-probe
                - '-addr=:50051'
            initialDelaySeconds: 1
          resources:
            requests:
              cpu: 10m
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /host/var/log/aws-routed-eni
              name: log-dir
            - mountPath: /var/run/aws-node
              name: run-dir
            - mountPath: /var/run/dockershim.sock
              name: dockershim
            - mountPath: /run/xtables.lock
              name: xtables-lock
      hostNetwork: true
      initContainers:
        - env:
            - name: DISABLE_TCP_EARLY_DEMUX
              value: 'false'
          image: >-
            602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.7.5
          imagePullPolicy: Always
          name: aws-vpc-cni-init
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
      priorityClassName: system-node-critical
      serviceAccountName: aws-node
      terminationGracePeriodSeconds: 10
      tolerations:
        - operator: Exists
      volumes:
        - hostPath:
            path: /opt/cni/bin
          name: cni-bin-dir
        - hostPath:
            path: /etc/cni/net.d
          name: cni-net-dir
        - hostPath:
            path: /var/run/dockershim.sock
          name: dockershim
        - hostPath:
            path: /run/xtables.lock
          name: xtables-lock
        - hostPath:
            path: /var/log/aws-routed-eni
            type: DirectoryOrCreate
          name: log-dir
        - hostPath:
            path: /var/run/aws-node
            type: DirectoryOrCreate
          name: run-dir
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eniconfigs.crd.k8s.amazonaws.com
spec:
  group: crd.k8s.amazonaws.com
  names:
    kind: ENIConfig
    plural: eniconfigs
    singular: eniconfig
  scope: Cluster
  versions:
    - name: v1alpha1
      served: true
      storage: true

Expected Behavior

Pulumi should successfully set up the cluster.

Steps to reproduce

My dependencies are as follows:

package.json:

{
  "name": "cache-pulumi",
  "devDependencies": {
    "@types/node": "^14",
    "@typescript-eslint/eslint-plugin": "^5.13.0",
    "@typescript-eslint/parser": "^5.13.0",
    "eslint": "^8.10.0",
    "eslint-config-prettier": "^8.4.0",
    "eslint-plugin-prettier": "^4.0.0",
    "husky": "^7.0.4",
    "lint-staged": "^12.4.0",
    "prettier": "^2.5.1",
    "typescript": "^4.5.5"
  },
  "dependencies": {
    "@pulumi/aws": "^5.6.0",
    "@pulumi/awsx": "^0.40.0",
    "@pulumi/eks": "^1.0.0",
    "@pulumi/kubernetes": "^3.21.0",
    "@pulumi/pulumi": "^3.34.0"
  },
  "lint-staged": {
    "*.ts": "eslint --cache --cache-location ./node_modules/.cache/.eslintcache --fix"
  },
  "scripts": {
    "prepare": "husky install"
  }
}

My AWS-CLI version:

aws --version
aws-cli/2.7.11 Python/3.9.11 Darwin/22.3.0 exe/x86_64 prompt/off

My kubectl version:

kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.26.3
Kustomize Version: v4.5.7
Unable to connect to the server: dial tcp: lookup 6C20F01740453585F4F70D2F9D525EEA.gr7.us-east-1.eks.amazonaws.com: no such host

code:

const workloadsVpc = new awsx.ec2.Vpc(`${name}-vpc`, {
  cidrBlock: "10.1.0.0/16",
  subnets: [
    {
      type: "private",
      cidrMask: 24,
      tags: {
        [clusterTag]: "owned",
        "kubernetes.io/role/internal-elb": "1",
      },
    },
    {
      type: "public",
      cidrMask: 24,
      tags: {
        [clusterTag]: "owned",
        "kubernetes.io/role/elb": "1",
      },
    },
  ],
  tags: {
    Name: `${name}-vpc`,
  },
});

const cluster = new eks.Cluster(name, {
  name: clusterName,
  vpcId: workloadsVpc.id,
  privateSubnetIds: workloadsVpc.privateSubnetIds,
  publicSubnetIds: workloadsVpc.publicSubnetIds,
  desiredCapacity: 3,
  maxSize: 4,
  instanceType: "t3.medium",
  providerCredentialOpts: { profileName: process.env.AWS_PROFILE },
  createOidcProvider: true,
});

Output of pulumi about

CLI
Version      3.50.0
Go Version   go1.19.4
Go Compiler  gc

Host
OS       darwin
Version  13.2
Arch     arm64

Pulumi locates its logs in /var/folders/t6/zryv0n4s6wd5w39k8kqt8f880000gn/T/ by default
warning: Failed to read project: no Pulumi.yaml project file found (searching upwards from /Users/mayureshgharat/usecache/Pulumi_latest/pulumi). If you have not created a project yet, use pulumi new to do so: no project file found
warning: Could not access the backend: unable to check if bucket s3://cache-pulumi-ce18dab is accessible: blob (code=Unknown): MissingRegion: could not find region configuration
warning: A new version of Pulumi is available. To upgrade from version '3.50.0' to '3.60.0', run
   $ brew upgrade pulumi
or visit https://pulumi.com/docs/reference/install/ for manual instructions and release notes.

Additional context

No response


MayureshGharat commented 1 year ago

Setting the Kubernetes version explicitly to 1.21 avoids this issue:

const cluster = new eks.Cluster(name, {
  version: "1.21",
  name: clusterName,
  vpcId: workloadsVpc.id,
  privateSubnetIds: workloadsVpc.privateSubnetIds,
  publicSubnetIds: workloadsVpc.publicSubnetIds,
  desiredCapacity: 3,
  maxSize: 4,
  instanceType: "t3.medium",
  providerCredentialOpts: { profileName: process.env.AWS_PROFILE },
  createOidcProvider: true,
});
rquitales commented 1 year ago

@MayureshGharat Thanks for reporting this issue in detail! Would you also be able to let me know what version of Kubernetes your cluster is on (since kubectl does not show that in your report)?

Could you also let me know how you got to this point with this Pulumi program: was it previously working and then stopped, or is this a completely new setup where you're spinning up a new EKS cluster?
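If kubectl can't reach the API server, something like the following should still report the control-plane version via the AWS API (the cluster name below is a placeholder):

# Prints the cluster's Kubernetes control-plane version, e.g. "1.25".
aws eks describe-cluster --name <cluster-name> --query 'cluster.version' --output text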

MayureshGharat commented 1 year ago

> @MayureshGharat Thanks for reporting this issue in detail! Would you also be able to let me know what version of Kubernetes your cluster is on (since kubectl does not show that in your report)?
>
> Could you also let me know how you got to this point with this Pulumi program: was it previously working and then stopped, or is this a completely new setup where you're spinning up a new EKS cluster?

@rquitales it was deploying 1.25 on AWS EKS by default.

rquitales commented 1 year ago

I'm still looking through this issue, but wanted to provide some updates as well.

Unfortunately, I was not able to reproduce the error with the code you provided. I used your Pulumi code and the provided package.json to recreate your environment, and I also tried using older versions of the plugins/SDKs and simulating an update, to no avail.

Here's my analysis of what appears to be happening based on the logs you provided: for some reason, your Pulumi program still appears to be using an older provider to spin up the cluster. The error resource mapping not found for name: "eniconfigs.crd.k8s.amazonaws.com" occurs because on Kubernetes v1.22 and above, CRDs must use the apiextensions.k8s.io/v1 API version; apiextensions.k8s.io/v1beta1, which we used in older pulumi-eks versions, has been removed. When you run your program, it tries to apply the ENIConfig CRD with apiVersion apiextensions.k8s.io/v1beta1, which fails on Kubernetes v1.22 and higher, and this in turn prevents the actual ENIConfig custom resources from being applied.

Setting the version explicitly to 1.21 works for you because Kubernetes v1.21 still serves CRDs under apiextensions.k8s.io/v1beta1. We'll need to figure out why your environment still seems to be using the older provider.
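For what it's worth, a quick way to check which apiextensions API versions a cluster actually serves is:

# On Kubernetes v1.22+ this prints only apiextensions.k8s.io/v1;
# seeing apiextensions.k8s.io/v1beta1 would indicate a pre-1.22 control plane.
kubectl api-versions | grep apiextensions.k8s.io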

@MayureshGharat Could you help provide a bit more information about your environment:

  1. Re-run pulumi about within your Pulumi program's working directory, so we can get the versions of the plugins used.
  2. Provide the output of ls ~/.pulumi/plugins.
  3. From the source working directory, provide the output of cat node_modules/@pulumi/eks/cni/aws-k8s-cni.yaml.
MayureshGharat commented 1 year ago

Hi @rquitales, thanks a lot for looking into this. Please find the requested details below:

pulumi about

CLI
Version      3.50.0
Go Version   go1.19.4
Go Compiler  gc

Plugins
NAME    VERSION
nodejs  unknown

Host
OS       darwin
Version  13.2
Arch     arm64

This project is written in nodejs: executable='/Users/mayureshgharat/.nvm/versions/node/v16.14.2/bin/node' version='v16.14.2'

Pulumi locates its logs in /var/folders/t6/zryv0n4s6wd5w39k8kqt8f880000gn/T/ by default
warning: Failed to get information about the Pulumi program's dependencies: could not find either /Users/mayureshgharat/usecache/Pulumi_latest/pulumi/infra/eks/yarn.lock or /Users/mayureshgharat/usecache/Pulumi_latest/pulumi/infra/eks/package-lock.json
warning: Could not access the backend: unable to check if bucket s3://cache-pulumi-ce18dab is accessible: blob (code=Unknown): MissingRegion: could not find region configuration
warning: A new version of Pulumi is available. To upgrade from version '3.50.0' to '3.60.1', run
   $ brew upgrade pulumi
or visit https://pulumi.com/docs/reference/install/ for manual instructions and release notes.

ls ~/.pulumi/plugins

resource-aws-v4.38.0             resource-aws-v5.10.0.lock        resource-aws-v5.2.0              resource-aws-v5.9.1.lock         resource-docker-v3.4.1           resource-kafka-v3.3.0.lock       resource-kubernetes-v3.19.4
resource-aws-v4.38.0.lock        resource-aws-v5.16.2             resource-aws-v5.2.0.lock         resource-awsx-v1.0.2             resource-docker-v3.4.1.lock      resource-kubernetes-v3.16.0      resource-kubernetes-v3.19.4.lock
resource-aws-v4.38.1             resource-aws-v5.16.2.lock        resource-aws-v5.20.0             resource-awsx-v1.0.2.lock        resource-docker-v3.5.0           resource-kubernetes-v3.16.0.lock resource-kubernetes-v3.21.2
resource-aws-v4.38.1.lock        resource-aws-v5.17.0             resource-aws-v5.20.0.lock        resource-docker-v3.1.0           resource-docker-v3.5.0.lock      resource-kubernetes-v3.17.0      resource-kubernetes-v3.21.2.lock
resource-aws-v5.1.0              resource-aws-v5.17.0.lock        resource-aws-v5.33.0             resource-docker-v3.1.0.lock      resource-docker-v3.6.1           resource-kubernetes-v3.17.0.lock resource-kubernetes-v3.24.2
resource-aws-v5.1.0.lock         resource-aws-v5.18.0             resource-aws-v5.33.0.lock        resource-docker-v3.2.0           resource-docker-v3.6.1.lock      resource-kubernetes-v3.18.1      resource-kubernetes-v3.24.2.lock
resource-aws-v5.1.2              resource-aws-v5.18.0.lock        resource-aws-v5.9.0              resource-docker-v3.2.0.lock      resource-eks-v0.37.1             resource-kubernetes-v3.18.1.lock
resource-aws-v5.1.2.lock         resource-aws-v5.19.0             resource-aws-v5.9.0.lock         resource-docker-v3.4.0           resource-eks-v0.37.1.lock        resource-kubernetes-v3.18.2
resource-aws-v5.10.0             resource-aws-v5.19.0.lock        resource-aws-v5.9.1              resource-docker-v3.4.0.lock      resource-kafka-v3.3.0            resource-kubernetes-v3.18.2.lock

cat node_modules/@pulumi/eks/cni/aws-k8s-cni.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
    name: aws-node
rules:
    - apiGroups:
          - crd.k8s.amazonaws.com
      resources:
          - eniconfigs
      verbs:
          - get
          - list
          - watch
    - apiGroups: [""]
      resources:
          - pods
          - namespaces
      verbs: ["list", "watch", "get"]
    - apiGroups: [""]
      resources:
          - nodes
      verbs:
          - list
          - watch
          - get
          - update
    - apiGroups:
          - extensions
      resources:
          - "*"
      verbs:
          - list
          - watch

---
apiVersion: v1
kind: ServiceAccount
metadata:
    name: aws-node
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
    name: aws-node
roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: aws-node
subjects:
    - kind: ServiceAccount
      name: aws-node
      namespace: kube-system

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
    name: aws-node
    namespace: kube-system
    labels:
        app.kubernetes.io/name: aws-node
        app.kubernetes.io/instance: aws-vpc-cni
        k8s-app: aws-node
        app.kubernetes.io/version: "v1.11.0"
spec:
    updateStrategy:
        rollingUpdate:
            maxUnavailable: 10%
        type: RollingUpdate
    selector:
        matchLabels:
            k8s-app: aws-node
    template:
        metadata:
            labels:
                app.kubernetes.io/name: aws-node
                app.kubernetes.io/instance: aws-vpc-cni
                k8s-app: aws-node
        spec:
            priorityClassName: "system-node-critical"
            serviceAccountName: aws-node
            hostNetwork: true
            initContainers:
                - name: aws-vpc-cni-init
                  image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni-init:v1.11.0"
                  env: []
                  securityContext:
                      privileged: true
                  volumeMounts:
                      - mountPath: /host/opt/cni/bin
                        name: cni-bin-dir
            terminationGracePeriodSeconds: 10
            tolerations:
                - operator: Exists
            securityContext: {}
            containers:
                - name: aws-node
                  image: "602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.11.0"
                  ports:
                      - containerPort: 61678
                        name: metrics
                  livenessProbe:
                      exec:
                          command:
                              - /app/grpc-health-probe
                              - -addr=:50051
                              - -connect-timeout=5s
                              - -rpc-timeout=5s
                      initialDelaySeconds: 60
                      timeoutSeconds: 10
                  readinessProbe:
                      exec:
                          command:
                              - /app/grpc-health-probe
                              - -addr=:50051
                              - -connect-timeout=5s
                              - -rpc-timeout=5s
                      initialDelaySeconds: 1
                      timeoutSeconds: 10
                  env:
                      - name: ADDITIONAL_ENI_TAGS
                        value: "{}"
                      - name: AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER
                        value: "false"
                      - name: AWS_VPC_K8S_CNI_RANDOMIZESNAT
                        value: "prng"
                      - name: DISABLE_INTROSPECTION
                        value: "false"
                      - name: DISABLE_METRICS
                        value: "false"
                      - name: DISABLE_NETWORK_RESOURCE_PROVISIONING
                        value: "false"
                      - name: WARM_PREFIX_TARGET
                        value: "1"
                      - name: MY_NODE_NAME
                        valueFrom:
                            fieldRef:
                                fieldPath: spec.nodeName
                  resources:
                      requests:
                          cpu: 10m
                  securityContext:
                      capabilities:
                          add:
                              - NET_ADMIN
                  volumeMounts:
                      - mountPath: /host/opt/cni/bin
                        name: cni-bin-dir
                      - mountPath: /host/etc/cni/net.d
                        name: cni-net-dir
                      - mountPath: /host/var/log/aws-routed-eni
                        name: log-dir
                      - mountPath: /var/run/dockershim.sock
                        name: dockershim
                      - mountPath: /var/run/aws-node
                        name: run-dir
                      - mountPath: /run/xtables.lock
                        name: xtables-lock
            volumes:
                - name: cni-bin-dir
                  hostPath:
                      path: /opt/cni/bin
                - name: cni-net-dir
                  hostPath:
                      path: /etc/cni/net.d
                - name: dockershim
                  hostPath:
                      path: /var/run/dockershim.sock
                - name: log-dir
                  hostPath:
                      path: /var/log/aws-routed-eni
                      type: DirectoryOrCreate
                - name: run-dir
                  hostPath:
                      path: /var/run/aws-node
                      type: DirectoryOrCreate
                - name: xtables-lock
                  hostPath:
                      path: /run/xtables.lock
            affinity:
                nodeAffinity:
                    requiredDuringSchedulingIgnoredDuringExecution:
                        nodeSelectorTerms:
                            - matchExpressions:
                                  - key: kubernetes.io/os
                                    operator: In
                                    values:
                                        - linux
                                  - key: kubernetes.io/arch
                                    operator: In
                                    values:
                                        - amd64
                                        - arm64
                                  - key: eks.amazonaws.com/compute-type
                                    operator: NotIn
                                    values:
                                        - fargate
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
    name: eniconfigs.crd.k8s.amazonaws.com
    labels:
        app.kubernetes.io/name: aws-node
        app.kubernetes.io/instance: aws-vpc-cni
        k8s-app: aws-node
spec:
    scope: Cluster
    group: crd.k8s.amazonaws.com
    preserveUnknownFields: false
    versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
              openAPIV3Schema:
                  type: object
                  x-kubernetes-preserve-unknown-fields: true
    names:
        plural: eniconfigs
        singular: eniconfig
        kind: ENIConfig
mikhailshilkov commented 10 months ago

@rquitales Could you please take another look based on the latest info?

bitofsky commented 1 week ago

I am also facing the same issue.


In my case, it occurred when upgrading from pulumi-eks 2.30.0 to 2.40.0 (EKS v1.29).

The changes in 2.40.0 include this pull request: https://github.com/pulumi/pulumi-eks/pull/1136. This change modifies the exec args in the EKS kubeconfig.

If you have an EKS cluster created with pulumi-eks 2.30.0 or below and then run pulumi up after upgrading to pulumi-eks 2.40.0, the issue reproduces.

Although I am not sure of the exact root cause, it seems that the provider replacement triggered by the kubeconfig change causes EKS resources created under the old Kubernetes version to be treated as outdated. Consequently, Pulumi tries to rewrite resources such as aws-node and the addons using deprecated Kubernetes API versions.

Here’s how I worked around this issue while upgrading pulumi-eks:

  1. Downgrade to pulumi-eks 2.30.0 or below.
  2. Export the Pulumi stack: pulumi stack export --file x.json
  3. Find and replace every occurrence of the first string below with the second (see the sketch after this list):
    • args\":[\"eks\",\"get-token\",\"--cluster-name\",\"xxx\"]
    • args\":[\"eks\",\"get-token\",\"--cluster-name\",\"xxx\",\"--output\",\"json\"]
  4. Import the modified stack: pulumi stack import --file x.json
  5. Upgrade to pulumi-eks 2.71.0.
  6. Run pulumi up.
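Here is a rough sketch of steps 2-4 as shell commands. "my-cluster" stands in for the xxx placeholder above; sed -i.bak keeps a backup of the exported state, and you should review the change before importing:

pulumi stack export --file x.json

# Append --output json to the kubeconfig exec args embedded (JSON-escaped) in the stack state.
sed -i.bak 's|args\\":\[\\"eks\\",\\"get-token\\",\\"--cluster-name\\",\\"my-cluster\\"\]|args\\":[\\"eks\\",\\"get-token\\",\\"--cluster-name\\",\\"my-cluster\\",\\"--output\\",\\"json\\"]|g' x.json

pulumi stack import --file x.json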

By doing this, the provider change is eliminated and the cluster's Kubernetes version is recognized correctly, so the issue does not occur. As long as the provider replacement is avoided, the current Kubernetes version is detected correctly during addon installation/removal, and everything functions properly.

I suspect that this issue is related to the provider change, but I am still unsure of the exact cause. For now, I am using this workaround.