terraform-aws-modules / terraform-aws-eks

Terraform module to create Amazon Elastic Kubernetes Service (EKS) resources 🇺🇦
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws
Apache License 2.0

upgrade cluster from 1.30 to 1.31 #3192

Open karmops opened 1 week ago

karmops commented 1 week ago
module "eks-dev" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.26.0"

  cluster_version = "1.31" # just changed from 1.30 to 1.31
  ...
}

I deleted the .terraform folder and re-ran the apply, but I still end up with the same error: InvalidParameterException: Cluster has incorrect Identity Provider URL configuration. The Identity Provider URL cannot be the same as the OpenID Connect (OIDC) issuer URL. Please fix the Identity Provider configuration before updating the cluster.
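The error text suggests that an OIDC identity provider config associated with the cluster points at the cluster's own OIDC issuer URL. A minimal sketch of how to compare the two values, assuming hypothetical cluster/config names and placeholder URLs (the `aws eks` calls shown in comments are the real CLI commands to fetch them):

```shell
# In practice, fetch the two URLs with (cluster/config names are placeholders):
#   aws eks describe-cluster --name my-cluster \
#     --query 'cluster.identity.oidc.issuer' --output text
#   aws eks describe-identity-provider-config --cluster-name my-cluster \
#     --identity-provider-config type=oidc,name=my-oidc-config \
#     --query 'identityProviderConfig.oidc.issuerUrl' --output text

# Placeholder values illustrating the conflicting case the API rejects:
cluster_issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
idp_issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# The upgrade is blocked when the associated identity provider URL equals
# the cluster's own OIDC issuer URL:
if [ "$cluster_issuer" = "$idp_issuer" ]; then
  echo "CONFLICT: identity provider URL matches the cluster OIDC issuer"
else
  echo "OK: URLs differ"
fi
```

If the two values match, that association, not the version bump itself, is what the upgrade call is rejecting.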

bryantbiggs commented 1 week ago

That's not the full configuration though

karmops commented 1 week ago
module "eks-dev" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.26.0"

  cluster_name    = "..."
  cluster_version = "1.30"

  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
      configuration_values = jsonencode({
        env = {
          ENABLE_PREFIX_DELEGATION = "true"
          WARM_PREFIX_TARGET       = "1"
        }
      })
      timeouts = {
        create = "30m"
        update = "30m"
        delete = "30m"
      }
    }
    aws-ebs-csi-driver = {
      most_recent = true
      service_account_role_arn = module.ebs_csi_irsa_role.iam_role_arn
    }
  }

  vpc_id     = "..."
  subnet_ids = [
    "...",
  ]

  create_kms_key = false
  cluster_encryption_config = {
    provider_key_arn = aws_kms_key.eks.arn
    resources        = ["secrets"]
  }

  # EKS Managed Node Group(s)
  eks_managed_node_groups = {
    arm_core = {
      name = "arm-core-services"
      instance_types = ["m7g.xlarge"]
      capacity_type  = "SPOT"
      ami_type       = "AL2023_ARM_64_STANDARD"

      min_size     = 2
      max_size     = 4
      desired_size = 2

      disk_size = 60

      labels = {
        nodegroup-type = "arm-core-services-spot"
      }
    }

    arm_applications = {
      name = "arm-applications"
      instance_types = ["m7g.xlarge"]
      capacity_type  = "SPOT"
      ami_type       = "AL2023_ARM_64_STANDARD"

      min_size     = 3
      max_size     = 4
      desired_size = 3

      disk_size = 60

      labels = {
        nodegroup-type = "arm-applications-spot"
      }
    }
  }

  //enable_cluster_creator_admin_permissions = true
  cluster_enabled_log_types = [
    "audit",
  ]

  tags = {
    Environment = "dev"
    Agent       = "terraform"
  }
}
bryantbiggs commented 1 week ago

That doesn't reproduce the issue

wanglinsong commented 3 days ago

I'm seeing the same issue using Pulumi. Just before this, I had successfully upgraded the EKS version from 1.28 to 1.29, and then from 1.29 to 1.30, minutes earlier.

Do you want to perform this update? details
  pulumi:pulumi:Stack: (same)
    [urn=urn:pulumi:deploy-infra::presto-deploy-infra::pulumi:pulumi:Stack::presto-deploy-infra-deploy-infra]
    > aws-native:route53:HostedZone: (read)
        [id=Z03396742SLWB6XWEZO6M]
        [urn=urn:pulumi:deploy-infra::presto-deploy-infra::aws-native:route53:HostedZone::existing-hosted-zone]
        [provider=urn:pulumi:deploy-infra::presto-deploy-infra::pulumi:providers:aws-native::default_0_99_0::2ccb8690-9950-410a-9c9b-4ded3bbc5f73]
        ~ aws:eks/cluster:Cluster: (update)
            [id=deploy-infra-eksCluster-c1c221f]
            [urn=urn:pulumi:deploy-infra::presto-deploy-infra::eks:index:Cluster$aws:eks/cluster:Cluster::deploy-infra-eksCluster]
            [provider=urn:pulumi:deploy-infra::presto-deploy-infra::pulumi:providers:aws::default_5_31_0::cf7ef067-21a9-4432-9031-6becd1e697f7]
          ~ version: "1.30" => "1.31"

Do you want to perform this update? yes
Updating (deploy-infra)


View in Browser (Ctrl+O): https://app.pulumi.com/ibm-data-ai/presto-deploy-infra/deploy-infra/updates/155

     Type                   Name                              Status                  Info
     pulumi:pulumi:Stack    presto-deploy-infra-deploy-infra  **failed**              2 errors
     └─ eks:index:Cluster   deploy-infra
 ~      └─ aws:eks:Cluster  deploy-infra-eksCluster           **updating failed**     [diff: ~version]; 1 error

Diagnostics:
  pulumi:pulumi:Stack (presto-deploy-infra-deploy-infra):
    error: update failed
    error: eks:index:Cluster resource 'deploy-infra' has a problem: grpc: the client connection is closing

  aws:eks:Cluster (deploy-infra-eksCluster):
    error: 1 error occurred:
        * updating urn:pulumi:deploy-infra::presto-deploy-infra::eks:index:Cluster$aws:eks/cluster:Cluster::deploy-infra-eksCluster: 1 error occurred:
        * updating EKS Cluster (deploy-infra-eksCluster-c1c221f) version: InvalidParameterException: Cluster has incorrect Identity Provider URL configuration. The Identity Provider URL cannot be the same as the OpenID Connect (OIDC) issuer URL. Please fix the Identity Provider configuration before updating the cluster.
    {
      RespMetadata: {
        StatusCode: 400,
        RequestID: "f9a2abc5-192d-4ddc-bab2-bf2d9a800b23"
      },
      Message_: "Cluster has incorrect Identity Provider URL configuration. The Identity Provider URL cannot be the same as the OpenID Connect (OIDC) issuer URL. Please fix the Identity Provider configuration before updating the cluster."
    }
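If a stray identity provider association is indeed the cause (an assumption based on the error text, not confirmed in this thread), the EKS API provides a disassociation call. A sketch that only prints the command for review, since actually running it requires AWS credentials and the real config name; the cluster name is taken from the log above and the config name is a hypothetical placeholder:

```shell
# Hypothetical names; substitute your own. The printed command is not executed here.
cluster="deploy-infra-eksCluster-c1c221f"  # from the Pulumi log above
config_name="offending-oidc-config"        # placeholder; discover it first with:
#   aws eks list-identity-provider-configs --cluster-name "$cluster"

# Print the disassociation command for review before running it manually:
echo aws eks disassociate-identity-provider-config \
  --cluster-name "$cluster" \
  --identity-provider-config "type=oidc,name=$config_name"
```

After the disassociation update completes, the version upgrade should no longer hit the Identity Provider URL validation.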
paulkiernan commented 1 day ago

Having the same issue here.