clowdhaus / terraform-aws-eks-migrate-v19-to-v20

What it says on the tin

Migrate EKS module on aws provider 5.59.0 #7

Open Mieszko96 opened 2 months ago

Mieszko96 commented 2 months ago

Describe the bug
Hey, I was testing the upgrade procedure on AWS provider 5.57.0 and it worked more or less fine, I just needed to run terraform apply three times:

  1. 19.21 -> migrate
  2. migrate -> 20.0: access_entries were created, but the policy was not applied
  3. 20.0 -> 20.0: applying again with no changes added the policy

That was more or less fine, but I had to switch priority to a different subject, and in the meantime we upgraded the AWS provider to 5.59.0. Now this migration no longer works for me.

When upgrading from 19.21 to the migrate module, I'm getting errors in terraform plan:

│ 
│   with helm_release.cert_manager,
│   on cert_manager.tf line 8, in resource "helm_release" "cert_manager":
│    8: resource "helm_release" "cert_manager" {
│ 
╵
╷
│ Error: Get "http://localhost/api/v1/namespaces/velero": dial tcp [::1]:80: connect: connection refused
│ 
│   with kubernetes_namespace.velero,
│   on velero.tf line 65, in resource "kubernetes_namespace" "velero":
│   65: resource "kubernetes_namespace" "velero" {
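These "localhost ... connection refused" errors typically show up when the kubernetes and helm providers are configured from EKS module outputs that become unknown during the plan (because the cluster is being replaced), so the providers fall back to localhost. A minimal sketch of that kind of provider wiring, assuming the standard module outputs and the exec-based auth pattern (not taken from this report):

provider "kubernetes" {
  # Connection details come from the EKS module; if the plan marks the
  # cluster for replacement these become unknown and the provider
  # silently defaults to localhost, producing the errors above.
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

The helm provider is usually wired the same way inside its kubernetes {} block.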
Further down, the plan wants to replace the cluster:

module.eks.aws_eks_cluster.this[0] must be replaced
+/- resource "aws_eks_cluster" "this" {
      ~ arn                           = "test" -> (known after apply)
      ~ certificate_authority         = [
          - {
              - data = "hided"
            },
        ] -> (known after apply)
      + cluster_id                    = (known after apply)
      ~ created_at                    = "2024-07-31 08:59:06.64 +0000 UTC" -> (known after apply)
      - enabled_cluster_log_types     = [] -> null
      ~ endpoint                      = "test" -> (known after apply)
      ~ id                            = "test" -> (known after apply)
      ~ identity                      = [
          - {
              - oidc = [
                  - {
                      - issuer = "test"
                    },
                ]
            },
        ] -> (known after apply)
        name                          = "test"
      ~ platform_version              = "eks.16" -> (known after apply)
      ~ status                        = "ACTIVE" -> (known after apply)
      ~ tags                          = {
          + "terraform-aws-modules" = "eks"
        }
      ~ tags_all                      = {
          + "terraform-aws-modules" = "eks"
            # (10 unchanged elements hidden)
        }
        # (3 unchanged attributes hidden)

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

Specifically this part:

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

To Reproduce

  1. Install EKS module 19.21 using AWS provider 5.59.0
  2. Update the EKS module to source = "github.com/clowdhaus/terraform-aws-eks-v20-migrate.git?ref=3f626cc493606881f38684fc366688c36571c5c5" (see the sketch below)
  3. Run terraform init/plan
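For reference, a minimal sketch of the source swap in step 2, assuming an existing module block named "eks" (the block name and the inputs shown are placeholders; only the source changes, all other inputs stay as they were on 19.21):

module "eks" {
  # Point the existing v19.21 module block at the migration ref.
  source = "github.com/clowdhaus/terraform-aws-eks-v20-migrate.git?ref=3f626cc493606881f38684fc366688c36571c5c5"

  # Placeholder inputs; keep your existing configuration unchanged.
  cluster_name    = "example"
  cluster_version = "1.29"
  # ...
}

Note that a git source takes no version argument, so any existing version = "19.21.0" line has to be removed when switching to the migration ref.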
Mieszko96 commented 2 months ago

And it only happens when the cluster was initially created on AWS provider 5.58.0 or higher.

From my point of view this is not a problem, since all my important clusters were created before that, but I still think this module needs an update, if that's possible, because in the AWS provider they changed this:

bootstrap_cluster_creator_admin_permissions is now set to true by default, not false as it was before

bryantbiggs commented 2 months ago

bootstrap_cluster_creator_admin_permissions is now set to true by default, not false as it was before

bootstrap_cluster_creator_admin_permissions is not available on version v19.21 or anything less than v20.0 of the EKS module, so I'm not following what this issue is describing

Mieszko96 commented 2 months ago

This PR changed the default value for brand new clusters: https://github.com/hashicorp/terraform-provider-aws/pull/38295. Because of it, when using your module to migrate a cluster, Terraform wants to recreate the cluster:

      ~ access_config {
          ~ authentication_mode                         = "CONFIG_MAP" -> "API_AND_CONFIG_MAP"
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }

If you don't believe me:

  1. Create a cluster from scratch using EKS module 19.21 and AWS provider 5.58.0 or higher
  2. Use your migration procedure

I'm not sure if this can be fixed in this repo, or only in the AWS provider.
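For context, the attribute in question lives on the raw aws_eks_cluster resource that the module wraps. A minimal sketch of that resource (names and referenced values are illustrative, not from this thread), showing where the provider-side default changed:

resource "aws_eks_cluster" "example" {
  name     = "example"
  role_arn = aws_iam_role.cluster.arn   # placeholder IAM role

  vpc_config {
    subnet_ids = var.subnet_ids          # placeholder subnets
  }

  access_config {
    authentication_mode = "API_AND_CONFIG_MAP"
    # Per the provider PR linked above, clusters created on v5.58.0+ record
    # this as true when it is not set explicitly, while the migration/v20
    # module hardcodes false; the argument cannot be changed in place, so
    # the mismatch shows up as "forces replacement" in the plan.
    bootstrap_cluster_creator_admin_permissions = true
  }
}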

viachaslauka commented 2 weeks ago

We have the same behaviour. Clusters were provisioned before v5.58.0, the provider version was later upgraded (it is currently v5.68.0), and now updating to v20.x causes the cluster to be re-created due to the hardcoded bootstrap_cluster_creator_admin_permissions:

      ~ access_config {
          ~ bootstrap_cluster_creator_admin_permissions = true -> false # forces replacement
        }