aws-ia / terraform-aws-eks-blueprints-addons

Terraform module which provisions addons on Amazon EKS clusters
https://aws-ia.github.io/terraform-aws-eks-blueprints-addons/main/
Apache License 2.0

[Karpenter] AWS EKS Access Entry for Karpenter role #389

Open LucasRejanio opened 7 months ago

LucasRejanio commented 7 months ago

Description

I am creating an EKS cluster using the official AWS module and installing addons and tools using eks-blueprints-addons. Everything was going well, but when I needed to test Karpenter it wasn't working correctly.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully is the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

Reproduction Code [Required]

Steps to reproduce the behavior:

I just created my cluster with a node group and tried running Karpenter with the configuration below (see additional context). I'm not using a local cache or workspace either.

Expected behaviour

Karpenter is installed correctly by the module, and I was able to verify it by scaling new nodes. These nodes must join my cluster so that my new resources and applications can be scheduled on them.

Actual behaviour

Karpenter is installed correctly by the module, and I was able to view and test it by scaling new nodes. However, the new instances never join the cluster. This is happening due to the lack of an access entry for the Karpenter node role.

Solution

My team and I resolved this problem using the aws_eks_access_entry resource. Example:

resource "aws_eks_access_entry" "karpenter" {
  cluster_name  = module.eks.cluster_name
  principal_arn = module.eks_blueprints_addons.karpenter.node_iam_role_arn
  tags          = local.tags
  type          = "EC2_LINUX"
}
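
As far as I understand the access entry API, entries of type EC2_LINUX cannot have access policies associated with them; the entry type itself is what authorizes the node's kubelet to join the cluster, so the single resource above is enough.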

Terminal Output Screenshot(s)

Additional context

Karpenter configuration:

  enable_karpenter                           = true
  karpenter_enable_spot_termination          = true
  karpenter_enable_instance_profile_creation = true
  karpenter_sqs                              = true
  karpenter_node = {
    iam_role_use_name_prefix = false
  }
  karpenter = {
    set = [
      {
        name  = "clusterName"
        value = module.eks.cluster_name
      },
      {
        name  = "clusterEndpoint"
        value = module.eks.cluster_endpoint
      },
      {
        name  = "controller.resources.requests.cpu"
        value = "1"
      },
      {
        name  = "controller.resources.requests.memory"
        value = "1Gi"
      },
      {
        name  = "controller.resources.limits.cpu"
        value = "1"
      },
      {
        name  = "controller.resources.limits.memory"
        value = "1Gi"
      },
    ]
  }

askulkarni2 commented 7 months ago

Assuming that you are not providing the aws_auth_roles config map in your EKS config, this is expected behavior. See our Karpenter blueprint, where we show that you have to provide the aws_eks_access_entry resource.
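
For readers still on the aws-auth path, the mapping referred to above looks roughly like the sketch below. This is only an illustration; the exact input name, and whether it lives on the main EKS module or a sub-module, depends on your terraform-aws-eks version:

# Sketch only: legacy aws-auth ConfigMap mapping for the Karpenter node role,
# assuming your terraform-aws-eks version still manages the aws-auth ConfigMap.
aws_auth_roles = [
  {
    rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
    username = "system:node:{{EC2PrivateDNSName}}"
    groups   = ["system:bootstrappers", "system:nodes"]
  },
]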

We are looking at how we can improve the user experience for this module and may incorporate this in our next milestone release.

LucasRejanio commented 7 months ago

@askulkarni2 Thanks for your response. I believe we can improve this dependency. I'm happy to contribute to the project in terms of user experience ;)

lieberlois commented 12 hours ago

@askulkarni2 What do you think about adding this as an option? @Christoph-Raab and I implemented it like this and were a bit surprised not to see it in the upstream blueprints, especially since it is part of the official Karpenter module.

I think we could just add something like this:

resource "aws_eks_access_entry" "node" {
  count = var.karpenter_enable_access_entry ? 1 : 0

  cluster_name  = "..."
  principal_arn = "..."

  type = "EC2_LINUX"
}
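
For completeness, a rough sketch of the variable such a flag would rely on; the name is taken from the snippet above, while the description and default are assumptions:

# Hypothetical variable backing the count above; description and default are assumptions.
variable "karpenter_enable_access_entry" {
  description = "Determines whether an EKS access entry is created for the Karpenter node IAM role"
  type        = bool
  default     = true
}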