Open LucasRejanio opened 7 months ago
Assuming that you are not providing the aws_auth_roles config map in your EKS config, this is expected behavior. See our Karpenter blueprint, where we show that you have to provide the aws_eks_access_entry resource.
We are looking at how we can improve the user experience for this module and may incorporate this in our next milestone release.
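For context, the aws-auth route mentioned above could look roughly like the sketch below. This assumes the cluster is built with the terraform-aws-modules/eks module (v19-style inputs) and that the addons module exposes the Karpenter node IAM role ARN under the output path shown; both are assumptions to verify against your module versions.

```hcl
# Sketch only: arguments that would go inside your existing `module "eks"` block
# to map the Karpenter node role into the aws-auth ConfigMap.
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      # Assumed output path for the role Karpenter assigns to launched nodes
      rolearn  = module.eks_blueprints_addons.karpenter.node_iam_role_arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }
  ]
```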
@askulkarni2 Thanks for your response. I believe we can improve this dependency. I'm happy to contribute to the project in terms of user experience ;)
@askulkarni2 What do you think about adding this as an option? @Christoph-Raab and I implemented it like this, and were a bit surprised not to see it in the upstream blueprints, especially since it is part of the official Karpenter module.
I think we could just add something like this:
resource "aws_eks_access_entry" "node" {
count = var.karpenter_enable_access_entry ? 1 : 0
cluster_name = "..."
principal_arn = "..."
type = "EC2_LINUX"
}
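For completeness, a hedged sketch of the flag the snippet above assumes; the variable name follows the suggestion, but the description and default are assumptions rather than an existing module input:

```hcl
variable "karpenter_enable_access_entry" {
  description = "Whether to create an EKS access entry for the Karpenter node IAM role"
  type        = bool
  default     = true
}
```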
Description
I am creating an EKS cluster using the official AWS module, and installing addons and tools using eks-blueprints-addons. Everything was going well, but when I needed to test Karpenter it wasn't working correctly.

⚠️ Note
Before you submit an issue, please perform the following first:
1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
2. Re-run terraform init
Versions
Module version [Required]: 1.16.2
Terraform version: >= 1.4.1
Provider version(s): >= 1.4.1
Reproduction Code [Required]
Steps to reproduce the behavior:
Hmm, I just created my cluster with a node group and tried running Karpenter with the configuration below (see Additional context). I'm not using a local cache or workspaces either.
Expected behaviour
Karpenter is installed correctly by the module, and I was able to verify and test it by scaling new nodes. These new nodes must join my cluster so that my new resources and applications can run on them.
Actual behaviour
Karpenter is installed correctly by the module, and I was able to verify and test it by scaling new nodes. However, it cannot join these new instances to the cluster. This is happening because of the missing access entry for the Karpenter node role.
Solution
My team and I resolved this problem using the aws_eks_access_entry resource; a sketch of what we did is shown below.
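A hedged sketch of this workaround, assuming the cluster comes from the official terraform-aws-modules/eks module and that the addons module exposes the Karpenter node IAM role ARN under the output path shown (both names are assumptions; check the outputs in your setup):

```hcl
# Sketch only: register the role used by Karpenter-launched nodes as an
# EKS access entry so the new instances can join the cluster.
resource "aws_eks_access_entry" "karpenter_node" {
  cluster_name  = module.eks.cluster_name
  principal_arn = module.eks_blueprints_addons.karpenter.node_iam_role_arn
  type          = "EC2_LINUX"
}
```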
Additional context
Karpenter configuration:
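As an illustrative placeholder only (not the configuration from this issue), a minimal eks-blueprints-addons block enabling Karpenter might look like this; the version pin and output references are assumptions:

```hcl
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.16" # assumed pin matching the module version above

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # Install Karpenter via the addons module
  enable_karpenter = true
}
```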