Closed SohamChakraborty closed 1 year ago
I created the role manually and it has plenty more permissions than the role my Terraform code generated. I think the problem is that not all permissions are being created here.
You need to tell the IRSA module which permissions to add. You have commented out `# vpc_cni_enable_ipv6 = true`, and I don't see a `vpc_cni_enable_ipv4 = true` either, so it looks like the role doesn't have any CNI permissions attached. Add the permissions and the issue should be resolved.
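A minimal sketch of what that could look like, assuming the role comes from the `iam-role-for-service-accounts-eks` submodule of `terraform-aws-modules/iam` (the module label, role name, and the OIDC provider reference are illustrative, not taken from the reporter's code):

```hcl
module "vpc_cni_irsa" {
  source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"

  role_name = "vpc-cni-irsa" # illustrative name

  # Attach the VPC CNI policy and grant the IPv4 permissions
  attach_vpc_cni_policy = true
  vpc_cni_enable_ipv4   = true
  # vpc_cni_enable_ipv6 = true  # only for clusters running in IPv6 mode

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["kube-system:aws-node"]
    }
  }
}
```

Without `vpc_cni_enable_ipv4` (or `_ipv6`) set, the role is created but carries no CNI permissions, which matches the symptom described below.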
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
I am trying to spin up an EKS cluster following the documentation and it fails with this error:
I have identified the problem to be with this:
As you can see, the `aws-node` pod is in CrashLoopBackOff state. Looking into the pod, I see the following error, which is likely the reason:

Versions
Module version [Required]: version 19.0
Terraform version:
Provider version(s):
Reproduction Code [Required]
Steps to reproduce the behavior:
No
Yes
Terraform init, terraform plan, terraform apply

Expected behavior

The cluster should be created without error.

Actual behavior

Getting

```
╷
│ Error: unexpected EKS Add-On (eks-managed-nodegroup:coredns) state returned during creation: timeout while waiting for state to become 'ACTIVE' (last state: 'CREATING', timeout: 20m0s)
│ [WARNING] Running terraform apply again will remove the kubernetes add-on and attempt to create it again effectively purging previous add-on configuration
│
│ with module.eks.aws_eks_addon.this["coredns"],
│ on .
```
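For context, the `coredns` timeout here is a downstream symptom: `coredns` cannot become ACTIVE while `aws-node` is crashing, because no pod networking comes up. A sketch of one common arrangement with the `terraform-aws-modules/eks` module v19, assuming an IRSA role for the CNI already exists (the `module.vpc_cni_irsa` reference is illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... cluster name, VPC configuration, managed node groups, etc.

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    vpc-cni = {
      most_recent    = true
      before_compute = true # deploy the CNI add-on before nodes join
      # illustrative: an IRSA role that actually carries the CNI permissions
      service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
    }
  }
}
```

This is not a drop-in fix for the report above; the key points are `before_compute` on the `vpc-cni` add-on and attaching a service account role that has the VPC CNI permissions, so `aws-node` starts cleanly and `coredns` can schedule.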