Closed: silasdavis closed 3 months ago
@silasdavis Apologies that you're facing the described issue, and thanks for the details and screenshots. There is a paragraph in the link that describes how to resolve this, specifically:
> In both cases, you are likely to want to configure IAM roles for your worker nodes explicitly, which can be supplied to your EKS cluster using the `instanceRole` or `instanceRoles` properties.
The nodegroups are not attached because the AWS auth ConfigMap in the EKS cluster is not updated to contain the nodegroup role you created for the `NodeGroupV2` resource. This prevents the nodegroups from communicating with the EKS cluster's API server. The fix is to update the cluster resource, for example:
```typescript
const cluster = new eks.Cluster(stackName, {
    createOidcProvider: true,
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeGroupRole], // <- the required change here
});
```
Thanks for pointing out the issue with the code example in the provided link. I'll add this to our backlog to update!
Ah, I was coming back here to say that, having looked at some examples in this repo. I did think it was a bit odd to have that `undefined` in there, given that it would almost certainly be interpreted the same as an omission by most sane libraries, but I thought that if I copied it verbatim it ought to work.
I think I probably skimmed:

> are likely to want to configure IAM roles for your worker nodes explicitly

but ignored it. The phrasing makes it sound like, if you don't bother with the roles, things might work automagically.
Is it not in fact mandatory to define these roles if you want the node group to attach to the cluster?
If you're using the AWS auth ConfigMap for controlling access to clusters, then yes, it is mandatory to list these roles in the `instanceRoles` field of the cluster to ensure the nodegroups are properly attached.
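To make the full wiring concrete, here is a minimal sketch of the ConfigMap-based approach: a worker-node IAM role with the managed policies EC2 nodes typically need, passed both to the cluster (via `instanceRoles`, so it lands in the aws-auth ConfigMap) and to the node group's instance profile. All resource names and sizing values here are illustrative assumptions, not taken from the original program.

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// Hypothetical IAM role the worker node EC2 instances will assume.
const nodeGroupRole = new aws.iam.Role("node-group-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});

// Managed policies commonly required for nodes to join an EKS cluster.
const managedPolicyArns = [
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
];
managedPolicyArns.forEach((arn, i) => {
    new aws.iam.RolePolicyAttachment(`node-group-role-policy-${i}`, {
        role: nodeGroupRole,
        policyArn: arn,
    });
});

// Listing the role in instanceRoles is what gets it into the aws-auth
// ConfigMap, so instances assuming it can register with the API server.
const cluster = new eks.Cluster("example-cluster", {
    skipDefaultNodeGroup: true,
    instanceRoles: [nodeGroupRole],
});

// The same role backs the instance profile used by the manual node group.
const nodeGroup = new eks.NodeGroupV2("example-node-group", {
    cluster: cluster,
    instanceType: "t3.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
    instanceProfile: new aws.iam.InstanceProfile("node-instance-profile", {
        role: nodeGroupRole,
    }),
});
```

The key point is that the role appears in two places: once on the cluster (for the auth mapping) and once on the node group (for the instances themselves); omitting the former is exactly what leaves the nodes unable to join.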
Note that as of v2.6.0 of the EKS provider, we now support EKS clusters that use Access Entries for authentication instead. Here is a migration guide we have about migrating from ConfigMaps to Access Entries. With the Access Entries method, you should be able to just create the nodegroups and roles without updating the `eks.Cluster` resource, and it should just automagically work.
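For comparison, a minimal sketch of the Access Entries approach might look like the following. The cluster is created with API-based authentication, and no `instanceRoles` field is needed; the resource names, enum value, and sizing are assumptions based on the v2.6.0+ provider, so check the migration guide for the exact supported options.

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// With Access Entries, the cluster authenticates callers via the EKS API
// rather than the aws-auth ConfigMap, so instanceRoles is not required.
const cluster = new eks.Cluster("example-cluster", {
    skipDefaultNodeGroup: true,
    authenticationMode: eks.AuthenticationMode.Api,
});

// Hypothetical node role and instance profile, created independently of
// the cluster resource; no cluster update is needed to attach the nodes.
const nodeGroupRole = new aws.iam.Role("node-group-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Principal: { Service: "ec2.amazonaws.com" },
            Action: "sts:AssumeRole",
        }],
    }),
});

const nodeGroup = new eks.NodeGroupV2("example-node-group", {
    cluster: cluster,
    instanceType: "t3.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
    instanceProfile: new aws.iam.InstanceProfile("node-instance-profile", {
        role: nodeGroupRole,
    }),
});
```

The design difference from the ConfigMap approach is that the provider can create the access entry for the node role on its own, which removes the cross-reference between the node group's role and the cluster resource.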
What happened?
Following the part of this guide that describes how to manually define a node group via `NodeGroupV2`: https://www.pulumi.com/docs/clouds/aws/guides/eks/ I find that my cluster gets created, but no node groups show up in the Compute tab of the AWS console:
I am running the following Pulumi program:
Output of `pulumi about`:

```
CLI
Version      3.120.0
Go Version   go1.22.4
Go Compiler  gc

Host
OS       nixos
Version  24.11 (Vicuna)
Arch     x86_64

Backend
Name           walter
URL            s3://inco-pulumi-state?region=eu-north-1
User           silas
Organizations
Token type     personal

Pulumi locates its logs in /tmp by default
```
Additional context
View from the top:
This is after waiting for the EC2 instances to be up and pass their status checks:
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).