What happened?

Adding a ManagedNodeGroup to a cluster with API authentication mode fails because there's verification logic that expects the role for the EC2 instances to be present in the instanceRoles list of the cluster:
```
pulumi:pulumi:Stack custom-managed-nodegroup-p-it-florians-m-custom-man-521e574c **failed** 1 error; 2 warnings; 2 messages
Diagnostics:
pulumi:pulumi:Stack (custom-managed-nodegroup-p-it-florians-m-custom-man-521e574c):
warning: using pulumi-resource-eks from $PATH at /Users/flo/development/pulumi-eks/bin/pulumi-resource-eks
error: Running program '/private/var/folders/qb/q9rbqmxn1jqdfps720v2s9h80000gn/T/p-it-florians-m-custom-man-521e574c-1639103563/' failed with an unhandled exception:
Error: A managed node group cannot be created without first setting its role in the cluster's instanceRoles
```
If the necessary authentication configuration for the EC2 instances was added as access entries instead, this verification fails even though the node group is correctly authorized. When the cluster supports access entries, we shouldn't execute this check at all: unlike the legacy aws-auth ConfigMap approach, access entries can be added out of band, which makes the check obsolete in that case.
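A minimal sketch of the proposed guard (the type and function name here are illustrative, not the actual pulumi-eks internals): the instanceRoles verification would only run for clusters that rely solely on the legacy aws-auth ConfigMap, and would be skipped whenever access entries are supported.

```typescript
// The three authentication modes an EKS cluster can be created with.
type AuthenticationMode = "CONFIG_MAP" | "API" | "API_AND_CONFIG_MAP";

// Hypothetical helper: returns true only when the cluster relies solely on
// the legacy aws-auth ConfigMap, i.e. the instanceRoles check is still
// meaningful. "API" and "API_AND_CONFIG_MAP" clusters support access
// entries, which can authorize node roles out of band, so the check is
// skipped for them.
function requiresInstanceRoleCheck(mode: AuthenticationMode): boolean {
    return mode === "CONFIG_MAP";
}
```

With such a guard in place, the repro below would no longer throw, because the cluster is created with `authenticationMode: eks.AuthenticationMode.API`.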
Example
```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

const managedPolicyArns: string[] = [
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
];

// Creates a role and attaches the EKS worker node IAM managed policies.
function createRole(name: string): aws.iam.Role {
    const role = new aws.iam.Role(name, {
        assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
            Service: "ec2.amazonaws.com",
        }),
    });

    let counter = 0;
    for (const policy of managedPolicyArns) {
        // Create RolePolicyAttachment without returning it.
        const rpa = new aws.iam.RolePolicyAttachment(`${name}-policy-${counter++}`,
            { policyArn: policy, role: role },
        );
    }

    return role;
}

// IAM role for the node group.
const instanceRole = createRole("example-instance-role");

// Create a new VPC.
const eksVpc = new awsx.ec2.Vpc("eks-vpc", {
    enableDnsHostnames: true,
    cidrBlock: "10.0.0.0/16",
});

// Create an EKS cluster.
const cluster = new eks.Cluster("example-managed-nodegroup", {
    skipDefaultNodeGroup: true,
    deployDashboard: false,
    vpcId: eksVpc.vpcId,
    // Public subnets will be used for load balancers.
    publicSubnetIds: eksVpc.publicSubnetIds,
    // Private subnets will be used for cluster nodes.
    privateSubnetIds: eksVpc.privateSubnetIds,
    authenticationMode: eks.AuthenticationMode.API,
    accessEntries: {
        instanceRole: {
            principalArn: instanceRole.arn,
            type: eks.AccessEntryType.EC2_LINUX,
        },
    },
});

// Export the cluster's kubeconfig.
export const kubeconfig = cluster.kubeconfig;

// Look up the recommended EKS-optimized AMI for the cluster's Kubernetes version.
const ami = pulumi.interpolate`/aws/service/eks/optimized-ami/${cluster.core.cluster.version}/amazon-linux-2/recommended/image_id`
    .apply(name => aws.ssm.getParameter({ name }, { async: true }))
    .apply(result => result.value);

const launchTemplate = new aws.ec2.LaunchTemplate("managed-ng-launchTemplate", {
    blockDeviceMappings: [
        {
            deviceName: "/dev/xvda",
            ebs: {
                volumeSize: 20,
                volumeType: "gp3",
                deleteOnTermination: "true",
                encrypted: "true",
            },
        },
    ],
    // `userdata` is a local helper module (not shown here).
    userData: userdata.createUserData(cluster.core.cluster, "--kubelet-extra-args --node-labels=mylabel=myvalue"),
    metadataOptions: { httpTokens: "required", httpPutResponseHopLimit: 2, httpEndpoint: "enabled" },
    imageId: ami,
});

export const launchTemplateName = launchTemplate.name;

// Create a simple AWS managed node group using a cluster as input and the
// refactored API.
const managedNodeGroup = eks.createManagedNodeGroup("example-managed-ng", {
    cluster: cluster,
    nodeRole: instanceRole,
    launchTemplate: {
        id: launchTemplate.id,
        version: pulumi.interpolate`${launchTemplate.latestVersion}`,
    },
});
```
Output of pulumi about
n/a
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).