Closed by automagic 3 months ago
+1 this is very limiting.
Thanks for reporting this @automagic! I'm sorry you're running into this, I'll start digging into it right away.
@automagic I tried reproducing it in both TypeScript and Python, but running this doesn't trigger a replacement for me. After the first `pulumi up` I uncommented the line about `authenticationMode`, and no changes were shown on the following `pulumi up` runs.
Could you provide an example of what triggers this behavior for you?
I actually managed to reproduce it now. This occurs when a cluster was created with a version before v2.7.4 and the `authenticationMode` is then changed as described above with v2.7.4+. The root cause is the bi-modal behavior in upstream I mentioned here: https://github.com/pulumi/pulumi-aws/issues/3997#issuecomment-2223201333.
As a workaround you can use transformations to ignore changes to the `bootstrapClusterCreatorAdminPermissions` parameter. Example for Python:
```python
import pulumi
import pulumi_eks as eks

def transform(args: pulumi.ResourceTransformArgs):
    if args.type_ == "aws:eks/cluster:Cluster":
        return pulumi.ResourceTransformResult(
            props=args.props,
            opts=pulumi.ResourceOptions.merge(args.opts, pulumi.ResourceOptions(
                ignore_changes=["accessConfig.bootstrapClusterCreatorAdminPermissions"],
            )))

# Create an EKS cluster with the default configuration.
cluster1 = eks.Cluster("auth-mode-migration", skip_default_node_group=True,
    authentication_mode="CONFIG_MAP", opts=pulumi.ResourceOptions(transforms=[transform])
)
```
I'll check if we can solve this issue in a similar way by ignoring changes to the `bootstrapClusterCreatorAdminPermissions` parameter in the provider itself. Since this parameter can only be set during cluster creation, ignoring changes to it should be fine.
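The transform-based workaround above boils down to a simple filter: match the EKS cluster type token and merge `ignore_changes` into the resource options, leaving every other resource untouched. A plain-Python stand-in of that matching logic (illustrative only, no Pulumi engine required; the dict-based `transform_opts` helper is an assumption, not Pulumi API):

```python
EKS_CLUSTER_TYPE = "aws:eks/cluster:Cluster"
IGNORE = ["accessConfig.bootstrapClusterCreatorAdminPermissions"]

def transform_opts(resource_type: str, opts: dict) -> dict:
    """Merge the ignore_changes list into opts, but only for EKS cluster resources."""
    if resource_type != EKS_CLUSTER_TYPE:
        return opts  # other resource types pass through unchanged
    merged = dict(opts)
    merged["ignore_changes"] = sorted(set(merged.get("ignore_changes", [])) | set(IGNORE))
    return merged
```

Because the transform is keyed on the type token rather than a resource name, it also covers the cluster that pulumi-eks creates internally, which is why it works even though `access_config` isn't exposed on the component.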
Thanks so much @flostadler , much appreciated!
What happened?
In recent pulumi-eks versions, there's a new parameter called `authentication_mode`, which is documented here: https://www.pulumi.com/registry/packages/eks/api-docs/cluster/#authentication_mode_python
We're using pulumi-eks as it's a nice abstraction for EKS components. However, setting `authentication_mode` to `CONFIG_MAP` (which is what the cluster was already using) triggers a cluster replacement.
As it turns out, once you set `authentication_mode` to anything other than `None`, this modifies the `bootstrapClusterCreatorAdminPermissions` attribute of the underlying `aws.eks.Cluster` resource.
I referred to this issue: https://github.com/pulumi/pulumi-aws/issues/3997 which suggested a fix.
However, when using the pulumi-eks package there's no way to control the `access_config` attribute, as it belongs to an upstream resource.
I'd really like to use `authentication_mode`, but I can't have my clusters break over it.
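A plain-Python model of why this shows up as a replacement (illustrative only; the field names follow the upstream schema, but the functions and values here are assumptions, not provider code). Before the parameter existed, the provider omitted `accessConfig` entirely; once `authentication_mode` is set, the provider populates `accessConfig` including the create-only `bootstrapClusterCreatorAdminPermissions` field, and a diff on a create-only field forces replacement:

```python
def planned_access_config(authentication_mode):
    """Model: accessConfig is only sent once authentication_mode is set."""
    if authentication_mode is None:
        return None  # old behavior: accessConfig omitted from the plan entirely
    return {
        "authenticationMode": authentication_mode,
        # create-only field that gets populated alongside the mode
        "bootstrapClusterCreatorAdminPermissions": True,
    }

def requires_replacement(old, new):
    """A diff on the create-only bootstrap field forces cluster replacement."""
    old_boot = (old or {}).get("bootstrapClusterCreatorAdminPermissions")
    new_boot = (new or {}).get("bootstrapClusterCreatorAdminPermissions")
    return old_boot != new_boot

# Cluster created before the parameter existed: no accessConfig in state.
old_state = planned_access_config(None)
# Same cluster after setting authentication_mode="CONFIG_MAP":
new_plan = planned_access_config("CONFIG_MAP")
```

Under this model, the first `pulumi up` after setting the mode diffs `None` against a populated bootstrap field and plans a replacement, even though the authentication mode itself hasn't changed on the live cluster.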
Example
The field that triggers the replacement is:
Output of `pulumi about`:

```
pulumi==3.128.0
pulumi_aws==6.48.0
pulumi_awsx==2.13.0
pulumi_docker==4.5.5
pulumi_eks==2.7.7
pulumi_kubernetes==4.15.0
```
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).