metral opened this issue 5 years ago
This is a little surprising. Pulumi does create-before-delete by default, so a new launch configuration should have been created and the autoscaling group should have been updated to use it prior to attempting to delete the previous launch configuration.
Could you share a full output of an update that attempts to make this change?
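For context, here is a minimal sketch of the create-before-delete ordering described above, using a standalone `aws.ec2.LaunchConfiguration` with placeholder values (not taken from this issue). Leaving the name to Pulumi's auto-naming is what lets the replacement be created while the old resource still exists:

```typescript
import * as aws from "@pulumi/aws";

// Sketch only: placeholder AMI and instance type.
// With auto-naming (no explicit `name`), Pulumi can create the replacement
// launch configuration first, repoint the ASG, and only then delete the old one.
const nodeLc = new aws.ec2.LaunchConfiguration("node-lc", {
    imageId: "ami-0123456789abcdef0",
    instanceType: "t2.medium",
}, {
    // The default is create-before-delete; uncommenting this would force the
    // old launch configuration to be deleted before its replacement exists.
    // deleteBeforeReplace: true,
});
```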
Per https://github.com/terraform-providers/terraform-provider-aws/issues/8485#issuecomment-507299533, this was fixed in https://github.com/terraform-providers/terraform-provider-aws/pull/7819 and available in 2.1.0 of tf-aws. We're currently on 2.12.0 but still seem to be hitting this bug.
/cc @jen20 @stack72
@metral Does this reliably repro?
I have not been able to repro this. It could have been due to a rabbit hole I was in the middle of. Closing this out for now, and I'll re-open if necessary.
I'm still seeing this.
Terraform seems to have some problems as well: https://github.com/terraform-providers/terraform-provider-aws/issues/8485
EDIT: I'm facing this issue because I changed some VPC configs, which forced Pulumi to recreate the EKS cluster.
I think I found a way to reproduce this:

1. Run `pulumi up` without the `autoscaling:TerminateInstanceInAutoScalingGroup` permission. This results in a CloudFormation stack template change (`diff: ~templateBody`), and the update fails because it lacks that permission.
2. Run `pulumi up` again. It fails with `Cannot delete launch configuration ..., because it is attached to AutoScalingGroup ...`, as there is an `aws:ec2:LaunchConfiguration ... completing deletion` left over from the previous update.
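Purely for illustration (not from the original report), this is roughly what a policy for the credentials running `pulumi up` could look like when it lacks that action; everything here is a hypothetical sketch:

```typescript
import * as aws from "@pulumi/aws";

// Hypothetical deployer policy sketch: it deliberately omits
// autoscaling:TerminateInstanceInAutoScalingGroup, so an update that needs
// that action fails partway through, leaving the old launch configuration
// stuck "completing deletion".
const deployerPolicy = new aws.iam.Policy("deployer-policy", {
    policy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Effect: "Allow",
            Action: [
                "autoscaling:Describe*",
                "autoscaling:UpdateAutoScalingGroup",
                // "autoscaling:TerminateInstanceInAutoScalingGroup" is intentionally missing
            ],
            Resource: "*",
        }],
    }),
});
```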
For what it's worth, I encounter this any time internal changes in `new eks.Cluster(...)` cause Pulumi to attempt to change the launch configuration.
It seems like it's still an issue for me as well. Just like @zebulonj, I'm using `eks.Cluster` without much extra configuration. Eventually something among the objects it created produces a diff (an AMI ID in my case), and then it starts failing with:

`... ResourceInUse: Cannot delete launch configuration cubeapp-eu-central-1-2-primary-ng-nodeLaunchConfiguration-8e54547 because it is attached to AutoScalingGroup cubeapp-eu-central-1-2-primary-ng-55bb153a-NodeGroup-1I26U3PIGT5T0 ...`
What would be the best manual workaround for this? Thanks!
Seeing this every time I try to make a change to my EKS cluster.
Hi, any update on this? I'm having the same issue.
Reopened the issue and added to triage queue for next iteration.
I'm considering rewriting the `pulumi_eks` stuff to plain `pulumi_aws` to work around this. Then I'd have finer control over when the launch configuration needs to be recreated, which is just about never, as we use SpotInst for all that.
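In case it helps anyone weighing the same option, here is a rough sketch of that approach (in TypeScript rather than the Python packages named above, with placeholder AMI and subnet IDs): the launch configuration and ASG are managed directly with the plain AWS provider, so they only change when their inputs change:

```typescript
import * as aws from "@pulumi/aws";

// Launch configuration managed directly. Using namePrefix (or auto-naming)
// lets a replacement be created before the old one is deleted.
const nodeLc = new aws.ec2.LaunchConfiguration("node-lc", {
    namePrefix: "node-lc-",
    imageId: "ami-0123456789abcdef0", // pin the AMI so it only changes when you change it
    instanceType: "t3.large",
});

// ASG pointed at the launch configuration by name; the reference is updated
// to the replacement before the old launch configuration can be deleted.
const nodeAsg = new aws.autoscaling.Group("node-asg", {
    launchConfiguration: nodeLc.name,
    minSize: 1,
    maxSize: 3,
    desiredCapacity: 2,
    vpcZoneIdentifiers: ["subnet-11111111", "subnet-22222222"], // placeholders
});
```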
We have had this issue as well. Any updates please?
Also having this issue :(
It has been a long time... any news? Is this on the backlog?
Are there any recommended work-arounds?
When this happens, I usually go to the parent autoscaling group in the AWS console and change the link to the launch configuration there, then re-run the Pulumi job with a refresh.
Not great, but it's the best I have found so far.
Same issue here; I want to adjust the `nodeSubnetIds`.
@roothorp please can we try and recreate this issue so that we can isolate what we will need to fix here :)
I have confirmed that the workaround @tma-unwire provided works, thanks also to the Pulumi support team. When you encounter the error, here are the steps:

1. Log in to the AWS console; you should see that the new launch configuration has already been created.
2. Edit the auto scaling group and associate it with the new launch configuration.
3. Back in Pulumi, run `pulumi refresh`.
4. Run `pulumi up` again; the error should be gone.
I had this same issue and wanted to leave an alternative in case that doesn't work: if you don't find the new launch config already created, you can create a temporary one and attach it to the ASG. This will let Pulumi delete the old launch config on the next `pulumi up`.
Afterwards, don't forget to delete the temporary launch config.
We have the same issue: Pulumi creates the new LaunchConfiguration, then tries to delete the old one before replacing it with the new one, so it fails.
Same issue here. Any plan to fix it?
This is still failing.
This also hit me, it's still an issue.
When a NodeGroup is stood up with a given instance type, e.g. `t2.medium`, and then on a future update it is changed to, say, `t3.large`, it results in the following error:

See: pulumi/eks. As we do not expose `namePrefix` as an opt-in on the `aws.ec2.LaunchConfiguration`, setting the `name` of the LaunchConfig resulted in the same error.

Manual clean up of the LaunchConfig in the state snapshot and in AWS seems to be the only mitigation I've found.
cc @jen20 @lukehoban
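For anyone trying to reproduce this, the repro above boils down to a one-field change along these lines (names and sizes are placeholders); changing the instance type replaces the node LaunchConfiguration, which is where the deletion fails:

```typescript
import * as eks from "@pulumi/eks";

// Deploy once with "t2.medium", then change the value below to "t3.large"
// and run `pulumi up` again: the node LaunchConfiguration is replaced, and
// deleting the old one fails while it is still attached to the ASG.
const cluster = new eks.Cluster("my-cluster", {
    instanceType: "t2.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 3,
});

export const kubeconfig = cluster.kubeconfig;
```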