vdmkenny opened this issue 4 years ago (status: Open)
Hey @vdmkenny 👋 Thank you for taking the time to file this! Given that there's been a number of releases of the AWS Provider since you initially filed it, can you confirm if you're still experiencing this behavior?
Hello @justinretzolk, I'd love to try to reproduce this, but it's been a year and a half since I filed this bug. I'm no longer working at the same client and no longer have access to this or a similar environment to try it out, sorry!
I can say that the bug was present until the end of my time there (April 2021), though it was very sporadic, and we were always on the latest versions of Terraform and the AWS provider.
Another tidbit of information: we were using S3 to store the Terraform state.
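For context, an S3 state backend of the kind mentioned above is typically configured like this (a minimal sketch; the bucket, key, and region are placeholders, not values from the original setup):

```hcl
# Hypothetical S3 backend block — bucket/key/region are illustrative only.
terraform {
  backend "s3" {
    bucket = "example-tf-state"
    key    = "envs/prod/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

With this backend, every `terraform apply` reads and writes state remotely, which is worth noting when an apply is interrupted mid-run (as can happen in CI pipelines).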
Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.
If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!
Community Note
Terraform Version
Terraform v0.12.24
Affected Resource(s)
Terraform Configuration Files
Debug Output
Expected Behavior
The new launch configuration and autoscaling group would be created, and the old ones would be destroyed.
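The resource pair described here typically looks like the following (a hedged sketch only; resource names, the AMI variable, and sizing are illustrative, since the original configuration was not included in the report). The usual pattern ties the ASG name to the launch configuration name and uses `create_before_destroy` so the new pair comes up before the old one is removed:

```hcl
# Illustrative launch configuration + autoscaling group pair.
# All names and values are placeholders, not from the original issue.
resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  # Interpolating the LC name forces the ASG to be replaced
  # whenever the launch configuration changes.
  name                 = "app-${aws_launch_configuration.app.name}"
  launch_configuration = aws_launch_configuration.app.name
  min_size             = 1
  max_size             = 3
  vpc_zone_identifier  = var.subnet_ids

  lifecycle {
    create_before_destroy = true
  }
}
```

If the apply is interrupted between the create and destroy steps of this pattern, both the old and new ASG can be left running, which matches the behavior described below.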
Actual Behavior
Sometimes the launch configuration is created, but the autoscaling group is not; the old autoscaling group is still present and running.
Other times the new autoscaling group is created, but the old one is not destroyed, leaving a duplicate, older-versioned copy of the application running.
Steps to Reproduce
I have not found a pattern for when this happens; it occurs only sometimes with the above config.
`terraform apply`
Important Factoids
The `terraform apply` is run in a GitLab pipeline.