Closed kastlbo closed 11 months ago
Is there any update on this issue? It is still happening with newer engine version upgrades.
There seems to be a glitch when upgrading your clusters with Terraform. I found that if you have an instance declaration in your Terraform, you must leave that instance's version at the previous version and only update the main cluster resource. The upgrade needs the instances available to upgrade the cluster, so if you leave the instances at the old version number it will work; once the cluster is upgraded, it will automatically update the instances. Also, when upgrading to newer versions of Neptune, the parameter groups change as well. It wasn't fun trying to upgrade to 1.2 from 1.0.4.1: you have to step up to 1.1.0 first, and when you then go to 1.2 the parameter group changes. I also found an issue with instance sizes; some sizes were removed in newer versions. Enjoy.
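The workaround above can be sketched in Terraform roughly as follows (resource names, instance class, and version numbers are illustrative, not taken from this issue): bump `engine_version` only on the cluster and leave the instance pinned to the previous version until the cluster upgrade completes.

```hcl
# Hypothetical sketch of the workaround: upgrade the cluster first,
# leaving the instance declaration at the old engine version.
resource "aws_neptune_cluster" "example" {
  cluster_identifier = "example-cluster"
  engine             = "neptune"
  engine_version     = "1.1.0.0" # stepped up from 1.0.4.1
  apply_immediately  = true
}

resource "aws_neptune_cluster_instance" "example" {
  cluster_identifier = aws_neptune_cluster.example.id
  instance_class     = "db.r5.large"
  engine_version     = "1.0.4.1" # leave at the previous version;
                                 # AWS updates it after the cluster upgrade
}
```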
I actually see what the issue here is. Neptune engine versions are applied at the cluster level, and all instances within the cluster get upgraded at the same time. The logic in the Terraform AWS Provider is attempting to replace instances if an `engine_version` is specified per `aws_neptune_cluster_instance` resource. Ultimately, the `engine_version` parameter should be ignored when applied to an instance, as you cannot have a Neptune cluster with instances on different engine versions. If you comment out the `engine_version` parameter on the `aws_neptune_cluster_instance` resources, the engine upgrade gets applied without replacing the instances, and the instances are restarted with the `engine_version` specified at the `aws_neptune_cluster` resource, as expected.
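The configuration described above might look roughly like this (names and values are illustrative): `engine_version` is set only on the cluster and omitted from the instance, so the instance follows the cluster's version instead of being replaced.

```hcl
# Sketch of the suggested configuration: engine_version on the cluster only.
resource "aws_neptune_cluster" "example" {
  cluster_identifier = "example-cluster"
  engine             = "neptune"
  engine_version     = "1.2.0.0"
}

resource "aws_neptune_cluster_instance" "example" {
  cluster_identifier = aws_neptune_cluster.example.id
  instance_class     = "db.r5.large"
  # engine_version intentionally omitted; instances cannot diverge
  # from the cluster's engine version.
}
```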
This functionality has been released in v5.22.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Core Version
1.3.8
AWS Provider Version
4.55.0
Affected Resource(s)
aws_neptune_cluster
Expected Behavior
I am trying to upgrade the Neptune cluster engine version from 1.1.1.0 to 1.2.0.0.
I expect Terraform to remove the instances, upgrade the cluster to the newer version, and then recreate the instances.
Actual Behavior
It removes the instances but does not upgrade the cluster. I get the following error:
InvalidDBClusterStateFault: Cannot modify engine version without a healthy primary instance in DB cluster:
I previously had the same problem when upgrading from 1.0.4.1 to 1.1.1.0. That was fixed in provider 4.55.0 (the previous issue was r/neptune_cluster - fix major version upgrade #28051). The same thing appears to happen when upgrading from 1.1.1.0 to 1.2.0.0.
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
n/a
Steps to Reproduce
n/a
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
No response
Would you like to implement a fix?
None