theherk closed this issue 2 months ago
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
This issue isn't stale aside from its resolution awaiting review.
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
Same as before. The resolution is awaiting review.
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
Still alive and awaiting feedback.
I'd say this is worse than that... If you have `engine_version` set and have `auto_minor_version_upgrade` set to `true`, or even unset (`null`), then when the cluster gets an update the Terraform code becomes inconsistent, because a cluster can't be downgraded. I'd like a way of having an `ignore_changes` block only when `auto_minor_version_upgrade` is not `false`.
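To illustrate, a minimal sketch (resource type, names, and versions are placeholders, not the module's actual code). Terraform requires `ignore_changes` to be a static list, which is why the conditional ignore described above can't be written directly:

```hcl
resource "aws_rds_cluster_instance" "example" {
  cluster_identifier         = "example"
  engine                     = "aurora-postgresql"
  engine_version             = "14.6"
  instance_class             = "db.r6g.large"
  auto_minor_version_upgrade = true # also the effective default when unset

  lifecycle {
    # ignore_changes must be static; it cannot reference variables or use
    # conditionals, so "ignore engine_version only when
    # auto_minor_version_upgrade is not false" is not expressible here.
    ignore_changes = [engine_version]
  }
}
```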
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
This issue was automatically closed because it remained stale for 10 days.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
There is an issue with global database clusters that is documented in the provider but not yet accounted for in the module. It appears only when using global clusters and upgrading to a new engine version, and even then not always; it is inconsistent.
Given an implementation:
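For example, a configuration along these lines (the module inputs and all values here are illustrative placeholders, not the exact original):

```hcl
variable "engine_version" {
  type    = string
  default = "14.6" # bumped, e.g. to "14.7", to upgrade the cluster
}

resource "aws_rds_global_cluster" "this" {
  global_cluster_identifier = "example-global"
  engine                    = "aurora-postgresql"
  engine_version            = var.engine_version
}

module "aurora" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "~> 7.1.0"

  name                      = "example"
  engine                    = aws_rds_global_cluster.this.engine
  engine_version            = var.engine_version
  global_cluster_identifier = aws_rds_global_cluster.this.id
  instance_class            = "db.r6g.large"
  instances                 = { one = {} }
}
```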
If the variable `engine_version` is changed to upgrade the cluster, we usually get the error given further down. Set aside that this isn't the latest version; I have worked with that as well, and will. The issue, I believe, is here: this needs `engine_version` to be ignored in the case of global clusters. However, since dynamic lifecycle blocks are not supported, the change I'm proposing is to have both `aws_rds_cluster.this` and `aws_rds_cluster.this_ignore_engine_version`, then, in the locations that reference this resource, add a ternary to select the correct instance of the resource (roughly sketched below).
What are your thoughts, @antonbabenko? Maybe there is a simpler workaround I'm overlooking.
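Roughly the shape I have in mind, as a sketch only (the variable declarations, count expressions, and local are illustrative, not actual module code):

```hcl
variable "name" { type = string }
variable "engine" { type = string }
variable "engine_version" { type = string }
variable "global_cluster_identifier" {
  type    = string
  default = null
}

resource "aws_rds_cluster" "this" {
  count = var.global_cluster_identifier == null ? 1 : 0

  cluster_identifier = var.name
  engine             = var.engine
  engine_version     = var.engine_version
  # ... every other argument, duplicated verbatim in both resources ...
}

resource "aws_rds_cluster" "this_ignore_engine_version" {
  count = var.global_cluster_identifier == null ? 0 : 1

  cluster_identifier        = var.name
  engine                    = var.engine
  engine_version            = var.engine_version
  global_cluster_identifier = var.global_cluster_identifier
  # ... every other argument, duplicated verbatim in both resources ...

  lifecycle {
    ignore_changes = [engine_version]
  }
}

# Locations that referenced aws_rds_cluster.this select whichever
# instance was actually created.
locals {
  cluster = var.global_cluster_identifier == null ? aws_rds_cluster.this[0] : aws_rds_cluster.this_ignore_engine_version[0]
}
```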
Versions
Module: `~> 7.1.0` and `9.0.0`
Terraform: `1.5.7` and `1.6.6`
Expected behavior
I expect this to happen, given the note in the provider. What I would expect given the proposed change is that there isn't an inconsistent plan and all upgrades go well.
Actual behavior
The error is given as documented in the provider.
This is because, when upgrading a global cluster, AWS upgrades the member clusters itself. When Terraform then attempts to upgrade a member, depending on the order in which this happens, the actual engine version of the member no longer matches the one recorded in the state. Ignoring changes to `engine_version` in the member cluster would avoid the issue.
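As a sketch, that avoidance would look something like the following, in line with the upgrade pattern described in the provider documentation (names and versions are illustrative):

```hcl
resource "aws_rds_global_cluster" "this" {
  global_cluster_identifier = "example-global"
  engine                    = "aurora-postgresql"
  engine_version            = "14.6" # engine upgrades are driven from here
}

resource "aws_rds_cluster" "member" {
  cluster_identifier        = "example-member"
  engine                    = aws_rds_global_cluster.this.engine
  engine_version            = aws_rds_global_cluster.this.engine_version
  global_cluster_identifier = aws_rds_global_cluster.this.id

  lifecycle {
    # AWS upgrades global cluster members itself, so drift in this
    # attribute is expected and should not be planned as a change.
    ignore_changes = [engine_version]
  }
}
```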