Closed · philipp-paland closed this 4 years ago
@therealppa very much appreciate the sample main.tf files and the issue report. We'll need to find a way to basically "unset" the value of backing_provider_name: Terraform saves it in its state, but once you move beyond an M2/M5 it isn't supported, and so the apply fails. @PacoDw, @marinsalinas let me know if you have any questions on this bug.
Thank you @themantissa for the explanation, let me take a look at this.
IMO, ForceNew is not a good solution; it's a workaround. Is there a plan to fix it so that it doesn't do that?
For that, we have to ask the API's developers whether there is a plan to fix it. It is not allowed to PATCH the provider_name attribute, which is why we decided to use ForceNew.
Closing this with the workaround in place, because it is really an API issue.
cc @themantissa
You can patch it to work. The provider_settings attribute should be set to "" and it will get omitted if empty. That's assuming I read the SDK right.
Upgrade from M2 to M10 fails
It does not seem to be possible to upgrade an M2 cluster to an M10 cluster even though it's possible via the Atlas UI (and I assume via the API).
I have attached two tf files that show the cluster definitions before and after. Both can be used to create new clusters without any error.
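For readers without the attachment, the relevant parts of the two definitions look roughly like this (a sketch, not the attached files; attribute names follow the `mongodbatlas_cluster` resource, concrete values are assumptions):

```hcl
# main.tf.1 – shared-tier M2 cluster (before)
resource "mongodbatlas_cluster" "example" {
  project_id                  = var.project_id
  name                        = "example"
  provider_name               = "TENANT"
  backing_provider_name       = "AWS"
  provider_region_name        = "US_EAST_1"
  provider_instance_size_name = "M2"
}

# main.tf.2 – dedicated M10 cluster (after);
# backing_provider_name is intentionally absent
resource "mongodbatlas_cluster" "example" {
  project_id                  = var.project_id
  name                        = "example"
  provider_name               = "AWS"
  provider_region_name        = "US_EAST_1"
  provider_instance_size_name = "M10"
}
```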
To reproduce, use the following commands:
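(The original command listing did not survive; presumably it was along these lines, using the file names from the attached archive:)

```shell
cp main.tf.1 main.tf
terraform apply   # creates the M2 cluster – succeeds
cp main.tf.2 main.tf
terraform apply   # attempts the in-place upgrade to M10 – fails
```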
This fails with the error message `400 (request "Bad Request") Invalid attribute backingProviderName specified.`, which leads me to speculate that even though backing_provider_name is not set in main.tf.2, it somehow gets transferred to the Atlas API.
If I do instead
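(The commands were lost from the original post; presumably a destroy-and-recreate cycle such as:)

```shell
terraform destroy
cp main.tf.2 main.tf
terraform apply
```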
it works but (of course) the data in the cluster is lost.
main.tf.zip