jw-maynard opened this issue 2 years ago
I get the same issue with the datadog-agent helm chart.
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.datadog_agent.helm_release.this[0] to
│ include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/helm" produced an invalid new value for
│ .manifest: was
...
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
I did some digging into my issue today and was able to diff the plan manifests against the apply manifests to find out what's inconsistent. In my case, the aws-load-balancer-controller chart generates a cert bundle for its webhook when the chart is rendered. It looks like Helm is run once during plan and again during apply, and since the template is rendered twice, the auto-generated certs are created twice with different values, which causes the inconsistency.
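For illustration, here is a minimal sketch of the pattern such charts use (hypothetical names; the aws-load-balancer-controller chart does something equivalent in its webhook templates). Because `genCA` and `genSignedCert` produce fresh key material on every render, the plan-time render and the apply-time render can never match:

```yaml
{{- /* A new CA and leaf cert are generated on every template render */ -}}
{{- $ca := genCA "example-webhook-ca" 365 }}
{{- $cert := genSignedCert "example-webhook.kube-system.svc" nil (list "example-webhook.kube-system.svc") 365 $ca }}
apiVersion: v1
kind: Secret
metadata:
  name: example-webhook-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ $cert.Cert | b64enc }}
  tls.key: {{ $cert.Key | b64enc }}
```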
@BBBmau @jrhouston I don't know if this is feasible, but maybe the plan step could store a complete copy of the rendered Helm values in the plan, and then during apply the provider would feed that full set of values into Helm. I'm also not sure whether this strategy would have undesirable side effects in Helm.
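For comparison, this is the mechanism Terraform core itself uses to guarantee that apply executes exactly what plan computed; the suggestion above would do the analogous thing with the rendered Helm output:

```sh
terraform plan -out=plan.tfplan
terraform apply plan.tfplan
```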
This does seem to be connected to the experimental manifest feature when using Helm charts that include some random generation in their rendered output. In addition to the same aws-load-balancer-controller chart generating differing cert bundles, I also ran into this with the grafana/grafana chart when using basic auth: it generates a random admin password on each render of the chart, which differs when the command is run again. It seems like the manifest feature's rendered output would need to be passed on to Helm wholesale, similar to a `terraform apply "plan.output"` command.
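One workaround that avoids the chart-side randomness is to generate the random value in Terraform and pass it in, so both renders see the same input. A minimal sketch, assuming the grafana chart's `adminPassword` value and the hashicorp/random provider (version pins omitted):

```hcl
resource "random_password" "grafana_admin" {
  length  = 24
  special = false
}

resource "helm_release" "grafana" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"

  # With the password supplied explicitly, the chart no longer generates
  # one, so the plan-time and apply-time renders are identical.
  set_sensitive {
    name  = "adminPassword"
    value = random_password.grafana_admin.result
  }
}
```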
I ran into this with another chart using the randNumeric function, but I replaced it with the now function and got the same error.
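Both functions have the same property: every render produces a different result. A minimal illustration (hypothetical template):

```yaml
# Renders differently on every invocation of `helm template`:
password:  {{ randNumeric 16 | quote }}
timestamp: {{ now | date "2006-01-02T15:04:05" | quote }}
```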
At this point I'm pretty sure it's impossible to combine the experiments { manifest = true } feature with any Helm template function that produces different outputs on two subsequent renders.
This is, in my opinion, a major drawback of the provider at the moment. It should at least be mentioned in the documentation of this experimental feature.
On the other hand, I don't know how anyone would use the helm provider without the experimental feature enabled, since you can never tell from the plan what will actually change in your cluster.
I think the information about this issue is scattered across a lot of issues and mixed up with other (already solved) issues that have similar symptoms.
I hope this issue gets some attention from the maintainers, as it is a showstopper for many use cases.
Terraform, Provider, Kubernetes and Helm Versions
Affected Resource(s)
Terraform Configuration Files
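The original configuration was not included in the report; a minimal sketch that reproduces this class of error, assuming a local kubeconfig and the public eks-charts repository, would be:

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumption: local kubeconfig auth
  }

  # The experimental feature implicated in this issue.
  experiments {
    manifest = true
  }
}

resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = "my-cluster" # hypothetical cluster name
  }
}
```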
Debug Output
NOTE: In addition to Terraform debugging, please set HELM_DEBUG=1 to enable debugging info from Helm.
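For example, to capture both Terraform and Helm debug output in one run (TF_LOG is Terraform's standard debug setting; HELM_DEBUG is the variable mentioned above):

```sh
TF_LOG=DEBUG HELM_DEBUG=1 terraform apply 2> debug.log
```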
Panic Output
Steps to Reproduce
terraform plan
terraform apply
Expected Behavior
Currently there are no changes in our config that need to be applied, so the plan should show no changes.
Actual Behavior
Many changes are shown, and when they are applied, an error occurs:
The string value (with some data redacted) is below:
Important Factoids
References
#523