Roberdvs opened 4 months ago
EDIT: Never mind, this label is part of my specific chart's template; it is just using .Release.Revision, and Helm computes the templates with the revision incremented for diffs.
I can confirm this is the case on 2.14.0 with Terraform 1.9.2, though I'm seeing a diff on the helm-revision label. It definitely seems to be affecting OCI charts; specifically, my resource definition was:
resource "helm_release" "volsync" {
create_namespace = true
atomic = true
cleanup_on_fail = true
chart = "oci://tccr.io/truecharts/volsync"
name = "volsync"
namespace = "volsync"
version = "2.2.0"
values = [
yamlencode({
metrics = {
main = {
enabled = false
}
}
})
]
depends_on = [helm_release.snapshot_controller]
}
Looking through the Terraform state, it seems that for OCI charts the stored manifest includes the helm-revision label for each resource, while non-OCI charts do not seem to store it, which I assume is what leads to the state desync.
Actually, I just tested this with the Helm CLI, and the issue seems to stem from there: in a dry-run upgrade of the OCI chart, all resources have the helm-revision label (incremented by one from the live value), but the non-OCI chart does not show the helm-revision label at all. So this may actually be a core Helm issue.
I've noticed this too in other TrueCharts Helm charts. They all seem to use this pattern of embedding .Release.Revision as a pod label, making none of them compatible with manifest. I've also come across this type of pattern in other charts, so it seems to be a not terribly uncommon practice, or perhaps one growing in popularity.
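For illustration, the pattern looks roughly like this (a hypothetical template excerpt; the label name and resource are made up, not copied from any specific chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
        # .Release.Revision increments on every upgrade, so a dry-run or
        # diff always renders this label one higher than the live object.
        helm-revision: {{ .Release.Revision | quote }}
    spec:
      containers:
        - name: example
          image: nginx

Because the rendered manifest can never match what is currently deployed, any comparison based on the rendered templates produces a perpetual diff.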
The kubernetes_manifest resource can have similar problems when it comes to things like labels being injected onto resources post-creation, and it has a workaround in the form of computed fields. Perhaps the same tack could be taken here, where a yq-style path can be used to denote fields that will always differ so they can be ignored when computing diffs, for example:
resource "helm_release" "this" {
...
computed_fields = [
"kubecost/deployment.apps/apps/v1/kubecost-cost-analyzer.spec.template.metadata.labels.helm-rollout-restarter"
]
}
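For reference, the existing kubernetes_manifest workaround referenced above looks roughly like this (a minimal sketch; the ConfigMap and the chosen field paths are illustrative):

resource "kubernetes_manifest" "example" {
  manifest = {
    apiVersion = "v1"
    kind       = "ConfigMap"
    metadata = {
      name      = "example"
      namespace = "default"
    }
    data = {
      foo = "bar"
    }
  }

  # Paths listed here are treated as computed, so values injected into these
  # fields after creation (e.g. by a controller or webhook) do not show up
  # as a diff on subsequent plans.
  computed_fields = [
    "metadata.labels",
    "metadata.annotations",
  ]
}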
Steps to Reproduce
I have seen this with the Kubecost OCI chart, but it might be reproducible with others.
Run terraform apply.
If the resource already exists, it shows this perpetual diff on every plan and then crashes on apply with the error above.
References
This got released in 2.13 and is probably related: