mumoshu / terraform-provider-helmfile

Deploy Helmfile releases from Terraform

Provider produced inconsistent final plan when using oci repo #56

Open Savasw opened 3 years ago

Savasw commented 3 years ago

terraform-provider-helmfile version: v0.13.3 helmfile version: v0.138.4

Overview: Charts from an OCI repository (supported since helmfile v0.138.4) are exported to a different random directory on each run. Because the export happens during both plan and apply, the diff_output differs between the two phases and produces the following error:

Error: Provider produced inconsistent final plan

When expanding the plan for helmfile_release_set.kubernetes to include new
values learned so far during apply, provider
"registry.terraform.io/mumoshu/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal(...) but now cty.StringVal(...).
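The only difference between the two diff_output values is the random `/tmp/<number>/` export path. As a minimal sketch (not part of the provider), the mismatch could be avoided by normalizing those paths before comparison; the function name and placeholder below are illustrative:

```python
import re

def normalize_diff_output(diff_output: str) -> str:
    """Replace the random /tmp/<number>/ directories that helmfile
    uses when exporting OCI charts with a stable placeholder, so two
    otherwise-identical diffs compare equal."""
    return re.sub(r"/tmp/\d+/", "/tmp/<tempdir>/", diff_output)

# Paths shaped like the ones in the report below, with shortened tails:
plan_line = "Exported chart to /tmp/150352843/cloud/day-zero-resource/"
apply_line = "Exported chart to /tmp/903921706/cloud/day-zero-resource/"

assert normalize_diff_output(plan_line) == normalize_diff_output(apply_line)
```

This only illustrates why the plans are reported as inconsistent; a real fix would need to live in the provider or in helmfile's export logic.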
varunpalekar commented 3 years ago

I also get the same error when using an OCI registry. I see a temp folder being generated dynamically, which may be the issue. I already tried setting HELMFILE_TEMPDIR to a specific folder, but it didn't work.
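The attempted workaround above can be sketched as follows; the directory name is illustrative, and per the report it did not stop helmfile from creating random subdirectories for OCI exports:

```shell
# Pin helmfile's temp directory to a stable path before running Terraform.
mkdir -p "$HOME/.cache/helmfile-tmp"
export HELMFILE_TEMPDIR="$HOME/.cache/helmfile-tmp"
# terraform plan && terraform apply   # reportedly still used random /tmp/<n>/ dirs
```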

I diffed the two helmfile diff files that the provider created during terraform plan and terraform apply:

diff --unified .terraform/helmfile/diff-f30431993ead6f6ae49ccb850f6844514f07ec91ae2e3f00e5f52dd2ce44abd0 .terraform/helmfile/diff-0979647585c29948082d8477de2acf5194e7433cc1801e530b504c97dece5466
--- .terraform/helmfile/diff-f30431993ead6f6ae49ccb850f6844514f07ec91ae2e3f00e5f52dd2ce44abd0   2021-02-17 17:01:08.291001521 +0530
+++ .terraform/helmfile/diff-0979647585c29948082d8477de2acf5194e7433cc1801e530b504c97dece5466   2021-02-17 16:53:58.677679613 +0530
@@ -54,7 +54,7 @@
 +     name: secrets

 Affected releases are:
-  authenticator-database (/tmp/265590556/cloud/authenticator/database/0.1.2/database) UPDATED
+  authenticator-database (/tmp/437811767/cloud/authenticator/database/0.1.2/database) UPDATED

 Pulling <registry>/day-zero-resource:0.4.0
 0.4.0: Pulling from <registry>/day-zero-resource
@@ -71,10 +71,10 @@
 size:    1.9 KiB
 name:    day-zero-resource
 version: 0.4.0
-Exported chart to /tmp/150352843/cloud/authenticator/day-zero-resource/0.4.0/day-zero-resource/
+Exported chart to /tmp/903921706/cloud/authenticator/day-zero-resource/0.4.0/day-zero-resource/

 Building dependency release=authenticator, chart=authenticator
-Comparing release=authenticator, chart=/tmp/150352843/cloud/authenticator/day-zero-resource/0.4.0/day-zero-resource
+Comparing release=authenticator-secret, chart=/tmp/903921706/cloud/authenticator/day-zero-resource/0.4.0/day-zero-resource
 ********************

        Release was not present in Helm.  Diff will show entire contents as new.
@@ -230,6 +230,6 @@

 Affected releases are:
   authenticator (authenticator) UPDATED
-  authenticator-secret (/tmp/150352843/cloud/authenticator-secret/day-zero-resource/0.4.0/day-zero-resource) UPDATED
+  authenticator-secret (/tmp/903921706/cloud/authenticator-secret/day-zero-resource/0.4.0/day-zero-resource) UPDATED

 Identified at least one change
Savasw commented 3 years ago

@mumoshu can you please help with this? We recently moved to AWS ECR for Helm charts, but the provider is breaking for charts in OCI repositories due to the diff_output mismatch. Would you need more information? Thanks!

yashbhutwala commented 3 years ago

I also ran into this with Terraform 0.14.11:

Error: Provider produced inconsistent final plan

When expanding the plan for helmfile_release_set.nginx to include new values
learned so far during apply, provider "registry.terraform.io/mumoshu/helmfile"
produced an invalid new value for .apply_output: was known, but now unknown.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Error: Provider produced inconsistent final plan

When expanding the plan for helmfile_release_set.nginx to include new values
learned so far during apply, provider "registry.terraform.io/mumoshu/helmfile"
produced an invalid new value for .diff_output: was cty.StringVal(""), but now
cty.StringVal(":\n    name: helmfile-ingress-nginx-nginx-ingress-controller\n-
namespace: default\n+   namespace: \"default\"\n    labels:\n
app.kubernetes.io/name: nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: controller\n...\n      metadata:\n
labels:\n          app.kubernetes.io/name: nginx-ingress-controller\n-
helm.sh/chart: nginx-ingress-controller-7.0.8\n+         helm.sh/chart:
nginx-ingress-controller-7.6.10\n          app.kubernetes.io/instance:
helmfile-ingress-nginx\n          app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: controller\n...\n
app.kubernetes.io/instance: helmfile-ingress-nginx\n
app.kubernetes.io/component: controller\n                  namespaces:\n-
- default\n+                   - \"default\"\n                  topologyKey:
kubernetes.io/hostname\n                weight: 1\n
nodeAffinity:\n...\n        terminationGracePeriodSeconds: 60\n
containers:\n          - name: controller\n-           image:
docker.io/bitnami/nginx-ingress-controller:0.43.0-debian-10-r0\n+
image: docker.io/bitnami/nginx-ingress-controller:0.47.0-debian-10-r0\n
imagePullPolicy: \"IfNotPresent\"\n+           # yamllint disable
rule:indentation\n            securityContext:\n
allowPrivilegeEscalation: true\n              capabilities:\n...\n
drop:\n                - ALL\n              runAsUser: 1001\n+           #
yamllint enable rule:indentation\n            args:\n              -
/nginx-ingress-controller\n              -
--default-backend-service=default/helmfile-ingress-nginx-nginx-ingress-controller-default-backend\n...\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller, Role
(rbac.authorization.k8s.io) has changed:\n...\n  kind: Role\n  metadata:\n
name: helmfile-ingress-nginx-nginx-ingress-controller\n-   namespace:
default\n+   namespace: \"default\"\n    labels:\n
app.kubernetes.io/name: nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
rules:\n...\ndefault, helmfile-ingress-nginx-nginx-ingress-controller,
RoleBinding (rbac.authorization.k8s.io) has changed:\n...\n  kind:
RoleBinding\n  metadata:\n    name:
helmfile-ingress-nginx-nginx-ingress-controller\n-   namespace: default\n+
namespace: \"default\"\n    labels:\n      app.kubernetes.io/name:
nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
roleRef:\n...\n  subjects:\n    - kind: ServiceAccount\n      name:
helmfile-ingress-nginx-nginx-ingress-controller\n-     namespace: default\n+
namespace: \"default\"\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller, Service (v1) has
changed:\n...\n  kind: Service\n  metadata:\n    name:
helmfile-ingress-nginx-nginx-ingress-controller\n-   namespace: default\n+
namespace: \"default\"\n    labels:\n      app.kubernetes.io/name:
nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: controller\n...\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller, ServiceAccount (v1) has
changed:\n...\n  kind: ServiceAccount\n  metadata:\n    name:
helmfile-ingress-nginx-nginx-ingress-controller\n-   namespace: default\n+
namespace: \"default\"\n    labels:\n      app.kubernetes.io/name:
nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller-default-backend, Deployment
(apps) has changed:\n...\n  kind: Deployment\n  metadata:\n    name:
helmfile-ingress-nginx-nginx-ingress-controller-default-backend\n-
namespace: default\n+   namespace: \"default\"\n    labels:\n
app.kubernetes.io/name: nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: default-backend\n...\n      metadata:\n
labels:\n          app.kubernetes.io/name: nginx-ingress-controller\n-
helm.sh/chart: nginx-ingress-controller-7.0.8\n+         helm.sh/chart:
nginx-ingress-controller-7.6.10\n          app.kubernetes.io/instance:
helmfile-ingress-nginx\n          app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: default-backend\n...\n
app.kubernetes.io/instance: helmfile-ingress-nginx\n
app.kubernetes.io/component: default-backend\n                  namespaces:\n-
- default\n+                   - \"default\"\n                  topologyKey:
kubernetes.io/hostname\n                weight: 1\n
nodeAffinity:\n...\n        terminationGracePeriodSeconds: 60\n
containers:\n          - name: default-backend\n-           image:
docker.io/bitnami/nginx:1.19.6-debian-10-r14\n+           image:
docker.io/bitnami/nginx:1.19.10-debian-10-r49\n            imagePullPolicy:
\"IfNotPresent\"\n            securityContext:\n              runAsUser:
1001\n...\n            livenessProbe:\n              failureThreshold: 3\n
httpGet:\n-               path: /\n+               path: /healthz\n
port: http\n                scheme: HTTP\n              initialDelaySeconds:
30\n...\n            readinessProbe:\n              failureThreshold: 6\n
httpGet:\n-               path: /\n+               path: /healthz\n
port: http\n                scheme: HTTP\n              initialDelaySeconds:
0\n...\n            resources:\n              limits: {}\n
requests: {}\n+           volumeMounts:\n+             - name:
nginx-config-volume\n+               mountPath:
/opt/bitnami/nginx/conf/bitnami/\n+               readOnly: true\n+
volumes:\n+         - name: nginx-config-volume\n+           configMap:\n+
name: helmfile-ingress-nginx-nginx-ingress-controller-default-backend\n+
items:\n+               - key: defaultBackend.conf\n+                 path:
defaultBackend.conf\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller-default-backend, Service (v1)
has changed:\n...\n  kind: Service\n  metadata:\n    name:
helmfile-ingress-nginx-nginx-ingress-controller-default-backend\n-
namespace: default\n+   namespace: \"default\"\n    labels:\n
app.kubernetes.io/name: nginx-ingress-controller\n-     helm.sh/chart:
nginx-ingress-controller-7.0.8\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n      app.kubernetes.io/instance:
helmfile-ingress-nginx\n      app.kubernetes.io/managed-by: Helm\n
app.kubernetes.io/component: default-backend\n...\ndefault,
helmfile-ingress-nginx-nginx-ingress-controller-default-backend, ConfigMap
(v1) has been added:\n- \n+ # Source:
nginx-ingress-controller/templates/default-backend-configmap.yaml\n+
apiVersion: v1\n+ kind: ConfigMap\n+ metadata:\n+   name:
helmfile-ingress-nginx-nginx-ingress-controller-default-backend\n+
namespace: \"default\"\n+   labels:\n+     app.kubernetes.io/name:
nginx-ingress-controller\n+     helm.sh/chart:
nginx-ingress-controller-7.6.10\n+     app.kubernetes.io/instance:
helmfile-ingress-nginx\n+     app.kubernetes.io/managed-by: Helm\n+
app.kubernetes.io/component: default-backend\n+ data:\n+
defaultBackend.conf: |-\n+     location /healthz {\n+       return 200;\n+
}\n+     \n+     location / {\n+       return 404;\n+     }\n\nAffected
releases are:\n  helmfile-ingress-nginx (bitnami/nginx-ingress-controller)
UPDATED\n\nIdentified at least one change\n").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.