argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

bug: ApplicationSet Controller CrashLoopBackoff when using `helm.valuesObject` and list generator #14912

Closed: jessebot closed this issue 11 months ago

jessebot commented 1 year ago

**Describe the bug**

The ApplicationSet controller pod gets itself into a CrashLoopBackOff state when using a generator (a list generator in this case) to populate templated fields in `spec.template.spec.source.helm.valuesObject`. The values I have here work via helm directly in a values file, and they also work when the ApplicationSet uses the `spec.template.spec.source.helm.values` field instead. They only error when using `valuesObject`, which I know is still pre-release and not officially supported, but it's still good to note. In the meantime, the workaround for anyone else searching is to just use the `helm.values` field instead.
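For anyone skimming: the only difference between the working and broken configurations is the shape of the Helm values field. A minimal sketch (the `ingress` key here is just an abbreviated stand-in for the full values further down):

```yaml
helm:
  # works: `values` is one multi-line string, passed through to Helm as-is
  values: |
    ingress:
      enabled: true

  # crashes the ApplicationSet controller (this issue): `valuesObject`
  # holds the same data as a structured YAML map
  valuesObject:
    ingress:
      enabled: true
```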

**To Reproduce**

I tested with a list generator to template the nextcloud helm chart with overridden values.

This is the ApplicationSet yaml file (collapsed by default):

```yaml
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nextcloud-web-app-set
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  generators:
    - list:
        elements:
          - nextcloudHostname: test.coolwebsite.com
  template:
    metadata:
      name: nextcloud-web-app
    spec:
      project: nextcloud
      destination:
        server: https://kubernetes.default.svc
        namespace: nextcloud
      source:
        repoURL: 'https://nextcloud.github.io/helm'
        targetRevision: 3.5.20
        chart: nextcloud
        helm:
          valuesObject:
            ingress:
              enabled: true
              className: nginx
              annotations:
                nginx.ingress.kubernetes.io/proxy-body-size: 10G
                kubernetes.io/tls-acme: "true"
                cert-manager.io/cluster-issuer: letsencrypt-staging
                nginx.ingress.kubernetes.io/enable-cors: "false"
                nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
                nginx.ingress.kubernetes.io/server-snippet: |-
                  proxy_hide_header X-Powered-By;
                  rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
                  rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
                  rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                  rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
                  location = /.well-known/carddav {
                    return 301 $scheme://$host/remote.php/dav/;
                  }
                  location = /.well-known/caldav {
                    return 301 $scheme://$host/remote.php/dav/;
                  }
              tls:
                - secretName: nextcloud-tls
                  hosts:
                    - "{{nextcloudHostname}}"
            nextcloud:
              host: "{{nextcloudHostname}}"
              existingSecret:
                enabled: true
                secretName: nextcloud-admin-credentials
                usernameKey: username
                passwordKey: password
                tokenKey: serverinfo_token
                smtpUsernameKey: smtpUsername
                smtpPasswordKey: smtpPassword
              update: 1
              mail:
                enabled: false
              configs:
                logging.config.php: |-
                  'file', 'logfile' => 'nextcloud.log', 'loglevel' => 1, 'logdateformat' => 'F d, Y H:i:s' );
                video_previews.config.php: |-
                  true, 'enabledPreviewProviders' => array ( 'OC\Preview\Movie', 'OC\Preview\PNG', 'OC\Preview\JPEG', 'OC\Preview\GIF', 'OC\Preview\BMP', 'OC\Preview\XBitmap', 'OC\Preview\MP3', 'OC\Preview\MP4', 'OC\Preview\TXT', 'OC\Preview\MarkDown', 'OC\Preview\PDF' ), );
                proxy.config.php: |-
                  array( 0 => '127.0.0.1', 1 => '10.0.0.0/8' ), 'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'), );
            nginx:
              enabled: true
            internalDatabase:
              enabled: false
            externalDatabase:
              enabled: true
              type: postgresql
              host: localhost:5432
              user: nextcloud
              database: nextcloud
              existingSecret:
                enabled: true
                secretName: nextcloud-pgsql-credentials
                usernameKey: username
                passwordKey: nextcloudPassword
            postgresql:
              enabled: true
              global:
                postgresql:
                  auth:
                    username: nextcloud
                    database: nextcloud
                    existingSecret: nextcloud-pgsql-credentials
                    secretKeys:
                      userPasswordKey: nextcloudPassword
                      adminPasswordKey: postgresPassword
              volumePermissions:
                enabled: true
              primary:
                podAnnotations:
                  k8up.io/backupcommand: "sh -c 'PGDATABASE=\"$POSTGRES_DB\" PGUSER=\"$POSTGRES_USER\" PGPASSWORD=\"$POSTGRES_PASSWORD\" pg_dump --clean'"
                  k8up.io/file-extension: .sql
                pgHbaConfiguration: |-
                  local all all trust
                  host all all 127.0.0.1/32 md5
                  host all nextcloud 10.0.0.0/8 md5
                initdb:
                  scripts:
                    my_init_script.sql: |
                      ALTER DATABASE nextcloud OWNER TO nextcloud;
                      GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
                      GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO nextcloud;
                persistence:
                  enabled: true
                  existingClaim: "nextcloud-postgresql"
            redis:
              enabled: false
              replica:
                replicaCount: 1
              auth:
                enabled: true
                existingSecret: nextcloud-redis-credentials
                existingSecretPasswordKey: redis_password
            cronjob:
              enabled: true
            service:
              type: ClusterIP
              port: 8080
              loadBalancerIP: nil
              nodePort: nil
            persistence:
              enabled: true
              existingClaim: nextcloud-files
              nextcloudData:
                enabled: false
                subPath:
            livenessProbe:
              enabled: false
              initialDelaySeconds: 45
              periodSeconds: 15
              timeoutSeconds: 5
              failureThreshold: 3
              successThreshold: 1
            readinessProbe:
              enabled: true
              initialDelaySeconds: 45
              periodSeconds: 15
              timeoutSeconds: 5
              failureThreshold: 3
              successThreshold: 1
            startupProbe:
              enabled: false
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
              failureThreshold: 30
              successThreshold: 1
            hpa:
              enabled: false
              cputhreshold: 60
              minPods: 1
              maxPods: 10
            metrics:
              enabled: true
              https: true
              token: "enabled"
              timeout: 10s
              image:
                tag: 0.6.0
              podLabels:
                jobLabel: nextcloud-metrics
              service:
                annotations:
                  prometheus.io/scrape: "true"
                  prometheus.io/port: "9205"
                labels:
                  jobLabel: nextcloud-metrics
              serviceMonitor:
                enabled: true
                namespace: "nextcloud"
            rbac:
              enabled: true
      syncPolicy:
        syncOptions:
          - ApplyOutOfSyncOnly=true
        automated:
          prune: true
          selfHeal: true
```

**Expected behavior**

I expected the ApplicationSet to produce an Application with the templated values rendered.

**Version**

```shell
argocd: v2.7.10+469f257.dirty
  BuildDate: 2023-07-31T23:02:18Z
  GitCommit: 469f25753b2be7ef0905a11632a6382060bcae99
  GitTreeState: dirty
  GoVersion: go1.20.6
  Compiler: gc
  Platform: linux/amd64
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
argocd-server: v2.8.0-rc7+1ee5010
  BuildDate: 2023-08-03T15:13:16Z
  GitCommit: 1ee5010d6d55c7a57fd3f3b4f0a8df893d1748bb
  GitTreeState: clean
  GoVersion: go1.20.6
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.1.0 2023-06-19T16:58:18Z
  Helm Version: v3.12.1+gf32a527
  Kubectl Version: v0.24.2
  Jsonnet Version: v0.20.0
```

If it's helpful: I installed Argo CD via the latest helm chart (argo-cd-5.42.1) with a parameter to override `global.image.tag` to `v2.8.0-rc7`, and then patched the ApplicationSet CRD with the latest version at the time of writing.
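For anyone trying to mirror the setup, that install roughly corresponds to the commands below (a sketch: the `argocd` release name, `argo` repo alias, and namespace are assumptions, not copied from my shell history):

```shell
# add the Argo helm repo and install chart version 5.42.1 with the RC image
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo/argo-cd \
  --version 5.42.1 \
  --namespace argocd \
  --set global.image.tag=v2.8.0-rc7

# patch in the latest ApplicationSet CRD from the main branch
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/crds/applicationset-crd.yaml
```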

**Logs**

These are the logs from the ApplicationSet controller pod:

time="2023-08-05T08:58:33Z" level=info msg="ArgoCD ApplicationSet Controller is starting" built="2023-08-03T15:13:16Z" commit=1ee5010d6d55c7a57fd3f3b4f0a8df893d1748bb namespace=argocd version=v2.8.0-rc7+1ee5010
time="2023-08-05T08:58:34Z" level=info msg="Starting configmap/secret informers"
time="2023-08-05T08:58:34Z" level=info msg="Configmap/secret informer synced"
time="2023-08-05T08:58:34Z" level=info msg="Starting webhook server"
time="2023-08-05T08:58:34Z" level=info msg="Starting manager"
time="2023-08-05T08:58:34Z" level=debug msg="received create event from owning an application"
time="2023-08-05T08:58:34Z" level=debug msg="received create event from owning an application"
time="2023-08-05T08:58:34Z" level=debug msg="received create event from owning an application"
time="2023-08-05T08:58:34Z" level=debug msg="received create event from owning an application"
time="2023-08-05T08:58:34Z" level=debug msg="received create event from owning an application"
panic: reflect: call of reflect.Value.Type on zero Value

goroutine 322 [running]:
reflect.Value.typeSlow({0x0?, 0x0?, 0x514065?})
        /usr/local/go/src/reflect/value.go:2610 +0x12e
reflect.Value.Type(...)
        /usr/local/go/src/reflect/value.go:2605
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x2?, {0x34af120?, 0xc000ed6d68?, 0x3027f15?}, {0x34af120?, 0xc000ed6b58?, 0xc000ebc4a0?}, 0x1?, 0x1?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:89 +0x379
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x38b3640?, {0x360c360?, 0xc000ed6d50?, 0xc00113a9e0?}, {0x360c360?, 0xc000ed6b40?, 0x10?}, 0xc00113a9e0?, 0x40?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:121 +0xb6d
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x2?, {0x38b3640?, 0xc0010943e0?, 0x336ddf7?}, {0x38b3640?, 0xc0010942c0?, 0xc000ebc490?}, 0x1?, 0x1?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:76 +0x307
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x3907620?, {0x38648e0?, 0xc001094360?, 0x4c0e760?}, {0x38648e0?, 0xc001094240?, 0x429145?}, 0xc00113ae58?, 0xf0?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:121 +0xb6d
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x2?, {0x3907620?, 0xc001008420?, 0x31fdd3d?}, {0x3907620?, 0xc0010083b0?, 0xc000ebc360?}, 0x1?, 0x1?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:76 +0x307
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x38f4240?, {0x383b0c0?, 0xc0010083f0?, 0x3269900?}, {0x383b0c0?, 0xc001008380?, 0xc000ebc268?}, 0x1?, 0xa0?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:121 +0xb6d
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x2?, {0x38f4240?, 0xc000101918?, 0x3274184?}, {0x38f4240?, 0xc000101518?, 0xc000ebc330?}, 0x1?, 0x1?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:76 +0x307
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x2?, {0x3847780?, 0xc000101918?, 0x313b013?}, {0x3847780?, 0xc000101518?, 0xc000ebc328?}, 0x1?, 0x1?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:121 +0xb6d
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x39bf580?, {0x3759de0?, 0xc000101800?, 0xc0005a6020?}, {0x3759de0?, 0xc000101400?, 0x451c16?}, 0x4042e5?, 0x60?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:121 +0xb6d
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).deeplyReplace(0x7fe66261cef8?, {0x39bf580?, 0xc0005a6030?, 0xc000600000?}, {0x39bf580?, 0xc000101400?, 0xc00113bcc0?}, 0x413d28?, 0x10?, {0x0, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:76 +0x307
github.com/argoproj/argo-cd/v2/applicationset/utils.(*Render).RenderTemplateParams(0x33cc340?, 0xc000ed6cc0?, 0x0, 0xc000ed6c00, 0x0?, {0x0, 0x0, 0x0})
        /go/src/github.com/argoproj/argo-cd/applicationset/utils/utils.go:219 +0x1cc
github.com/argoproj/argo-cd/v2/applicationset/controllers.(*ApplicationSetReconciler).generateApplications(_, {{{0x307e398, 0xe}, {0xc000c3a0f0, 0x14}}, {{0xc0006ad7e8, 0x15}, {0x0, 0x0}, {0xc000c324f0, ...}, ...}, ...})
        /go/src/github.com/argoproj/argo-cd/applicationset/controllers/applicationset_controller.go:521 +0xa64
github.com/argoproj/argo-cd/v2/applicationset/controllers.(*ApplicationSetReconciler).Reconcile(0xc001096000, {0x4c095b0, 0xc000ed6a20}, {{{0xc000c324f0, 0x6}, {0xc0006ad7e8, 0x15}}})
        /go/src/github.com/argoproj/argo-cd/applicationset/controllers/applicationset_controller.go:116 +0x185
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0xc0005476b0, {0x4c095b0, 0xc000ed69f0}, {{{0xc000c324f0?, 0x3848cc0?}, {0xc0006ad7e8?, 0xc000e76e20?}}})
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:114 +0x297
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0005476b0, {0x4c09508, 0xc000149130}, {0x356fa20?, 0xc000386380?})
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:311 +0x33a
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0005476b0, {0x4c09508, 0xc000149130})
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:266 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:227 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
        /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.0/pkg/internal/controller/controller.go:223 +0x545
```
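For what it's worth, the panic message itself comes straight from Go's `reflect` package: calling `Type()` on a zero `reflect.Value` (one that wraps no value at all, e.g. a `nil` interface) always fails this way. A standalone sketch of the failure mode, not Argo CD code:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	// reflect.ValueOf(nil) returns the zero reflect.Value.
	v := reflect.ValueOf(nil)
	fmt.Println(v.IsValid()) // false

	// This panics with the exact message from the controller logs:
	//   panic: reflect: call of reflect.Value.Type on zero Value
	fmt.Println(v.Type())
}
```

That suggests `deeplyReplace` in `applicationset/utils/utils.go` hit an unset field value while walking the template.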

As always, I know the maintainers of Argoproj are really busy, but thanks for your continued support and helpfulness :)

jessebot commented 1 year ago

I have tried this again as of today, and unfortunately it still doesn't work. I'm using Argo CD v2.8.2+dbdfc71 deployed via the helm chart, and I'm not overriding the image at all. You can see my full values.yaml for the helm chart here, as Argo CD manages itself (however, the values are in-line YAML because of this valuesObject bug).

Here's an example of a working ApplicationSet using `values: |` instead of `valuesObject`: https://github.com/small-hack/argocd-apps/blob/380610565cb2681c10d56de8e8a0d771f4cf0cd1/nextcloud/nextcloud_argocd_appset.yaml

And here's the same ApplicationSet using `valuesObject`: https://github.com/small-hack/argocd-apps/blob/9d63a5928a9f0a793b0ebd8b884c146ffc1f1295/nextcloud/nextcloud_argocd_appset.yaml

When using `valuesObject`, I see this ComparisonError in the Application's conditions when I check the Sync Error via the UI:

Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): rpc error: code = FailedPrecondition desc = Failed to unmarshal "nextcloud_argocd_appset.yaml":

The same error also appears in the Application controller's pod logs:

{"application":"nextcloud",
"dest-namespace":"nextcloud",
"dest-server":"https://kubernetes.default.svc",
"level":"info",
"msg":"Sync operation to  failed: ComparisonError: Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): rpc error: code = FailedPrecondition desc = Failed to unmarshal \"nextcloud_argocd_appset.yaml\": \u003cnil\u003e",
"reason":"OperationCompleted",
"time":"2023-09-07T13:21:04Z",
"type":"Warning"}

However, none of the pods crashed this time, so the experience is definitely better than the last time I tried it :D

crenshaw-dev commented 1 year ago

I'm not able to reproduce the panic in either 2.8.2 or 2.9.2 with the given example ApplicationSet (list generator).

The only things I changed in the spec are the project and destination namespace (changed both to default).

I observed that the appset controller successfully created the app and that the app was populated with the correct valuesObject.

jessebot commented 1 year ago

Let me give this another try later today then! :) :crossed_fingers:

jessebot commented 11 months ago

**Good news**

The list generator is working! I have tested it and it works per my reproduction instructions above :) That ApplicationSet now lives here for easier reference. Glad to see that's working 👏

By the way, my current argocd version output is as follows:

```shell
# argocd cli is installed via linuxbrew with brew install argocd
argocd: v2.9.3+6eba5be.dirty
  BuildDate: 2023-12-02T00:36:55Z
  GitCommit: 6eba5be864b7e031871ed7698f5233336dfe75c7
  GitTreeState: dirty
  GoVersion: go1.21.4
  Compiler: gc
  Platform: linux/amd64
# server is installed via the helm chart
argocd-server: v2.9.3+6eba5be.dirty
  BuildDate: 2023-12-02T00:36:55Z
  GitCommit: 6eba5be864b7e031871ed7698f5233336dfe75c7
  GitTreeState: dirty
  GoVersion: go1.21.4
  Compiler: gc
  Platform: linux/amd64
  Kustomize Version: v5.2.1 2023-10-19T20:11:23Z
  Helm Version: v3.13.2+g2a2fb3b
  Kubectl Version: v0.24.2
  Jsonnet Version: v0.20.0
```

So, the only thing left to test is the secret plugin generator, which I created from your template.

**Testing nextcloud with the appset-secret-plugin**

To do that, I've created an ApplicationSet secret plugin with Argo CD like this:

```yaml
project: argo-cd
source:
  repoURL: 'https://small-hack.github.io/appset-secret-plugin'
  targetRevision: 0.6.0
  helm:
    releaseName: appset-secret-plugin
    values: |
      secretVars:
        existingSecret: "appset-secret-vars"
      token:
        existingSecret: "appset-secret-token"
  chart: appset-secret-plugin
destination:
  server: 'https://kubernetes.default.svc'
  namespace: argocd
syncPolicy:
  automated:
    prune: true
    selfHeal: true
  syncOptions:
    - ApplyOutOfSyncOnly=true
```

and secrets like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: appset-secret-vars
  namespace: argocd
type: Opaque
stringData:
  secret_vars.yaml: |
    nextcloudHostname: test.coolwebsite.com
```

and

```yaml
apiVersion: v1
stringData:
  token: securetokenthatisreasonablycomplexforanexample
kind: Secret
metadata:
  name: appset-secret-token
  namespace: argocd
type: Opaque
```

and finally, this ConfigMap gets generated from the appset-secret-plugin helm chart, for reference:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: appset-secret-plugin
    meta.helm.sh/release-namespace: argocd
  labels:
    app.kubernetes.io/managed-by: Helm
    argocd.argoproj.io/instance: appset-secrets-plugin
  name: secret-var-plugin-generator
  namespace: argocd
data:
  baseUrl: http://appset-secret-plugin.argocd.svc.cluster.local
  token: $appset-secret-token:token
```

Now here's the same ApplicationSet from before, but with the secret plugin generator instead ([link](https://github.com/small-hack/argocd-apps/blob/main/demo/nextcloud-values-object/secret-generator/nextcloud_argocd_appset.yaml)) AND I've turned on `spec.goTemplate`:

```yaml
---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: nextcloud-web-app-set
  namespace: argocd
spec:
  goTemplate: true
  # generator allows us to source specific values from an external k8s secret
  generators:
    - plugin:
        configMapRef:
          name: secret-var-plugin-generator
        input:
          parameters:
            secret_vars:
              - nextcloudHostname
  template:
    metadata:
      name: nextcloud-web-app
    spec:
      project: nextcloud
      destination:
        server: https://kubernetes.default.svc
        namespace: nextcloud
      syncPolicy:
        syncOptions:
          - ApplyOutOfSyncOnly=true
        automated:
          prune: true
      source:
        repoURL: 'https://nextcloud.github.io/helm'
        targetRevision: 3.5.20
        chart: nextcloud
        helm:
          valuesObject:
            ingress:
              enabled: true
              className: nginx
              annotations:
                nginx.ingress.kubernetes.io/proxy-body-size: 10G
                kubernetes.io/tls-acme: "true"
                cert-manager.io/cluster-issuer: letsencrypt-staging
                nginx.ingress.kubernetes.io/enable-cors: "false"
                nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
                nginx.ingress.kubernetes.io/server-snippet: |-
                  proxy_hide_header X-Powered-By;
                  rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
                  rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
                  rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
                  rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
                  location = /.well-known/carddav {
                    return 301 $scheme://$host/remote.php/dav/;
                  }
                  location = /.well-known/caldav {
                    return 301 $scheme://$host/remote.php/dav/;
                  }
              tls:
                - secretName: nextcloud-tls
                  hosts:
                    - "{{ .nextcloudHostname }}"
            nextcloud:
              host: "{{ .nextcloudHostname }}"
              existingSecret:
                enabled: true
                secretName: nextcloud-admin-credentials
                usernameKey: username
                passwordKey: password
                tokenKey: serverinfo_token
                smtpUsernameKey: smtpUsername
                smtpPasswordKey: smtpPassword
              update: 1
              mail:
                enabled: false
              configs:
                logging.config.php: |-
                  'file', 'logfile' => 'nextcloud.log', 'loglevel' => 1, 'logdateformat' => 'F d, Y H:i:s' );
                video_previews.config.php: |-
                  true, 'enabledPreviewProviders' => array ( 'OC\Preview\Movie', 'OC\Preview\PNG', 'OC\Preview\JPEG', 'OC\Preview\GIF', 'OC\Preview\BMP', 'OC\Preview\XBitmap', 'OC\Preview\MP3', 'OC\Preview\MP4', 'OC\Preview\TXT', 'OC\Preview\MarkDown', 'OC\Preview\PDF' ), );
                proxy.config.php: |-
                  array( 0 => '127.0.0.1', 1 => '10.0.0.0/8' ), 'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'), );
            nginx:
              enabled: true
            internalDatabase:
              enabled: false
            externalDatabase:
              enabled: true
              type: postgresql
              host: localhost:5432
              user: nextcloud
              database: nextcloud
              existingSecret:
                enabled: true
                secretName: nextcloud-pgsql-credentials
                usernameKey: username
                passwordKey: nextcloudPassword
            postgresql:
              enabled: true
              global:
                postgresql:
                  auth:
                    username: nextcloud
                    database: nextcloud
                    existingSecret: nextcloud-pgsql-credentials
                    secretKeys:
                      userPasswordKey: nextcloudPassword
                      adminPasswordKey: postgresPassword
              volumePermissions:
                enabled: true
              primary:
                pgHbaConfiguration: |-
                  local all all trust
                  host all all 127.0.0.1/32 md5
                  host all nextcloud 10.0.0.0/8 md5
                initdb:
                  scripts:
                    my_init_script.sql: |
                      ALTER DATABASE nextcloud OWNER TO nextcloud;
                      GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
                      GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO nextcloud;
                persistence:
                  enabled: true
                  existingClaim: "nextcloud-postgresql"
            redis:
              enabled: false
              replica:
                replicaCount: 1
              auth:
                enabled: true
                existingSecret: nextcloud-redis-credentials
                existingSecretPasswordKey: redis_password
            cronjob:
              enabled: true
            service:
              type: ClusterIP
              port: 8080
              loadBalancerIP: nil
              nodePort: nil
            persistence:
              enabled: true
              existingClaim: nextcloud-files
              nextcloudData:
                enabled: false
                subPath:
            livenessProbe:
              enabled: false
              initialDelaySeconds: 45
              periodSeconds: 15
              timeoutSeconds: 5
              failureThreshold: 3
              successThreshold: 1
            readinessProbe:
              enabled: true
              initialDelaySeconds: 45
              periodSeconds: 15
              timeoutSeconds: 5
              failureThreshold: 3
              successThreshold: 1
            startupProbe:
              enabled: false
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
              failureThreshold: 30
              successThreshold: 1
            hpa:
              enabled: false
              cputhreshold: 60
              minPods: 1
              maxPods: 10
            metrics:
              enabled: true
              https: true
              token: "enabled"
              timeout: 10s
              image:
                tag: 0.6.0
              podLabels:
                jobLabel: nextcloud-metrics
              service:
                annotations:
                  prometheus.io/scrape: "true"
                  prometheus.io/port: "9205"
                labels:
                  jobLabel: nextcloud-metrics
              serviceMonitor:
                enabled: true
                namespace: "nextcloud"
            rbac:
              enabled: true
```

Here's the app I created via the Argo CD web interface:

```yaml
project: nextcloud
source:
  repoURL: 'https://github.com/small-hack/argocd-apps.git'
  path: demo/nextcloud-values-object/secret-generator/
  targetRevision: main
destination:
  server: 'https://kubernetes.default.svc'
  namespace: nextcloud
syncPolicy:
  syncOptions:
    - CreateNamespace=true
    - ApplyOutOfSyncOnly=true
```

and it seems to have worked! :tada:

**Bad news**

So now, to make sure this issue is dead forever (hopefully), I went ahead and tried this with mastodon, but it fails :( I'm really sick right now, but I've wanted to see this working for months, so here's a video of me trying to get it working (sorry for no subtitles and that I sound stuffy):

https://github.com/argoproj/argo-cd/assets/2389292/845f5033-efde-43c8-ae4b-c53b31dcae34

Here's the web interface:

*screenshot showing the error below*

text of error:

Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): rpc error: code = FailedPrecondition desc = Failed to unmarshal "mastodon_argocd_appset.yaml":

If I try to sync the application anyway, I get a Sync Error with this text:

ComparisonError: Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = Manifest generation error (cached): rpc error: code = FailedPrecondition desc = Failed to unmarshal "mastodon_argocd_appset.yaml":

*screenshot of the error quoted above*

Is it something to do with caching? I don't think I actually know how to clear any Argo CD cache outside of hitting the refresh/hard refresh buttons in the web interface, and those still produce the above errors :(
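For reference, the only other way I know of to force a re-render is a hard refresh via the CLI, which bypasses the repo-server's manifest cache (a sketch, using my app name from this thread):

```shell
# equivalent to the "Hard Refresh" button in the web interface
argocd app get mastodon-web-app --hard-refresh
```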

Here's the appset it says is nil: https://github.com/small-hack/argocd-apps/blob/e2114cad3bc080f6ab437626a9030df104b31a1c/mastodon/small-hack/app_of_apps/mastodon_argocd_appset.yaml

Here's the appset when it was working, just before the commit in the video where I changed `values: |` to `valuesObject:`: https://github.com/small-hack/argocd-apps/blob/02d7b9f928ad818ac464455b57e82165dbec46cd/mastodon/small-hack/app_of_apps/mastodon_argocd_appset.yaml

Perhaps this only occurs when you first use `values` and then switch to `valuesObject`? I'm not sure, but this happens if I change any of my existing Argo CD ApplicationSets to use `valuesObject`.

Here are the logs I could find in the ApplicationSet controller pod:

```
{"level":"debug","msg":"requeue: true caused by application mastodon-web-app\n","time":"2023-12-11T14:16:37Z"}
{"generator":{"plugin":{"configMapRef":{"name":"secret-var-plugin-generator"},"input":{"parameters":{"secret_vars":["mastodon_hostname","mastodon_s3_endpoint","global_cluster_issuer"]}},"template":{"metadata":{},"spec":{"destination":{},"project":""}}}},"level":"info","msg":"generated 1 applications","time":"2023-12-11T14:16:37Z"}
{"generator":{"plugin":{"configMapRef":{"name":"secret-var-plugin-generator"},"input":{"parameters":{"secret_vars":["mastodon_hostname","mastodon_s3_endpoint","global_cluster_issuer"]}},"template":{"metadata":{},"spec":{"destination":{},"project":""}}}},"level":"debug","msg":"apps from generator: [{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:mastodon-web-app GenerateName: Namespace: SelfLink: UID: ResourceVersion: Generation:0 CreationTimestamp:0001-01-01 00:00:00 +0000 UTC DeletionTimestamp:\u003cnil\u003e DeletionGracePeriodSeconds:\u003cnil\u003e Labels:map[] Annotations:map[argocd.argoproj.io/sync-options:ApplyOnly=true argocd.argoproj.io/sync-wave:3] OwnerReferences:[] Finalizers:[resources-finalizer.argocd.argoproj.io] ZZZ_DeprecatedClusterName: ManagedFields:[]} Spec:{Source:\u0026ApplicationSource{RepoURL:https://small-hack.github.io/mastodon-helm-chart,Path:,TargetRevision:5.0.0,Helm:\u0026ApplicationSourceHelm{ValueFiles:[],Parameters:[]HelmParameter{},ReleaseName:,Values:image:\n repository: ghcr.io/mastodon/mastodon\n pullPolicy: IfNotPresent\n\nmastodon:\n createAdmin:\n enabled: false\n existingSecret: mastodon-admin-credentials\n secretKeys:\n usernameKey: username\n passwordKey: password\n emailKey: email\n\n cron:\n # -- run `tootctl media remove` every week\n removeMedia:\n enabled: true\n schedule: \"0 0 * * 0\"\n\n # -- available locales: https://github.com/mastodon/mastodon/blob/main/config/application.rb#L71\n locale: en\n local_domain: mastodon.buildstars.online\n\n # -- Use of WEB_DOMAIN requires careful consideration: https://docs.joinmastodon.org/admin/config/#federation\n # You must redirect the path LOCAL_DOMAIN/.well-known/ to WEB_DOMAIN/.well-known/ as described\n # Example: mastodon.example.com\n web_domain: null\n\n # -- If set to true, the frontpage of your Mastodon server will always redirect to the first profile in the database and registrations will be disabled.\n singleUserMode: false\n\n # -- Enables \"Secure Mode\" for more details see: https://docs.joinmastodon.org/admin/config/#authorized_fetch\n authorizedFetch: false\n\n # -- Enables \"Limited Federation Mode\" for more detauls see: https://docs.joinmastodon.org/admin/config/#limited_federation_mode\n limitedFederationMode: false\n\n extraVolumes:\n - name: postgres-ca\n secret:\n secretName: mastodon-postgres-server-ca-key-pair\n defaultMode: 0440\n\n - name: postgres-client-certs\n secret:\n secretName: mastodon-postgres-mastodon-cert\n defaultMode: 0440\n\n extraVolumeMounts:\n - name: postgres-ca\n mountPath: /etc/secrets/ca\n\n - name: postgres-client-certs\n mountPath: /etc/secrets/mastodon\n\n s3:\n enabled: true\n existingSecret: \"mastodon-s3-credentials\"\n hostname: mastodon-s3.buildstars.online\n secretKeys:\n s3AccessKeyID: S3_USER\n s3AccessKey: S3_PASSWORD\n s3Bucket: BUCKET\n s3Endpoint: ENDPOINT\n s3Hostname: HOSTNAME\n\n secrets:\n # these must be set manually; autogenerated keys are rotated on each upgrade\n existingSecret: \"mastodon-server-secrets\"\n\n sidekiq:\n
workers:\n - name: all-queues\n # -- Number of threads / parallel sidekiq jobs that are executed per Pod\n concurrency: 25\n # -- Sidekiq queues for Mastodon that are handled by this worker. See https://docs.joinmastodon.org/admin/scaling/#concurrency\n # See https://github.com/mperham/sidekiq/wiki/Advanced-Options#queues for how to weight queues as argument\n queues:\n - default,8\n - push,6\n - ingress,4\n - mailers,2\n - pull\n # Make sure the scheduler queue only exists once and with a worker that has 1 replica.\n - scheduler\n\n smtp:\n auth_method: login\n ca_file: /etc/ssl/certs/ca-certificates.crt\n delivery_method: smtp\n domain: mastodon.buildstars.online\n enable_starttls: 'auto'\n from_address: toots@mastodon.buildstars.online\n openssl_verify_mode: peer\n port: 587\n reply_to: no-reply@mastodon.buildstars.online\n tls: true\n # keys must be named `server`, `login`, `password`\n existingSecret: mastodon-smtp-credentials\n\n streaming:\n port: 4000\n # -- this should be set manually since os.cpus() returns the number of CPUs on\n # the node running the pod, which is unrelated to the resources allocated to\n # the pod by k8s\n workers: 1\n # -- The base url for streaming can be set if the streaming API is deployed to\n # a different domain/subdomain.\n base_url: null\n # -- Number of Streaming Pods running\n replicas: 1\n\n web:\n port: 3000\n # -- Number of Web Pods running\n replicas: 1\n minThreads: \"5\"\n maxThreads: \"5\"\n workers: \"2\"\n persistentTimeout: \"20\"\n\n metrics:\n statsd:\n # -- Enable statsd publishing via STATSD_ADDR environment variable\n address: \"\"\n\n # Sets the PREPARED_STATEMENTS environment variable: https://docs.joinmastodon.org/admin/config/#prepared_statements\n preparedStatements: true\n\ningress:\n enabled: true\n annotations:\n kubernetes.io/tls-acme: \"true\"\n cert-manager.io/cluster-issuer: letsencrypt-prod\n # ensure that NGINX's upload size matches Mastodon's\n nginx.ingress.kubernetes.io/proxy-body-size: 40m\n ingressClassName: nginx\n hosts:\n - host: mastodon.buildstars.online\n paths:\n - path: '/'\n tls:\n - secretName: mastodon-tls\n hosts:\n - mastodon.buildstars.online\n\n\n# https://github.com/bitnami/charts/tree/main/bitnami/elasticsearch#parameters\nelasticsearch:\n # `false` will disable full-text search\n # if you enable ES after the initial install, you will need to manually run\n # RAILS_ENV=production bundle exec rake chewy:sync\n # (https://docs.joinmastodon.org/admin/optional/elasticsearch/)\n enabled: true\n master:\n replicaCount: 1\n autoscaling:\n minReplicas: 1\n data:\n replicaCount: 1\n coordinating:\n replicaCount: 1\n ingest:\n replicaCount: 1\n\nexternalDatabase:\n enabled: true\n hostname: mastodon-postgres-rw.mastodon.svc\n port: \"5432\"\n database: mastodon\n user: mastodon\n existingSecret: \"mastodon-pgsql-credentials\"\n sslmode: \"verify-full\"\n sslcert: \"/etc/secrets/mastodon/tls.crt\"\n sslkey: \"/etc/secrets/mastodon/tls.key\"\n sslrootcert: \"/etc/secrets/ca/ca.crt\"\n\n# https://github.com/bitnami/charts/tree/main/bitnami/postgresql#parameters\npostgresql:\n enabled: false\n\n# https://github.com/bitnami/charts/tree/main/bitnami/redis#parameters\nredis:\n enabled: false\n hostname: \"mastodon-redis-master\"\n port: 6379\n auth:\n # with a key of redis-password set to the password you want\n existingSecret: \"mastodon-redis-credentials\"\n\nservice:\n type: ClusterIP\n port: 80\n\nexternalAuth:\n oidc:\n enabled: false\n oauth_global:\n # -- Automatically redirect to OIDC, CAS or SAML, and 
don't use local account authentication when clicking on Sign-In\n omniauth_only: false\n\n# -- https://github.com/mastodon/mastodon/blob/main/Dockerfile#L75\n# if you manually change the UID/GID environment variables, ensure these values match:\npodSecurityContext:\n runAsUser: 991\n runAsGroup: 991\n fsGroup: 991\n\nsecurityContext: {}\n\nserviceAccount:\n # -- Specifies whether a service account should be created\n create: true\n # -- Annotations to add to the service account\n annotations: {}\n,FileParameters:[]HelmFileParameter{},Version:,PassCredentials:false,IgnoreMissingValueFiles:false,SkipCrds:false,ValuesObject:nil,},Kustomize:nil,Directory:nil,Plugin:nil,Chart:mastodon,Ref:,} Destination:{Server:https://kubernetes.default.svc Namespace:mastodon Name: isServerInferred:false} Project:mastodon SyncPolicy:\u0026SyncPolicy{Automated:nil,SyncOptions:[ApplyOutOfSyncOnly=true],Retry:nil,ManagedNamespaceMetadata:nil,} IgnoreDifferences:[] Info:[] RevisionHistoryLimit:\u003cnil\u003e Sources:[]} Status:{Resources:[] Sync:{Status: ComparedTo:{Source:{RepoURL: Path: TargetRevision: Helm:nil Kustomize:nil Directory:nil Plugin:nil Chart: Ref:} Destination:{Server: Namespace: Name: isServerInferred:false} Sources:[] IgnoreDifferences:[]} Revision: Revisions:[]} Health:{Status: Message:} History:[] Conditions:[] ReconciledAt:\u003cnil\u003e OperationState:nil ObservedAt:\u003cnil\u003e SourceType: Summary:{ExternalURLs:[] Images:[]} ResourceHealthSource: SourceTypes:[] ControllerNamespace:} Operation:nil}]","time":"2023-12-11T14:16:37Z"}
{"app":"mastodon-web-app","appSet":"mastodon-app-set","level":"info","msg":"unchanged Application","time":"2023-12-11T14:16:37Z"}
{"applicationset":{"Namespace":"argocd","Name":"mastodon-app-set"},"level":"info","msg":"end reconcile","requeueAfter":1800000000000,"time":"2023-12-11T14:16:37Z"}
```

It's hard to read in the logs above, but I wonder if the issue has to do with this part, since it says Helm is nil? (grasping at straws):

```
Sync:{Status: ComparedTo:{Source:{RepoURL: Path: TargetRevision: Helm:nil Kustomize:nil Directory:nil Plugin:nil Chart: Ref:} Destination:{Server: Namespace: Name: isServerInferred:false} Sources:[] IgnoreDifferences:[]} Revision: Revisions:[]} Health:{Status: Message:} History:[] Conditions:[] ReconciledAt:\u003cnil\u003e OperationState:nil ObservedAt:\u003cnil\u003e SourceType: Summary:{ExternalURLs:[] Images:[]} ResourceHealthSource: SourceTypes:[] ControllerNamespace:} Operation:nil}]","time":"2023-12-11T14:16:37Z"}
```

At this point, I'm also happy to give Argo CD maintainers access to my machine to take a look, as I'm not sure what else to check. Again, as always, thanks so much for taking the time to look into my issues. I really appreciate everything the maintainers and community of this project do. 💙 Also, sorry this was long-winded.

crenshaw-dev commented 11 months ago

Ah, I think I know what it is.

When you switch to `valuesObject`, i.e. change from a string to an unstructured object, you need to find every field which currently contains `{{ whatever }}` and wrap it in quotes so it's a string.

For example:

```diff
-               local_domain: {{ .mastodon_hostname }}
+               local_domain: '{{ .mastodon_hostname }}'
```

You have to do this because the JSON unmarshaller encounters `{` and interprets it as raw JSON, but you want it to interpret the field as a string containing `{`.
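To make that concrete, here's a standalone sketch of why the quotes matter (plain YAML parsing, not Argo CD specific):

```yaml
# In a `values: |` block the placeholder lives inside one big string,
# so the braces are never parsed as structure:
values: |
  local_domain: {{ .mastodon_hostname }}

# In `valuesObject` every field is parsed as structured YAML first.
# Unquoted, `{{ ... }}` reads as the start of a nested flow mapping
# and the unmarshal fails:
valuesObject:
  # local_domain: {{ .mastodon_hostname }}    <- unmarshal error
  local_domain: '{{ .mastodon_hostname }}'  # quoted: a string the templater replaces later
```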

jessebot commented 11 months ago

@crenshaw-dev you were right!! :D Let me know if you're ever in Amsterdam and I will buy you a sandwich (when I don't have covid)! Thank you so much!!