argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Argocd with kustomize & helm doesn't replace all "{{ .Release.namespace }}" references #17803

Open M0NsTeRRR opened 4 months ago

M0NsTeRRR commented 4 months ago

Describe the bug

After asking on the CNCF Slack, I was advised to open an issue here. I create an ApplicationSet with a Git generator; each generated Application points at a Kustomize resource that deploys a Helm chart. The namespace is set on the ApplicationSet via spec.template.spec.source.kustomize.namespace and spec.template.spec.destination.namespace. The Helm chart is deployed into the cluster and all resources land in the correct namespace, but the ClusterRoleBinding for the service account references the Argo CD namespace, as shown by kubectl describe:

Name:         cnpg-cloudnative-cloudnative-pg
Labels:       app.kubernetes.io/instance=cnpg-cloudnative
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=cloudnative-pg
              app.kubernetes.io/version=1.22.2
              argocd.argoproj.io/instance=cnpg-testcase
              helm.sh/chart=cloudnative-pg-0.20.2
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cnpg-cloudnative-cloudnative-pg
Subjects:
  Kind            Name                             Namespace
  ----            ----                             ---------
  ServiceAccount  cnpg-cloudnative-cloudnative-pg  argocd
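
For reference, the setup described above boils down to something like this (a sketch: everything except the two namespace fields mentioned above is illustrative; the authoritative manifests are in the test case repository linked below):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cnpg-testcase
spec:
  generators:
    - git:
        repoURL: https://github.com/M0NsTeRRR/argocd-namespace-issue
        revision: HEAD
        directories:
          - path: "*"
  template:
    metadata:
      name: "{{path.basename}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/M0NsTeRRR/argocd-namespace-issue
        targetRevision: HEAD
        path: "{{path}}"
        kustomize:
          namespace: cnpg-testcase   # spec.template.spec.source.kustomize.namespace
      destination:
        server: https://kubernetes.default.svc
        namespace: cnpg-testcase     # spec.template.spec.destination.namespace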

To Reproduce

Here is a test case repository: https://github.com/M0NsTeRRR/argocd-namespace-issue. The problem is not specific to this repository; I have also run into it with the Grafana Mimir distributed Helm chart. Here is a direct link to the affected Helm template: https://github.com/cloudnative-pg/charts/blob/main/charts/cloudnative-pg/templates/rbac.yaml#L375.
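
The kind of kustomization involved looks roughly like this (a sketch assuming the chart name and version shown in the labels above; the repo URL is my assumption of the upstream chart repository):

```yaml
# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: cloudnative-pg
    repo: https://cloudnative-pg.github.io/charts
    version: 0.20.2
    releaseName: cnpg-cloudnative
    # helmCharts entries also accept a `namespace` field, which is passed to
    # `helm template --namespace` and is what {{ .Release.Namespace }} renders
    # to. Without it, helm falls back to its default release namespace, which
    # would explain why the rendered subject below says "default" locally and
    # "argocd" when rendered by Argo CD.
```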

Expected behavior

The ClusterRoleBinding should reference the application namespace. I found this kustomize issue, which I could reproduce using the linked test case repository: https://github.com/kubernetes-sigs/kustomize/issues/5566. I don't know whether I'm hitting the same issue.
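
If the kustomize issue linked above is the cause, it is likely the builtin namespace transformer's handling of role-binding subjects: by default it only rewrites the namespace of ServiceAccount subjects literally named `default`. As far as I know (an assumption, untested against this chart), recent kustomize versions let you override that with an explicit transformer config instead of the plain `namespace:` field:

```yaml
# transformers/namespace.yaml — sketch; assumes a kustomize version where
# setRoleBindingSubjects is available on the builtin NamespaceTransformer.
# The target namespace is illustrative.
apiVersion: builtin
kind: NamespaceTransformer
metadata:
  name: set-namespace
  namespace: cnpg-testcase
unsetOnly: false
# allServiceAccounts rewrites subjects[].namespace for every ServiceAccount
# subject of (Cluster)RoleBindings, not only those named "default"
setRoleBindingSubjects: allServiceAccounts
```

This would be referenced from the kustomization via `transformers: [transformers/namespace.yaml]`.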

Screenshots

Version

I've also tested with Argo CD 2.9.11.

argocd@argocd-application-controller-0:~$ argocd version
argocd: v2.10.6+d504d2b
  BuildDate: 2024-04-05T00:27:47Z
  GitCommit: d504d2b1d92f0cf831a124a5fd1a96ee29fa7679
  GitTreeState: clean
  GoVersion: go1.21.3
  Compiler: gc
  Platform: linux/amd64

Logs

Output of `kubectl kustomize --enable-helm` from the GitHub test case repository:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: cnpg-cloudnative
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cloudnative-pg
    app.kubernetes.io/version: 1.22.2
    helm.sh/chart: cloudnative-pg-0.20.2
  name: cnpg-cloudnative-cloudnative-pg
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnpg-cloudnative-cloudnative-pg
subjects:
- kind: ServiceAccount
  name: cnpg-cloudnative-cloudnative-pg
  namespace: default

Argo CD application controller logs:

time="2024-04-10T20:29:05Z" level=info msg="Normalized app spec: {\"status\":{\"conditions\":[{\"lastTransitionTime\":\"2024-04-10T20:29:04Z\",\"message\":\"MutatingWebhookConfiguration/cnpg-mutating-webhook-configuration is part of applications argocd/cnpg and cnpg-testcase\",\"type\":\"SharedResourceWarning\"},{\"lastTransitionTime\":\"2024-04-10T20:29:05Z\",\"message\":\"ValidatingWebhookConfiguration/cnpg-validating-webhook-configuration is part of applications argocd/cnpg and cnpg-testcase\",\"type\":\"SharedResourceWarning\"}]}}" application=argocd/cnpg
time="2024-04-10T20:29:05Z" level=info msg="Skipping auto-sync: most recent sync already to fc9d3adc7b208db22be68aa7f401e0fe53c0fb70" application=argocd/cnpg
time="2024-04-10T20:29:05Z" level=info msg="Update successful" application=argocd/cnpg
time="2024-04-10T20:29:05Z" level=info msg="Reconciliation completed" application=argocd/cnpg dedup_ms=0 dest-name= dest-namespace=cnpg dest-server="https://kubernetes.default.svc" diff_ms=7 fields.level=1 git_ms=57 health_ms=0 live_ms=3 patch_ms=35 setop_ms=0 settings_ms=0 sync_ms=0 time_ms=235
time="2024-04-10T20:29:06Z" level=info msg="Refreshing app status (controller refresh requested), level (0)" application=argocd/cnpg-testcase
time="2024-04-10T20:29:06Z" level=info msg="No status changes. Skipping patch" application=argocd/cnpg-testcase
time="2024-04-10T20:29:06Z" level=info msg="Reconciliation completed" application=argocd/cnpg-testcase dest-name= dest-namespace=cnpg-testcase dest-server="https://kubernetes.default.svc" fields.level=0 patch_ms=0 setop_ms=0 time_ms=26
time="2024-04-10T20:29:06Z" level=info msg="Refreshing app status (controller refresh requested), level (0)" application=argocd/cnpg-testcase
time="2024-04-10T20:29:06Z" level=info msg="No status changes. Skipping patch" application=argocd/cnpg-testcase
...

The application stays in a Progressing / Degraded state.

panteparak commented 4 months ago

Related to https://github.com/kubernetes-sigs/kustomize/issues/3815 and https://github.com/kubernetes-sigs/kustomize/issues/5566.

Workaround:

Use sed at the transformer level to replace the problematic strings in the rendered YAML. Example: https://github.com/panteparak/kustomize-sed-transformer
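
Setting the linked transformer aside, the general idea can be illustrated with a plain pipeline (a sketch, not the linked project's actual interface; the namespace values and the ServiceAccount name are illustrative):

```shell
# Take rendered YAML (in practice the output of `kubectl kustomize --enable-helm .`)
# and rewrite the namespace that {{ .Release.Namespace }} was rendered with.
printf 'subjects:\n- kind: ServiceAccount\n  name: cnpg-cloudnative-cloudnative-pg\n  namespace: default\n' \
  | sed 's/namespace: default/namespace: cnpg-testcase/g'
# last line of output: "  namespace: cnpg-testcase"
```

In a real setup the rendered stream would then be piped into `kubectl apply -f -`, or the substitution wired in as a kustomize transformer as in the linked repository.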