dag-andersen / argocd-diff-preview

Tool for rendering manifest changes on pull requests.

Templated values are not evaluated in ApplicationSets before executing `helm template` #29

Closed aokomorowski closed 2 months ago

aokomorowski commented 3 months ago

Hello there! First of all, I appreciate your work on this tool, it's easy to use and pretty straightforward. However, I encountered an issue.

Our ArgoCD ApplicationSets rely heavily on templating based on the labels of the registered ArgoCD clusters. See this example:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: datadog
  namespace: argocd
spec:
  generators:
    - clusters: {}
  template:
    metadata:
      name: "datadog-{{name}}"
    spec:
      project: "datadog"
      sources:
        - repoURL: https://helm.datadoghq.com
          chart: datadog-operator
          targetRevision: 1.2.2
          helm:
            releaseName: "datadog"
            valueFiles:
              - $values/deployments/production/datadog/values/{{metadata.labels.datadog_instance}}.yaml
        - repoURL: git@github.com:redacted.git
          targetRevision: main
          ref: values
        - repoURL: git@github.com:redacted.git
          targetRevision: main
          path: deployments/production/datadog/manifests
          directory:
            recurse: true
            include: "{credentials,agents}/{{metadata.labels.datadog_instance}}.yaml"
      destination:
        server: "{{server}}"
        namespace: monitoring

This ApplicationSet installs the Datadog operator on every cluster registered in ArgoCD, with values chosen based on a label (Datadog offers multiple sites, e.g. US/EU/JP). This approach works in ArgoCD (by working I mean it gets templated and applied correctly). However, when running argocd-diff-preview I'm encountering this error:

āŒ Failed to process application: datadog-in-cluster with error: 
Failed to load target state: failed to generate manifest for source 1 of 3: rpc error: code = Unknown desc = `helm template . --name-template datadog --namespace monitoring --kube-version 1.30 --values <path to cached source>/deployments/production/datadog/values/{{metadata.labels.datadog_instance}}.yaml <api versions removed> --include-crds` failed exit status 1: Error: open <path to cached source>/deployments/production/datadog/values/{{metadata.labels.datadog_instance}}.yaml: no such file or directory

It seems to me that templating based on the metadata is not evaluated correctly.
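For context, the cluster generator substitutes `{{name}}`, `{{server}}`, and `{{metadata.labels.*}}` parameters taken from each registered cluster secret before the Application is rendered. A minimal Python sketch of that substitution (illustrative only, not the actual ApplicationSet controller code; the parameter values are hypothetical):

```python
import re

def render(template: str, params: dict) -> str:
    """Replace {{key}} placeholders with parameters derived from a
    cluster secret's name, server, and labels (illustrative sketch)."""
    return re.sub(
        r"\{\{\s*([\w.\-]+)\s*\}\}",
        lambda m: str(params.get(m.group(1), m.group(0))),
        template,
    )

# Parameters as the cluster generator would derive them from one secret
params = {
    "name": "cluster-01",
    "server": "https://kubernetes.default.svc",
    "metadata.labels.datadog_instance": "eu",
}

resolved = render(
    "$values/deployments/production/datadog/values/"
    "{{metadata.labels.datadog_instance}}.yaml",
    params,
)

# With no cluster secret applied there are no parameters, so the
# placeholder survives verbatim -- the literal path helm fails to open.
unresolved = render(
    "$values/deployments/production/datadog/values/"
    "{{metadata.labels.datadog_instance}}.yaml",
    {},
)
```

This matches the error above: the `{{metadata.labels.datadog_instance}}` placeholder appears untouched in the `helm template --values` path, which suggests the generator never saw a cluster secret carrying that label.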

I hope to dive deeper into the issue after my holidays.

dag-andersen commented 3 months ago

Hi - Thank you for reporting this! 👍🏻 This is an interesting problem that I haven't encountered before.

My initial debugging questions:

1. What ArgoCD version are you running in your live cluster? argocd-diff-preview picks the newest version unless a specific version is specified.
2. How do you provide argocd-diff-preview with cluster credentials? (Just checking that the labels are actually on the secret resources before the credentials are saved under /secrets.)
3. Does your live ArgoCD instance use any special settings in the argocd-cm or argocd-cmd-params-cm ConfigMaps? argocd-diff-preview mostly uses default ArgoCD settings, but I'll add support for more options/customization if needed.

aokomorowski commented 2 months ago

Sorry for the late response, I got pulled straight into a whirl of projects right after my PTO 😅

Ad. 1:

{
    "Version": "v2.11.2+25f7504",
    "BuildDate": "2024-05-23T13:32:13Z",
    "GitCommit": "25f7504ecc198e7d7fdc055fdb83ae50eee5edd0",
    "GitTreeState": "clean",
    "GoVersion": "go1.21.9",
    "Compiler": "gc",
    "Platform": "linux/arm64",
    "KustomizeVersion": "v5.2.1 2023-10-19T20:13:51Z",
    "HelmVersion": "v3.14.4+g81c902a",
    "KubectlVersion": "v0.26.11",
    "JsonnetVersion": "v0.20.0"
}

Ad. 2: That might be the cause I overlooked: we're using Argo to bootstrap itself, i.e. we have an Application resource that creates the cluster Secrets. We've structured this in the repository like this:

clusters
├── development
│   ├── application.yaml
│   └── cluster-secrets
│       └── cluster-01.yaml
└── production
    ├── application.yaml
    └── cluster-secrets
        ├── cluster-01.yaml
        └── cluster-02.yaml

I think that to get argocd-diff-preview working with that setup I have to apply these Application resources/cluster Secrets first, right?

An example cluster Secret looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: cluster-01
  namespace: argocd
  annotations:
    managed-by: argocd.argoproj.io
  labels:
    argocd.argoproj.io/secret-type: cluster
    cluster_environment: development
    datadog_instance: "eu"
type: Opaque
stringData:
  name: "cluster-01"
  server: "https://kubernetes.default.svc"
  config: |
    {
      "tlsClientConfig": {
        "insecure": false
      }
    }
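Given the `datadog_instance: "eu"` label on this secret, the generator's parameters should resolve the `valueFiles` entry in the ApplicationSet above to something like this (sketch, derived from the label and path shown earlier):

```yaml
helm:
  releaseName: "datadog"
  valueFiles:
    - $values/deployments/production/datadog/values/eu.yaml
```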

Ad. 3: Nothing that would relate to the issue, mostly OIDC provider setup.

dag-andersen commented 2 months ago

Hi again :)

I think that to get argocd-diff-preview working with that setup I have to apply these Application resources/cluster Secrets first, right?

Yes! You will need to provide argocd-diff-preview with the necessary credentials to access your external clusters.

There is a section in the README that describes how to place secrets in the /secrets folder: README#Private repositories and Helm charts

If your pipeline has access to your live Argo CD instance, you can do something like this:

jobs:
  build:
    ...
    steps:
      ...
    - name: Prepare secrets
      run: |
        mkdir secrets
        kubectl -n argocd get secret <the-cluster-secret> -o yaml > secrets/cluster-credentials.yaml
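Since your repo registers several clusters, a variant of that step could grab all cluster secrets at once via the label Argo CD puts on them (a sketch, assuming the tool accepts a single YAML file containing a `v1/List` of Secrets; otherwise export them one by one):

```yaml
    - name: Prepare secrets
      run: |
        mkdir -p secrets
        # argocd.argoproj.io/secret-type=cluster marks Argo CD cluster secrets
        kubectl -n argocd get secrets \
          -l argocd.argoproj.io/secret-type=cluster \
          -o yaml > secrets/cluster-credentials.yaml
```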

You can verify that argocd-diff-preview applies the secrets correctly by checking whether the tool outputs 🤫 Applied 1 secrets

Let me know if this works as you expect or if you still experience issues 🚀

aokomorowski commented 2 months ago

It makes total sense. I'll try to set it up and let you know the result. Thanks!

aokomorowski commented 2 months ago

Yup, the issue with variables not being interpolated was caused by the cluster secrets not being applied. Thanks for your help ❤️