argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Helm lookup Function Support #5202

Open rajivml opened 3 years ago

rajivml commented 3 years ago

Hello,

Happy new year Guys !!

So, I have this requirement to build the image path by reading the dockerRegistryIP value from a ConfigMap, so that I don't have to ask the user explicitly where the registry is located.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}

Helm 3 introduced support for this with a lookup function through which a ConfigMap can be read at render time, like this:

{{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.registryURL}}

But the lookup function returns nil when templates are rendered using "helm install --dry-run" or "helm template", so when you access a field on nil, you will see an error like this:

"nil pointer evaluating interface {}.registryURL Use --debug flag to render out invalid YAML"

The solution proposed on Stack Overflow is to use "helm template --validate" instead of "helm template".

Can you guys add support for this?

Right now I am populating the docker registry IP like this, but with this kustomize-plugin approach I am losing the ability to render the values.yaml file as a config screen through which the user can override certain values, i.e. the fix for one issue has led to another issue.

kubectl -n argocd get cm argocd-cm -o yaml
apiVersion: v1
data:
  configManagementPlugins: |
    - name: kustomized-helm
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | helm template $ARGOCD_APP_NAME --namespace $ARGOCD_APP_NAMESPACE . > all.yaml && kustomize build"]
    - name: kustomized
      generate:
        command: [sh, -c]
        args: ["DOCKER_REG_IP=$(kubectl -n registry get svc registry -o jsonpath={.spec.clusterIP}) && sed -i \"s/DOCKER_REGISTRY_IP/$DOCKER_REG_IP/g\" kustomization.yaml | kustomize build"]
jessesuen commented 3 years ago

Note that even if we allowed configuring Argo CD to append the --validate arg when running the helm template command, the repo-server would still need to be given API server credentials (i.e. mount a service account token) in order to perform the lookup. We would never give Kubernetes credentials to the repo-server by default (though you are welcome to modify your deployment in your environment), so there would not be value in adding a --validate option configuration.

Since you would need a customized repo-server anyway, you can already accomplish this today with a wrapper script around the helm binary which appends the argument (coupled with the service account given to the repo-server).
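For illustration, such a wrapper might look roughly like this (a sketch, assuming the real binary has been renamed to helm.bin inside a customized repo-server image):

#!/bin/sh
# Hypothetical wrapper installed as /usr/local/bin/helm in a custom repo-server image.
# Appends --validate only to "template" invocations; everything else passes through.
case "$1" in
  template) exec /usr/local/bin/helm.bin "$@" --validate ;;
  *)        exec /usr/local/bin/helm.bin "$@" ;;
esac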

kvaps commented 3 years ago

@jessesuen, I guess this workaround is only possible with the in-cluster configuration and won't work for external clusters.

jessesuen commented 3 years ago

Ah yes, you are right about that unfortunately.

Gowiem commented 3 years ago

@jessesuen just coming across this issue and I'm running into the same. There is a duplicate issue here as well: https://github.com/argoproj/argo-cd/issues/3640

You mentioned two things to accomplish as a workaround for Argo not supporting this:

  1. Add a service account for Argo + mount the token -- This is straightforward and would be easy to implement.
  2. "using a wrapper script around the helm binary which appends the argument in the script" -- This I don't really get.

Can you expand more on the wrapper script? How would one inject that into a standard Argo deployment?

dvcanton commented 3 years ago

Hi @jessesuen, @Gowiem,

any updates on this?

Thanks in advance, Dave

Gowiem commented 3 years ago

@dvcanton I tried the Argo plugin / wrapper script approach that @jessesuen mentioned after asking about it directly in the Argo Slack. You can find more about that by looking at the plugins documentation.

Unfortunately, that solution seemed overly hacky and pretty esoteric to me and my team. Instead we've now moved towards not using lookup in our charts and copying / pasting certain configuration manually instead. It's not great and I wish Argo would support this, but it doesn't seem like there is enough momentum, unfortunately.

randrusiak commented 3 years ago

A lot of charts use built-in objects such as Capabilities to provide backward compatibility for old APIs. Capabilities.APIVersions works properly only with the --validate flag, because without it only API group versions are returned, not the available resources. There is an example in the grafana chart: https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/ingress.yaml#L7
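The kind of check in question looks roughly like this (a sketch patterned on the linked grafana template):

{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress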

kvaps commented 3 years ago

As for Capabilities, the helm template command supports setting capabilities manually, ref https://github.com/argoproj/argo-cd/issues/3594

randrusiak commented 3 years ago

@kvaps take a look at the example which I posted. {{- $newAPI := .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" -}} returns false because without the --validate flag only API group versions are returned.

kvaps commented 3 years ago

@randrusiak, it works for me:

# helm template . --set ingress.enabled=true --include-crds > /tmp/1.yaml
# helm template . --api-versions networking.k8s.io/v1/Ingress --set ingress.enabled=true --include-crds > /tmp/2.yaml
# diff -u /tmp/1.yaml /tmp/2.yaml
@@ -399,7 +399,7 @@
           emptyDir: {}
 ---
 # Source: grafana/templates/ingress.yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: RELEASE-NAME-grafana
@@ -417,9 +417,12 @@
         paths:

           - path: /
+            pathType: Prefix
             backend:
-              serviceName: RELEASE-NAME-grafana
-              servicePort: 80
+              service:
+                name: RELEASE-NAME-grafana
+                port:
+                  number: 80

My idea was that ArgoCD could provide the repo-server with the list of api-versions from the destination Kubernetes API, e.g.:

kubectl api-versions

will return all available API versions for the cluster.

Not sure if lookup function support can be implemented with the same simplicity, as it already requires direct access to the cluster.

randrusiak commented 3 years ago

@kvaps I understand how it works on the helm level, but I don't know how to pass this additional flag (--api-versions networking.k8s.io/v1/Ingress) via the Argo manifest. It's still unclear to me. Could you explain that to me? I'd appreciate your help.

kvaps commented 3 years ago

Actually my note was more for contributors than for users :) They could implement passing api-versions from the argocd-application-controller to the argocd-repo-server via an API call.

At the current stage, I think there is nothing you can do. The only workaround is to add a serviceAccount to your repo-server and use the --validate flag for helm template (you would need to create a small shell wrapper script for the helm command, or use a custom plugin). Unfortunately this is less secure and works only with a single cluster (the current one).

Another option is to hardcode those parameters somewhere, e.g. save the output of the following command:

kubectl api-versions | awk '{printf " --api-versions " $1 } END{printf "\n"}' 

And pass it to helm in whatever way suits you, e.g. you can still use a wrapper script for helm, something like this:

cat /usr/local/bin/helm
#!/bin/sh
exec /usr/local/bin/helm.bin $HELM_EXTRA_ARGS "$@"

where $HELM_EXTRA_ARGS is an environment variable on the repo-server containing the saved --api-versions flags,

or use a custom plugin.

jgoeres commented 3 years ago

We also ran into the issue with the non-working lookup function. The background is that we want to make sure that for a certain service a secure random password is generated instead of having a hardcoded default. If desired, the user can explicitly set their own password, but most people don't. Since we are using Helm's random function, the password is newly generated with each helm upgrade (resp. helm template), so it would not remain stable. So we use the lookup function to check if the secret holding the password already exists and only generate the password if it doesn't; effectively, it is only generated on the initial "helm install". With the non-working lookup function in Argo, the password is regenerated on each sync, wreaking quite some havoc, as you might guess.
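For context, the lookup-or-generate pattern looks roughly like this (a sketch; my-app-password is a placeholder secret name):

{{- /* sketch: reuse the existing secret if present, otherwise generate a new password */ -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-app-password" -}}
apiVersion: v1
kind: Secret
metadata:
  name: my-app-password
data:
  {{- if $existing }}
  password: {{ index $existing.data "password" }}
  {{- else }}
  password: {{ randAlphaNum 32 | b64enc }}
  {{- end }}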

Our goal is to keep the Helm chart usage as simple as possible and require as few parameters as possible for simple installations. So I would like to keep "generate a secure (and stable) random password" as the default for "pure" Helm usage. Is there a way to find out in the Helm chart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.

krutsko commented 2 years ago

Any update on this issue?

sarahhenkens commented 2 years ago

Ran into this same problem today :( more context in https://cloud-native.slack.com/archives/C01TSERG0KZ/p1635024460105000

mosheavni commented 2 years ago

Same thing happens with aws-load-balancer-controller's mutating webhook that defines tls key, cert and CA: https://github.com/aws/eks-charts/blob/f4be91b5ae4a2959e821940a77d50dd0424841c1/stable/aws-load-balancer-controller/templates/_helpers.tpl It can't reuse the previously defined keys if it can't access the cluster, thus producing an Argo app that's always out of sync

Phil0ctetes commented 2 years ago

Ran into this today :(

Do you have any timeline when the lookup will be available?

Phil0ctetes commented 1 year ago

Ran into this today :(

Do you have any timeline when the lookup will be available?

It looks like you have overlooked my question, so I wanted to ask again. Is there any plan to get this in, and when?

richie-tt commented 1 year ago

I ran into the same issue, but maybe the lack of this function is a good reason to move to Flux v2, which already supports it:

https://github.com/fluxcd/helm-operator/issues/335

13013SwagR commented 1 year ago

FYI, this project provides a decent workaround https://github.com/kuuji/helm-external-val

chicco785 commented 1 year ago

FYI, this project provides a decent workaround https://github.com/kuuji/helm-external-val

@13013SwagR how do you use this plugin inside a template for ArgoCD? Can you provide an example?

Thx

chkp-alexgl commented 1 year ago

Is there a way to find out in the helmchart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.

In case this is still relevant for anyone (@jgoeres ?), a workaround we found for a similar issue is to perform a lookup for Namespace resources; under plain 'helm template' the result is empty.
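A sketch of that detection (assuming the chart wants to force an explicit .Values.password when rendered without cluster access):

{{- /* sketch: lookup returns an empty result under plain "helm template" (no API server access) */ -}}
{{- $namespaces := lookup "v1" "Namespace" "" "" -}}
{{- if empty $namespaces.items }}
{{- $_ := required "set .Values.password explicitly when rendering without cluster access" .Values.password }}
{{- end }}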

chicco785 commented 1 year ago

Is there a way to find out in the helmchart that we are actually running inside Argo? That would allow me to react to this and add a validation that enforces explicitly setting a password in Argo-based deployments.

In case this is still relevant for anyone (@jgoeres ?), a workaround we found for a similar issue is to perform a lookup for Namespace resources. Under 'helm template' it is empty.

ciao @chkp-alexgl, long time no talk, how are you? :)

jkroepke commented 1 year ago

Hi all,

I also ran into this problem today. I had the use case of fetching an annotation from an existing service account.

Since the lookup function does not work, I created a Helm plugin to mitigate the issue, at least for me.

Project URL: https://github.com/jkroepke/helm-kubectl

It uses the Helm plugin downloader syntax to integrate cleanly into ArgoCD.

Example:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cortex
spec:
  project: default
  source:
    helm:
      fileParameters:
        - name: "config.alertmanager.storage.azure.user_assigned_id"
          path: "kubectl://infra-cortex/sa/cortex/jsonpath={.metadata.annotations.azure\\\\.workload\\\\.identity/client-id}"
```

Check out the README for installation and usage.

marcofranssen commented 1 year ago

I was also looking for a solution to use helm lookup, or maybe some equivalent solution using Kustomize. Our use case looks like this:

To make it more practical:

From a security point of view I don't really see a problem when this is about an in-cluster target. For other clusters this might require some OIDC integration, I suppose.

Even if I do the helm install from my local machine there is not really an issue with this approach, as a user who is able to run helm install locally already has permissions on the cluster to do this (an RBAC cluster-role on my ConfigMap).

Fran-Rg commented 1 year ago

Is there any recommended workaround from the argocd peeps to achieve the same kind of idea as what @marcofranssen is describing? Instead of saving values in ConfigMaps, is there an option to save the values in ArgoCD that could be loaded inside the template somehow?

hahasheminejad commented 1 year ago

This is not ideal, but we ended up generating a cluster-variables ConfigMap (values populated by the cluster-bootstrapper script) and installing it with the Argo Helm chart (extraObjects). This custom ConfigMap then gets mapped into the repo-server as environment variables.

extraObjects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-variables
      namespace: argo-cd
    data:
      awsAccount: "123456789"
      awsRegion: "region"
      clusterName: "abc"

repoServer:
  envFrom:
    - configMapRef:
        name: cluster-variables

Finally, all data fields become available in the repo server and you can write a custom plugin to substitute and deploy helm charts. Something like:

configManagementPlugins: |
  - name: helm-envsubst
    init:
      command: ["/bin/sh", "-c"]
      args: ["helm dependency build"]
    generate:
      command: ["sh", "-c"]
      args: ['helm template $ARGOCD_APP_NAME .  --namespace $ARGOCD_APP_NAMESPACE --set ${ARGOCD_ENV_APP_VALUES_FILES} | /usr/local/bin/envsubst ']

Note: I, unfortunately, haven't had time to write the plugin in a way that it works correctly with the value files/argo app values.

akessner commented 1 year ago

Hi @hahasheminejad, in which file did you put this block of code?

  repoServer:
    envFrom:
      - configMapRef:
          name: cluster-variables
hahasheminejad commented 1 year ago

Hi @akessner ,

It is defined in the Helm values file that you use to override the defaults, here: https://github.com/argoproj/argo-helm/blob/main/charts/argo-cd/values.yaml#L1849

If you don't deploy ArgoCD via Helm, you can add (or patch) those lines in the repo-server's Deployment manifest.
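For the non-Helm case, a sketch of such a patch (assuming the default argocd-repo-server deployment and container names, and the cluster-variables ConfigMap from above):

kubectl -n argocd patch deployment argocd-repo-server --patch '
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          envFrom:
            - configMapRef:
                name: cluster-variables
'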

YuriMcQueen commented 1 year ago

I ran into the same issue. I am using a Helm chart that auto-generates a TLS secret and checks whether it already exists with the lookup function. The Helm chart is in a git repository, so every commit to the repository triggers an autosync from ArgoCD, updating the TLS secret.

akessner commented 1 year ago

If I understand correctly, this solution doesn't work if you want to update the variables at runtime. I wonder if there is a way to do it with kustomize instead of helm?

Fran-Rg commented 1 year ago

I'll share my method, which uses a Helm dependency. I created a "data" Helm chart which doesn't create any resources; instead it defines a series of template functions. Since "Parent charts and subcharts can share templates. Any defined block in any chart is available to other charts." (see https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#sharing-templates-with-subcharts ), I added the data Helm chart as a dependency and used the templates from the data chart to get some centralised values.
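A minimal sketch of that approach (the chart name "data" and the value it returns are made up):

{{- /* templates/_shared.tpl inside the hypothetical "data" subchart */ -}}
{{- define "data.dockerRegistryIP" -}}
10.96.0.10
{{- end -}}

and then in the parent chart:

image: {{ printf "%s/%s:%s" (include "data.dockerRegistryIP" .) .Values.image.repository .Values.image.tag }}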

otherguy commented 1 year ago

We would like to use (randAlphaNum 24 | nospace) in our Helm charts to generate temporary passwords to allow two containers in the same pod to communicate with each other. Without the lookup function, we can only use diffing customization, which is kind of flaky.

Allowing the Helm lookup function would be useful to us.

jland-redhat commented 1 year ago

Wanted to add my voice to this. It would be a great thing to have; we're trying to get some basic info about the cluster, such as the domain, to build out some URLs that we can add to our UI.

gete76 commented 1 year ago

+1 lookup function needed for getting a list of namespaces

SRF-Audio commented 1 year ago

Another vote for this, and another instance of a frustrating, hacky limitation between all of these k8s ecosystem tools that are supposed to make things "easier."

yevon commented 1 year ago

Same issue here. Many Helm charts auto-generate passwords or tokens and there is no way to specify them. It would be nice to have support for this. I don't know how to work around this right now.

jlamande commented 1 year ago

+1

Does anyone know how ArgoCD runs helm deployments? (sorry, new to argocd)

I think I found it: https://github.com/argoproj/argo-cd/blob/master/reposerver/repository/repository.go#L1144

As long as helm template or helm install/upgrade --dry-run does not contact the Kubernetes API server, lookup will never work (looking up a ConfigMap will always return an empty map).

I suspect ArgoCD does some kind of helm template ... | kubectl apply (in Go) to compute things like diffs. In that case lookups will never work.

akessner commented 1 year ago

Does anyone know how ArgoCD is running helm deployments ? (sry, new to argocd)

That is precisely the problem. ArgoCD uses --dry-run to know what needs to be changed, and so it never has access to the data for the lookup.

yevon commented 1 year ago

I had two main reasons for using the lookup function:

1. Reading other automatically generated secret values from other charts:

I just changed some behaviour so the pod initializes environment variables from the secret directly, without needing the lookup, so it won't be a problem anymore. If the secret changes, the health probe fails, restarting the pod and reading the new values:

env:
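(The original snippet is truncated above; as an illustration, with placeholder secret and key names, it would look something like this:)

env:
  - name: APP_TOKEN                 # hypothetical variable name
    valueFrom:
      secretKeyRef:
        name: other-chart-secret    # placeholder: the secret generated by the other chart
        key: token                  # placeholder key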

2. Auto-generate token or passwords the very first time

I was using the lookup function to know whether the token was empty or not. I still have this problem, but there should be some workaround for this too. Does anyone know how to do this without helm?

Anyway, it seems that helm is about to support --dry-run=server, which would allow the "lookup" function to work properly, but there are some security implications. I will try to avoid the use of lookup as much as I can, because of the security implications and because debugging the lookup function is a nightmare; it is kind of hacky.

crenshaw-dev commented 1 year ago

Yep, looks like --dry-run=server might happen in Helm (hopefully for helm template).

If that happens, we're a step closer to lookup support in Argo CD.

We now just need to figure out how to give Helm access to the destination cluster's resources without simply granting all Applications access to everything.

Whatever access control solution we land on should probably mimic whatever we implement with the proposed dynamic parameters feature (basically Helm lookup but in the Application manifest).

I need to think about it.... how can we give Helm access to resources on the destination cluster, but only the resources the App would have access to anyway? I can imagine some transformation from AppProject resource allow/deny lists to a k8s ClusterRole+Roles. But that’s complicated.

Do we just need to finally implement Project-level ServiceAccount impersonation?

I have more questions than answers right now, but I think that Helm PR is a huge step forward.

akessner commented 1 year ago

If ArgoCD can align the permission model with something like Rancher for the helm charts it will be a good first step.

crenshaw-dev commented 1 year ago

Another idea for k8s API access:

The Argo CD Application controller already has a full cache of managed resources on a destination cluster.

We should implement an API that behaves like the k8s API but limits access to only GET and only the resources allowed within a given project.

It would enforce project restrictions by requiring that the requester send a project name. It would apply the usual project restrictions to that resource and then, if the resource is permitted in the project, return the resource from the cluster cache.

When rendering manifests, the repo-server should know which application it's rendering for. So it can send the project name to the "fake k8s API" to enforce project restrictions.

This has some advantages compared to SA impersonation:

  1. It's invisible to the user. There's no need to create or maintain a read-only role mimicking a project's restrictions, or configure a project to use a particular SA. Let's be real, lots of people would just grant read-all (or maybe read-all-but-secrets) and call it a day.
  2. It's faster. Instead of reaching all the way out to the destination cluster to look up resources, it can just pull the resource from the cache (I'm asserting for the moment that the whole resource is cached; I need to validate that claim).
  3. It exposes an API for which there's already a lot of tooling - if we found other uses for this "cache API," it would be easy to use.

Disadvantages:

  1. It's a bit of a chore to implement. Besides writing the actual logic to implement an API, we need a way to handle requests across sharded controllers (proxy?) and secure the connection (more auto-generated TLS certs).
  2. It is one less reason to implement SA impersonation, a feature which lots of folks want for other reasons.

Let's call this the "cache-backed fake k8s API" option - CBFKA for short (pithy, eh). I'm not sold on it, but I think it's worth considering.

leoluz commented 1 year ago

Another idea for k8s API access: The Argo CD Application controller already has a full cache of managed resources on a destination cluster.

Throwing out a slightly different idea here: what if we implement and provide this functionality as part of a new Helm CMP instead of doing it in the repo-server? That way we would avoid bringing more complexity, and possibly reduce security risks, to the repo-server.

crenshaw-dev commented 1 year ago

@leoluz the CMP would still need a mechanism to access k8s to perform the lookups. Using a CMP would allow the user to make that decision (e.g. they could implement their own mapping of kube configs to AppProjects), but if we wanted something like a proxy or a mock k8s API, we'd still need to implement that in the core product.

crenshaw-dev commented 10 months ago

Looks like the helm PR is about to be merged, and there's a WIP proposal for project-level account impersonation, which should give us the ability to take advantage of lookup in Argo CD. Please review/comment there: https://github.com/argoproj/argo-cd/pull/14255

crenshaw-dev commented 10 months ago

Okee-doke, helm template --dry-run=server support was merged! Next step towards Argo CD supporting this is https://github.com/argoproj/argo-cd/pull/14255

jacob-kuder commented 9 months ago

Is there any ETA on when the PR mentioned above will be completed and then a release will be cut for use by the public?

martynd commented 8 months ago

Looks like this was in last week's Helm release:

https://github.com/helm/helm/releases/tag/v3.13.0
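For anyone wanting to try the new flag locally (a sketch; requires Helm >= 3.13 and access to a cluster, so lookup can reach the API server):

helm install my-release ./my-chart --namespace default --dry-run=server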

crenshaw-dev commented 8 months ago

In my opinion, the next hurdle is to implement project-level service account impersonation so that the repo-server can actually use Helm's new functionality to connect to destination clusters. There is an ongoing proposal here: https://github.com/argoproj/argo-cd/pull/14255