argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0
16.75k stars 5.07k forks

Add support for secrets in Application parameters #1786

Open niqdev opened 5 years ago

niqdev commented 5 years ago

It would be nice to have direct support for native secrets in the Application definition.

In the following use case, when referencing a public chart, it would be very handy to have the possibility to reference a secret directly.

# external-dns.yaml
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
spec:
  project: kube-do
  source:
    repoURL: https://github.com/helm/charts.git
    path: stable/external-dns
    helm:
      parameters:
        - name: provider
          value: digitalocean
        - name: digitalocean.apiToken
          value: $(DIGITAL_OCEAN_TOKEN)
        - name: domainFilters
          value: $(EXTERNAL_DNS_DOMAIN)
      # suggested implementation
      secrets:
        - name: secrets-do
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-do

# secrets-do.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: secrets-do
type: Opaque
stringData:
  config.yaml: |-
    # this could be overridden for example during bootstrap using
    # helm template . --values values.yaml --set digitalOceanToken=abc
    DIGITAL_OCEAN_TOKEN: {{ digitalOceanToken }}
    EXTERNAL_DNS_DOMAIN: {{ externalDnsDomain }}

See the resources related to external-dns for more details.

niqdev commented 5 years ago

I would also suggest adding a secretsUpdatePolicy to monitor the secrets, e.g. update, rotation, delete ...

alexec commented 5 years ago

Contributing guide if you'd like to raise a PR: https://argoproj.github.io/argo-cd/CONTRIBUTING/

alexmt commented 5 years ago

Argo CD has an auto-sync feature (https://argoproj.github.io/argo-cd/user-guide/auto_sync/) which covers secretsUpdatePolicy: if auto-sync is enabled and the secret changes, Argo CD will go ahead and push the updated manifests.

We would have to make Application CRD changes and several code changes in the API server and app controller:

CRD changes

Argo CD supports string parameters for helm and jsonnet (plugin parameters are WIP). In addition to value, we need to support valueFrom:

    # helm specific config
    helm:
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
      - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username

    jsonnet:
      # A list of Jsonnet External Variables
      extVars:
      - name: foo
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username

      # A list of Jsonnet Top-level Arguments
      tlas:
      - code: false
        name: foo
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username

I think it is OK to add support for helm first, and for jsonnet and plugins later.

Code changes:

Manifest generation is performed by argocd-repo-server which does not have permissions to access k8s api for security reasons. So parameter values should be resolved and passed to argocd-repo-server by argocd-api-server and argocd-application-controller.

The argocd-application-controller and argocd-api-server request manifest generation using the GenerateManifest GRPC call. There are only three places in the code where we execute the GenerateManifest method and pass an instance of ApplicationSource, which includes parameters:

https://github.com/argoproj/argo-cd/blob/master/controller/state.go#L113
https://github.com/argoproj/argo-cd/blob/master/server/application/application.go#L193
https://github.com/argoproj/argo-cd/blob/master/util/argo/argo.go#L379

To support valueFrom, we would have to add e.g. a ResolveValuesFrom method which replaces every valueFrom with its actual value in the specified ApplicationSource instance, and call it before executing GenerateManifest.
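As an illustration of the hypothetical ResolveValuesFrom pass (the secret name and key are taken from the example at the top of this issue; the resolved value is made up), the same parameter before and after resolution:

```yaml
# Before resolution, as stored in the Application CRD:
helm:
  parameters:
    - name: digitalocean.apiToken
      valueFrom:
        secretKeyRef:
          name: secrets-do
          key: DIGITAL_OCEAN_TOKEN

# After the hypothetical ResolveValuesFrom pass, the ApplicationSource
# copy sent to argocd-repo-server contains only a plain value, so the
# repo server never needs access to the k8s API:
helm:
  parameters:
    - name: digitalocean.apiToken
      value: <token looked up from secrets-do by the caller>
```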

niqdev commented 5 years ago

Thanks for the detailed info! I'm not familiar with go and the codebase, but I'll try to look into it and raise a PR

niqdev commented 4 years ago

@alexmt I've finally got a bit of time and started to look into this, but I need a bit of help

Here is a bunch of changes, but I already have a few doubts

Do you have an architectural diagram of how the client/server/API and all the moving parts are connected? Unfortunately I'm also a Golang newbie and I'm having some difficulty understanding how everything works

Thanks in advance for the help :blush:

alexmt commented 4 years ago

Hello @niqdev , thank you for working on it!

patst commented 4 years ago

Hey @niqdev
did you make any progress on this? Otherwise I am thinking about giving it a shot. But I am not a Go expert either ;-)

niqdev commented 4 years ago

Hi @patst, unfortunately I don't have any time to make concrete progress at the moment, even though I'd really like to. You are more than welcome to pick it up! So far all the changes I've made are in the branch linked above. Let me know if you need more info. Thanks 😄

dcherman commented 4 years ago

Hi,

Just a heads up, I'm starting to tackle this using the same strategy as the helm-operator from WeaveWorks. Both secrets and configmaps would be supported.

My proposal is to introduce something like

valuesFrom:
  - secretKeyRef: ...
  - configMapKeyRef: ....

Since we already have support for passing inline values, the idea is to resolve any configmap/secrets that need to be fetched and merge them with the inline values before passing it off to the repo server. By doing that, it keeps the actual code that needs to change minimal.
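For illustration, an Application using the proposed field (which does not exist in the current CRD) might look like this; the secret and configmap names are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-dns
spec:
  project: kube-do
  source:
    repoURL: https://github.com/helm/charts.git
    path: stable/external-dns
    helm:
      # proposed field -- not part of the current Application CRD
      valuesFrom:
        - secretKeyRef:
            name: secrets-do
            key: values.yaml
        - configMapKeyRef:
            name: external-dns-values
            key: values.yaml
      # inline values would be merged with the fetched ones
      values: |
        provider: digitalocean
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-do
```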

Secrets/ConfigMaps would need to exist in the same namespace as ArgoCD which does present a problem; you could potentially reference other people's secrets/configmaps since Argo runs with the same service account.

Working around that seems like it'd require adding another resource type to ArgoCD which could then use argo's RBAC, but I'd like to hear someone's opinion that has more experience here.

Are we OK with the secret/configmap issue for an MVP, with an explicit carveout that secrets prefixed with "argocd-" would be disallowed to avoid leaking Argo CD-related secrets?

alexec commented 4 years ago

See https://github.com/argoproj/argo-cd/issues/1364.

jessesuen commented 4 years ago

First off, I do understand the motivation why some may feel this feature is necessary. Many Helm charts have a fundamental problem which allows secret values / sensitive information to be held in values.yaml files, essentially making values.yaml files sensitive objects. However, it's not Argo CD's responsibility to solutionize on what I consider a bad pattern or serious problems with the helm charts.

A well written chart should allow existing secrets to be referenced in the chart, rather than containing the sensitive information directly in the values.yaml. Here is a good example of this done properly: https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml#L113

Retrieving values from configmaps (https://github.com/argoproj/argo-cd/issues/1903) and secrets in the argocd control plane namespace is straying into the territory of config management, which is something we intentionally eschew from Argo CD.

This proposal (and https://github.com/argoproj/argo-cd/issues/1903) is essentially making Argo CD a centralized configuration/secret store, of which all applications can then reference secrets from. At this point, we would additionally need to implement RBAC rules on what applications are allowed to reference secrets.

Are we OK with the secret/configmap issue for an MVP, with an explicit carveout that secrets prefixed with "argocd-" would be disallowed to avoid leaking Argo CD-related secrets?

This is a perfect illustration of the type of problems we face if we go down this route. It is a slippery slope. Today, we already allow users to use arbitrary secrets in the argocd namespace to hold cluster credentials. These secrets do not necessarily start with the prefix argocd-. With this feature, these secrets would suddenly become available to developers, who could simply reference the secrets in their application and obtain cluster/git credentials.

A second issue with these proposals is that they introduce yet another source of truth for the manifests, different from git. Today Argo CD provides two sources of truth which affect manifest generation:

Setting aside the additional complexity of introducing another lookup as part of manifest generation, being able to reference configmaps and secrets in the argocd namespace adds yet another source of truth that needs to be managed independently, which is antithetical to the GitOps model.

Lastly, these configmaps and secrets stored in the argocd control plane are only under the control of an argocd operator, and outside the control of an application developer. Referencing these secrets/configmaps would require coordination between developers and operators, violating the separation of concerns.

For these reasons, I do not believe this issue or https://github.com/argoproj/argo-cd/issues/1903 are ones that Argo CD should pursue.

jessesuen commented 4 years ago

I'm starting to tackle this using the same strategy as the helm-operator from WeaveWorks. Both secrets and configmaps would be supported.

The reason why this works for Helm Operator, but doesn't for Argo CD, is because these secrets are co-located with the HelmRelease instance, and are protected with the same Kubernetes namespace level RBAC. This means it is multi-tenant by default. Whereas in the case of Argo CD, the secrets are stored in the control plane namespace, not to mention mixed alongside Argo CD specific secrets, with zero RBAC protection and nothing stopping a developer from exposing the control plane secrets.

dcherman commented 4 years ago

@jessesuen So beyond referencing secrets in a chart where I actually agree that a correctly designed chart will allow you to do something like existingSecret, the actual use case that I personally have is generating a partial config during cluster creation via Terraform for use in some charts.

For example, I do not necessarily know the ARN of an IAM Role ahead of time if I'm trying to re-use the same repo to manage multiple clusters and need to reference that role either via kiam, or the new IAM for Service Accounts on EKS. My solution today has been to create a ConfigMap as part of Terraform that specifies just the parts that Terraform has knowledge of; the rest continues to be defined in git with the application.

WDYT?

niqdev commented 4 years ago

@jessesuen thanks for your feedback, I 100% agree with

Many Helm charts have a fundamental problem which allows secret values / sensitive information to be held in values.yaml files, essentially making values.yaml files sensitive objects. However, it's not Argo CD's responsibility to solutionize on what I consider a bad pattern or serious problems with the helm charts.

but even if it's not Argo CD's responsibility, I personally think that, like with everything, there must be a compromise. At work we are using Argo CD, managing the secrets with aws-secret-operator and IRSA using our own charts - no issues so far. So I'll give you the use case for why I've opened this issue...

I want to run on my DigitalOcean cluster (which doesn't have any out-of-the-box solution to manage secrets like SSM) a bunch of applications/MVPs/POCs with very low security concerns, and unfortunately I've encountered this issue. For fun I wrote an operator inspired by the former that maps CRs to native secrets, and I'm using external-dns, which forces me to override the apiToken. At this point the right solution would be to fork the chart, open a PR, and probably risk that all my changes are rejected (I have already had PRs not merged because they went stale, or simply because the proposed changes were too invasive). I would then be forced to maintain my own forked chart - which I don't want to do.

Given this use case and the fact that I wanna still be 100% fully declarative using Argo CD, I would be happy to accept/allow security risks - if that's the concern - and I'm sure that other people would have other reasons or use cases.

WDYT and what would be your solution in situations like this one?

alexec commented 4 years ago

I don't think the discussion is closed on this yet, I think we're monitoring it because there is interest. @niqdev we document solutions here, do you want to add https://github.com/mumoshu/aws-secret-operator?

https://argoproj.github.io/argo-cd/operator-manual/secret-management/

niqdev commented 4 years ago

@alexec I'll create a PR for that

niqdev commented 4 years ago

#2598 cc @alexec

dcherman commented 4 years ago

@jessesuen Putting some of the ideas that we were throwing around at KubeCon in here so they don't get lost:

Right now, the problem with the current proposal is that for multi-tenant solutions, it would be horribly insecure since applications would have access to config/secrets that they shouldn't necessarily have access to.

One potential way to solve this could be to enhance the custom tooling/plugins that you can provide to ArgoCD by providing the entirety of the application manifest to it via some file (manifest.json/yaml) which would then allow the custom plugin to combine the values in the manifest itself with values sourced elsewhere (configmaps, secrets, etc..).

That allows you to accomplish the proposal here, but punts on the security aspect by requiring the user to implement this themselves. One potential avenue that comes to mind is signing an app manifest, then using a plugin that verifies the provenance of your application before allowing it access to the requested secrets.

The actual functionality that would be needed for this seems to be minimal on the ArgoCD side of things; basically, just provide the entire app manifest to the plugin either via a file or environment variable that the user can parse and do whatever they want with.

We probably can/should then discuss how we extract some of the core functionality of Helm/Kustomize/etc.. out into plugins that can be iterated on independently of ArgoCD itself.

CarpathianUA commented 4 years ago

Looking forward to have this functionality! Thanks for your work, guys

dcherman commented 4 years ago

@jessesuen @alexec Just circling back to this - would something like what was proposed above be something that you'd be interested in / would accept into ArgoCD?

Basically, make the entire application manifest as either a file or environment var (not sure which atm) available to custom tools that can then parse it and do whatever they want with those values.

In addition, we would re-order https://github.com/argoproj/argo-cd/blob/57eeaa423157d42997f7147b054d8ee4cbc4a6b4/reposerver/repository/repository.go#L353-L361 such that custom plugins get detected before the others; that way they could take advantage of the existing types/structs for Helm, Kustomize, etc. in custom plugins. While that is technically a breaking change, I'm not sure how many people actually include data for both something like Helm and a custom plugin in the same manifest, since it wouldn't work at all.

That should solve my personal use case since I could easily do something like have my own helm plugin that does something like envsubst with values.yaml inlined into the application manifest. The actual values could still be provided via a configmap, but that becomes my plugin's responsibility to provide all of that, not Argo's.

edlee2121 commented 4 years ago

Using a projected serviceAccountToken is one way to inject config/secrets dynamically. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection

dcherman commented 4 years ago

Another idea for securely doing this:

Each project can have a Kubernetes ServiceAccount assigned to it (or created dynamically using the project name as part of the convention). When ArgoCD attempts to retrieve either ConfigMaps or Secrets, it would do so using the token from the associated service account.

Access to secrets or configmaps in any given namespace can then be governed by plain old Kubernetes RBAC.
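Under this scheme, restricting a per-project ServiceAccount to named objects would be ordinary Kubernetes RBAC; a minimal sketch (all names hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-project-myproject
  namespace: argocd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-project-myproject-values
  namespace: argocd
rules:
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    # only the objects this project is allowed to read
    resourceNames: ["myproject-values"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-project-myproject-values
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-project-myproject-values
subjects:
  - kind: ServiceAccount
    name: argocd-project-myproject
    namespace: argocd
```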

Either way, I think we could potentially implement the ConfigMap part of this proposal and punt on the Secret part in order to avoid tackling the security implications right now.

cubic3d commented 3 years ago

Any agreement or progress on this topic? I'm seeing a couple of related issues without a solution and I was very surprised to find out that there is no possibility to merge values from a secret. Pulling and customizing charts is a tedious workaround to replace values and create encrypted secrets to push applications to git.

Gowiem commented 3 years ago

Another follow up here -- @dcherman's use-case regarding seeding configuration from ConfigMaps created by Terraform is exactly my problem coming from FluxV1 / helm-operator to ArgoCD. This is particularly painful for managing 3rd party charts which need AWS IAM Role ARNs when you're dealing with many environments / clusters. Looks like that will be one of my largest hangups in the switch from Flux => Argo.

amkartashov commented 3 years ago

Retrieving values from configmaps (#1903) and secrets in the argocd control plane namespace is straying into the territory of config management, which is something we intentionally eschew from Argo CD.

Maybe it should be allowed to inject a cm/secret from the application destination namespace then? I understand that this may create a circular dependency if the namespace is created by the app itself, but that would just become a known (and clear by its nature) bootstrap issue.

utkuozdemir commented 2 years ago

Are there any updates or established best practices on this issue? I think it has been a big pain point in adopting Argo CD for a long time. I'd be willing to contribute if there are any plans for addressing this.

zamrokk commented 2 years ago

There is no other choice than to write your own Secret in a separate file ... but not in the Argo Application CRD

utkuozdemir commented 2 years ago

I tackled this issue by using an optional secret mount into ArgoCD. Wrote a blog post about it here: https://utkuozdemir.org/blog/argocd-helm-secrets/ Sample repo: https://github.com/utkuozdemir/argocd-helm-secret-values-example

utkuozdemir commented 2 years ago

I solved my problem the way I described in my comment above, but I think this is the single biggest thing missing in Argo CD. I've heard the same complaint from other people, who ended up choosing Flux over Argo for this sole reason.

I would therefore suggest addressing this issue with priority. Something as simple as valuesFrom->secretRef would solve the problem for good for a lot of people. I'd be willing to help if there are any plans to address this.

yellowmegaman commented 2 years ago

Extremely needed indeed!

StevenACoffman commented 2 years ago

In the existing ArgoCD secret management documentation, while it mentions several popular kubernetes secrets solutions... it doesn't really have any guidance or instructions for how to get them to work specifically with ArgoCD.

It is perhaps also worth noting in that document that argoproj-labs/argocd-vault-plugin supports not only HashiCorp Vault, but also GCP Secret Manager, AWS Secrets Manager, Azure Key Vault, and IBM Cloud Secrets Manager.

I found that there was a pretty good write up on using external secrets with ArgoCD in this Kubernetes External Secrets article. It also had a few helpful opinions based on their experience with other solutions:

There were some minor disadvantages:

  • We can’t install ArgoCD via the operator because we need a customized image that includes KSOPS, so we have to maintain our own ArgoCD image.

And there was one major problem:

  • Using GPG-encrypted secrets in a git repository makes it effectively impossible to recover from a key compromise. Once a private key is compromised, anyone with access to that key and the git repository will be able to decrypt data in historical commits, even if we re-encrypt all the data with a new key.

Because of these security implications, we decided we would need a different solution (it’s worth noting here that Bitnami Sealed Secrets suffers from effectively the same problem).

Both external secrets and argoproj-labs/argocd-vault-plugin tools are looking to solve the same problem but in different ways.

alexluft commented 2 years ago

valuesFromRef loading the values from a secret located in application's namespace would just perfectly solve the issue for the most use-cases.

antevens commented 2 years ago

One thing to consider and an argument for handling this in Argo is the case where the original secret is in a separate k8s cluster (like a management cluster) but the target cluster for the Application is a production/dev/test cluster. Having the ability to look up values (and ideally keeping them in sync) would be very valuable in larger/distributed deployments.

yebolenko commented 2 years ago

any progress on this issue?

crenshaw-dev commented 2 years ago

There was some discussion above about pushing the problem to plugins so that folks can build a solution that exactly fits their use case and security risk tolerances. I think that could be done today using the env block in the plugin source spec. You could pass the secret ref via an env var and then fetch the secret in your plugin. The plugin sidecar would need to mount a service account token to give it k8s permissions.

I share Jesse's discomfort with a "native" solution. But I also recognize that the lack of that solution will push some folks to relatively hacky/risky workarounds.

If anyone feels there is enough here to go ahead and define a solution, I'd be eager to read a proposal.

Moep90 commented 2 years ago

@crenshaw-dev I don't get the difference here. Other tools like argo-workflows already support the valueFrom reference. I'd love to have at least a similar approach to things within one company....

In my case, I have to provide a VAULT_TOKEN to my plugin. If I cannot supply the token from an existing secret, the whole app-of-apps idea doesn't work for me. The repetition also does not scale well: repeating repository.env for every application, instead of defining it once, costs around 5 extra lines per Application.

See the following references:

dcherman commented 2 years ago

@Moep90 The big difference between those two is that Argo Workflow runs workflows using a specific service account, so you can easily restrict access to secrets/resources using plain Kubernetes RBAC. ArgoCD currently runs all of the manifest generation using a single service account which would make this very insecure for multi-tenant usage.

The two main workarounds/proposals that I made in previous comments were:

  1. Generate a service account per project which is used during manifest generation. By doing that, you can use plain RBAC to restrict access to specific secrets/configmaps which should enable multi-tenancy similar to argo workflows.

  2. Re-order the manifest generation so that custom plugins are run before built-in ones. While that doesn't solve the issue, it would allow people to write their own plugin to solve this use case while still being able to use the built-in structs for helm, kustomize, etc.

Note: I haven't kept up to date on changes for this project in a bit, so recent changes could affect those proposals.

crenshaw-dev commented 2 years ago

@Moep90, @dcherman is exactly right. An Argo Workflow runs under a user-configurable ServiceAccount, which can be tuned to provide minimal privileges for just that Workflow. You can do something similar with repo-server by creating a ServiceAccount, giving it access to a Secret, and mounting that Secret in the repo-server (or a plugin sidecar). But, just as with Workflows, you have to do the ServiceAccount configuration part.

Sure, we could add valueFrom and just let it fail if the admin hasn't configured a ServiceAccount. But I worry that this would happen far too often:

Tenant 1 Developer: "Hi, Very Busy Admin, my app won't sync. It says it can't access my-secret." Very Busy Admin: "Oh, repo-server doesn't have access, I'll add that real quick." Tenant 1 Developer: "Cool it works, thanks." Malicious Tenant 2 Developer: "Ooh, secrets..."

Forcing the admin to configure the ServiceAccount and mount the Secret(s) should make it abundantly clear that the Secrets are available to all tenants. If the admin is using a plugin, it also allows them to add logic to constrain how the secret is used, rather than having to treat the secret the same as just any-other-parameter.

Moep90 commented 2 years ago

@crenshaw-dev I got the concern.

Sure, we could add valueFrom and just let it fail if the admin hasn't configured a ServiceAccount. But I worry that this would happen far too often.

If this makes it into the code, it should first be highlighted in the documentation (right where the field is available to be configured). Secondly, I'd suggest enabling Argo CD to create an SA + the required RBAC per Application, since it's all within the Argo ecosystem as well as the same namespace, right?

Anyhow, would you please share some example code of how you feel this should work? I could imagine it as part of the docs on how to achieve secure multi-tenancy, or similar, as a sort of best practice.

Moep90 commented 2 years ago

@crenshaw-dev is it possible to provide an example for your RBAC secret solution?

crenshaw-dev commented 2 years ago

Secondly I'd suggest to enable ARGOCD to create an SA + required RBAC per Application since it all within the Argo eco-systme as well as the same namespace, right?

I'm not sure what you mean. If I understand correctly, Argo CD would have to create a SA/Role/RoleBinding for each Application which has at least one valueFrom field. It would then have to apply that SA to its own repo-server deployment (non-declaratively). But even then it wouldn't work, because the deployment can only have one SA. So Argo CD would have to maintain the Role settings to allow the repo-server to access all secrets currently referenced by valueFrom. And that's risky if multiple tenants are using the same repo-server.

My suggestion would be to have an application like this:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
    plugin:
      name: mypluginname
      env:
        - name: SECRET_SECRETNAME
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook

Configure the plugin to populate the env var:

apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: cmp-plugin
spec:
  version: v1.0
  generate:
    command:
    - sh
    - -c
    - |
      secret_refs=$(env | grep SECRET_)
      for secret_ref in $secret_refs; do
        # Use kubectl to get the secret value and set it in the env var.
      done
      # Run remaining generate logic.
  discover:
    fileName: "./subdir/s*.yaml"

Finally, install the plugin as a sidecar and use a ServiceAccount with access to all the secrets you want to make available to the users. (NOTE: when using sh -c to write plugins, write the script securely! Follow scripting best practices and never eval unsanitized user input.)

This is just a rough sketch of how it could work.
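To make the admin step concrete, granting the repo-server's ServiceAccount read access to the shared secrets could look roughly like this (the resource and account names are assumptions for illustration, not part of any Argo CD install):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: repo-server-plugin-secrets
  namespace: argocd
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    # explicitly enumerate what every tenant's plugin run may read
    resourceNames: ["shared-plugin-values"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: repo-server-plugin-secrets
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: repo-server-plugin-secrets
subjects:
  - kind: ServiceAccount
    # the account the repo-server pod (and its plugin sidecar) runs as
    name: argocd-repo-server
    namespace: argocd
```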

dcherman commented 2 years ago

@crenshaw-dev FWIW you wouldn't necessarily need a separate repo-server per application (which wouldn't scale at all); you would basically need to look up the secret associated with the serviceaccount for a specific project and make any requests to the k8s api using that token. I never got around to prototyping it, but that was the basic idea

crenshaw-dev commented 2 years ago

@dcherman ah, I see.

Generate a service account per project which is used during manifest generation. By doing that, you can use plain RBAC to restrict access to specific secrets/configmaps which should enable multi-tenancy similar to argo workflows.

We'd have to give the repo-server access to those tokens somehow, but then I think we can trust it to enforce the project boundaries. I like that. I'm a little uneasy about loading secrets / tokens into the repo-server by default (given just how much code and external tools there are running in that pod + all the difficulty we've had with protecting env vars from improper access and files from path traversal), but I think as an opt-in it would be fine.

dcherman commented 2 years ago

@crenshaw-dev Yep. If I recall correctly, you can find the secret associated with a given service account in the status field, so during manifest generation you would find the service account associated with the app being worked on, find the correct secret, fetch the contents, then use that token for any requests to the k8s api.

It's been a while since I worked on this so I don't recall the exact architecture here, but I think there is a repo-server that's responsible for generating manifests (which is where plugins run) and an app controller responsible for actually doing reconciliation. That app controller could pass along the required token to the repo-server as part of the request, so the repo-server itself would have no access to read secrets, meaning that there should be no concerns about a malicious plugin reading secrets that it should not have access to.

Alternatively, the app controller could handle all of the logic required for getting the values of configmaps/secrets with none of the work being done in the repo server. Those values could then be passed along to the repo-server. I believe that's how my original PR worked, but it was lacking the serviceaccount stuff to enable this to be done securely.

crenshaw-dev commented 2 years ago

I think there is a repo-server that's responsible for generating manifests (which is where plugins run) and an app controller responsible for actually doing reconciliation.

Yep!

That app controller could pass along the required token to the repo-server as part of the request

Some manifest generation requests come directly from the UI (API server), so the logic would have to be in both places. iirc that communication is over plaintext gRPC, so we'd probably want to implement TLS among those services before making this change.

ramizpolic commented 2 years ago

It's been three years since the post initially started. What is the current state? Can we expect this anytime soon?

Currently, my workaround is to use kustomize and handle app deployments by pulling secrets from GCP buckets and overriding the default resources, but this becomes a tedious procedure for multiple apps. Additionally, on each version bump, things need to be reprocessed, so it's not a very efficient way; also, no support for UI-created manifests.

I am really thinking of switching to other tools, but I really enjoy the ease of declarative management Argo provides.

crenshaw-dev commented 2 years ago

Can we expect this anytime soon?

I doubt it can be expected within the next three minor releases or so (~9mo). This change would probably involve a proposal and review process.

pulling secrets from GCP buckets and overriding the default resources

Where does this happen? In a plugin?

ramizpolic commented 2 years ago

@crenshaw-dev Thanks for the update! Regarding the way to bypass the limitations, we are pulling the secrets via short script before applying the manifests, and disregarding them in later stages. The process just serves to embed the secrets (patch manifests with sensitive data) during deployment, and no sensitive information is available within repos.

crenshaw-dev commented 2 years ago

You bet! Gotcha... and does that script live in a plugin? Or is it somehow tied to the built-in Kustomize config management tool?

pixelicous commented 1 year ago

Big, big pain point. We just pulled out of Argo due to the lack of proper attention to these matters. We do not believe it is mature enough; this issue has been open for 3 years with no end in sight.