argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Authentication error for ArgoCD kustomization Helm chart with private OCI repository using Azure Container Registry #16894

Open · marcusnh opened 9 months ago

marcusnh commented 9 months ago

Describe the bug

We are experiencing a bug when creating an ArgoCD Application with a kustomization file through an ArgoCD ApplicationSet. We want to reference an external Helm chart in our Azure Container Registry (ACR) using the helmCharts generator in kustomize. Below is our kustomization file:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Referencing a chart in a private registry outside the main repository

helmCharts:
  - name: <ARO-REPO-NAME>
    repo: oci://<ACR-NAME>.azurecr.io/<ARO-REPO-NAME>
    version: 0.1.6-5
    releaseName: <ARO-REPO-NAME>
    namespace: poseidon2-dev
    valuesFile: values.yaml

The error we receive in our ArgoCD controller is the following:

level=error msg="finished unary call with code Unknown" error="Manifest generation error (cached): `kustomize build <path to cached source>/applicationsets/dev/demo-helm-2 --enable-helm` failed exit status 1: Error: Error: failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://<ACR-NAME>.azurecr.io/oauth2/token?scope=repository%!A(MISSING)<ARO-REPO-NAME>%!F(MISSING)<ARO-REPO-NAME>%!A(MISSING)pull&service=<ACR-NAME>.azurecr.io: 401 Unauthorized\n: unable to run: 'helm pull --untar --untardir <path to cached source>/applicationsets/dev/demo-helm-2/charts oci://<ACR-NAME>.azurecr.io/<ARO-REPO-NAME>/<ARO-REPO-NAME> --version 0.1.6-5' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-3509023410/helm HELM_CACHE_HOME=/tmp/kustomize-helm-3509023410/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-3509023410/helm/.data] (is 'helm' installed?): exit status 1" grpc.code=Unknown grpc.method=GenerateManifest grpc.service=repository.RepoServerService grpc.start_time="2024-01-17T09:44:38Z" grpc.time_ms=2.234 span.kind=server system=grpc

There seems to be a problem connecting to the ACR, but we have created a secret with access credentials and passed it to the ArgoCD instance (a sketch of the kind of repository secret we mean follows the Application manifest below). When creating an ArgoCD Application with the same setup, it works fine:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test-<ARO-REPO-NAME>
  namespace: gitops-developers # argocd instance namespace
spec:
  source:
    chart: <ARO-REPO-NAME>
    repoURL: <ACR-NAME>.azurecr.io
    targetRevision: 0.1.6-5
    helm:
      values: |
        application_name: "<ARO-REPO-NAME>-test"
        namespace: <ARO-REPO-NAME>-test

  destination:
    namespace: <ARO-REPO-NAME>-test
    server: https://kubernetes.default.svc
  project: <ARO-REPO-NAME>-test
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: true
    # syncOptions:
    #   - Replace=true
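
For context, the kind of repository secret we pass to the ArgoCD instance for this working Application case looks roughly like the following (a sketch based on ArgoCD's declarative repository setup; the secret name and credential values are placeholders, and the namespace is assumed to be the ArgoCD instance namespace used in this setup):

apiVersion: v1
kind: Secret
metadata:
  name: acr-oci-repo                      # hypothetical name
  namespace: gitops-developers            # ArgoCD instance namespace in this setup
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  name: acr
  url: <ACR-NAME>.azurecr.io
  type: helm
  enableOCI: "true"
  username: <SERVICE-PRINCIPAL-ID>
  password: <SERVICE-PRINCIPAL-PASSWORD>

A secret like this is honored by the Application/Helm code path, but, as the error log above shows, `kustomize build --enable-helm` runs `helm pull` with its own temporary HELM_CONFIG_HOME, so these credentials never reach that call.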

To Reproduce

To reproduce the error, create an ArgoCD ApplicationSet with the following configuration:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  annotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  labels:
    app.kubernetes.io/instance: <ARO-REPO-NAME>
  name: <ARO-REPO-NAME>-test
  namespace: gitops-developers

spec:
  generators:
  - git:
      directories:
      - path: applicationsets/test/*
      repoURL: MY-REPO-WHERE-KUSTOMIZATION-IS-DEFINED
      revision: HEAD
  template:
    metadata:
      name: <ARO-REPO-NAME>-apps-{{path.basename}}
      namespace: gitops-developers
    spec:
      destination:
        namespace: <ARO-REPO-NAME>-test
        server: https://kubernetes.default.svc
      project: <ARO-REPO-NAME>
      source:
        path: applicationsets/test/{{path.basename}}
        repoURL: MY-REPO-WHERE-KUSTOMIZATION-IS-DEFINED
        targetRevision: HEAD
      syncPolicy:
        automated:
          allowEmpty: true
          prune: true
          selfHeal: true

Then create a kustomization file at the path applicationsets/test/kustomization-file and create a secret that gives access to the ACR. We used the kustomization definition shown above.

Expected behavior

That we would be able to sync the resources defined in the Helm chart.

Version

argocd: v2.9.2+c5ea5c4
  BuildDate: 2023-12-01T19:21:49Z
  GitCommit: c5ea5c4df52943a6fff6c0be181fde5358970304
  GitTreeState: clean
  GoVersion: go1.20.10
  Compiler: gc
  Platform: linux/amd64
  ExtraBuildInfo: {Vendor Information: Red Hat OpenShift GitOps version: v1.11.0}
marcusnh commented 9 months ago

After investigating the issue, this feature does not seem to be available yet: it is not possible to use a private repository with the kustomize Helm chart integration. This feature needs to be added; see the linked issue.

fandujar commented 9 months ago

@marcusnh in your case you can upgrade ArgoCD to 2.9.3, which adds support for OCI, but you will need to do some manual steps to inject the credentials.

marcusnh commented 9 months ago

Could you tell me which manual steps need to be done? When using an ArgoCD Application, it is enough to use a Helm repository secret. Can we not do something similar with the kustomize Helm chart?

fandujar commented 9 months ago

@marcusnh you can follow the manual steps that Paul described here https://github.com/argoproj/argo-cd/issues/16623#issuecomment-1877669497

Personally, I built a proxy for my private OCI repository and exposed it to ArgoCD.
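
For readers who cannot follow the link: the manual injection being discussed amounts to modifying the argocd-repo-server Deployment so that Helm can find registry credentials. A rough sketch of such a patch is below; the Secret name, mount path, and the use of the HELM_REGISTRY_CONFIG variable are assumptions for illustration, not steps confirmed to work in this thread:

# Hypothetical patch for the argocd-repo-server Deployment: mount a Helm
# registry config.json containing ACR credentials and point Helm at it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          env:
            - name: HELM_REGISTRY_CONFIG           # Helm env var for the registry config path
              value: /helm-registry/config.json
          volumeMounts:
            - name: helm-registry-config
              mountPath: /helm-registry
              readOnly: true
      volumes:
        - name: helm-registry-config
          secret:
            secretName: acr-helm-registry-config   # hypothetical Secret holding config.json

As the following comments note, a change like this is lost when the deployment is recreated and does not scale to registries that are not known in advance.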

reginapizza commented 9 months ago

@marcusnh have you been able to follow the steps from the comment above? If so, are they working for you, or are you still facing the same issues?

marcusnh commented 9 months ago

@fandujar @reginapizza, we could not make it work, and I don't think it is a suitable solution. The comment from Paul might solve the problem, but it is unsuitable for a production setup. To get the solution to work, one has to change the argocd-repo-server deployment. If the deployment is restarted, for instance during an ArgoCD upgrade, we will have to apply the same configuration again. In addition, this solution does not scale to several OCI repositories; we do not know beforehand all the repositories that might be used. The credentials need to be set together with the kustomization and Helm chart config, not through a filesystem trick on the live ArgoCD deployment.

The current approaches, including manual filesystem changes or leveraging temporary credentials, are not viable for sustainable production use. These methods introduce significant challenges:

Security and Stability Risks: Manual interventions in the filesystem of a running container go against best practices for containerized environments, potentially compromising security and stability.

Lack of Persistence: Such changes are ephemeral and do not survive pod restarts, leading to additional maintenance overhead and potential downtime.

Scalability Concerns: For organizations utilizing multiple private OCI registries, managing individual configurations and credentials for each is neither scalable nor practical.

Credential Management: The reliance on continuously refreshing credentials, especially in environments like AWS ECR where tokens expire frequently, adds unnecessary complexity and potential points of failure.

We need a solution that integrates seamlessly with ArgoCD, providing a secure, scalable, and maintainable way to manage private OCI registries. This solution should ideally:

  1. Support native handling of multiple private OCI registries within ArgoCD.
  2. Automate credential management, potentially integrating with cloud-native solutions like AWS IAM roles and IRSA, or equivalent in other cloud environments.
  3. Ensure configurations are persistent and do not require manual intervention upon pod restarts or updates.
  4. Be well-documented and supported, aligning with the ArgoCD project's standards for production-ready features.
fandujar commented 9 months ago

@marcusnh I totally agree with you.

ArkShocer commented 9 months ago

Same problem here, would love it if this gets fixed in an upcoming release.