argoproj-labs / argocd-vault-plugin

An Argo CD plugin to retrieve secrets from Secret Management tools and inject them into Kubernetes secrets
https://argocd-vault-plugin.readthedocs.io
Apache License 2.0

Argocd not using plugin and timeout on manual try #594

Open janluak opened 11 months ago

janluak commented 11 months ago

Hey guys,

thanks for the help in advance :)

Describe the bug: Even though the avp sidecar is running and all variables are set, the desired secret is not fetched from Vault. When running `sh` manually in the avp sidecar, the command `argocd-vault-plugin generate secret.yaml` times out.

To Reproduce: These are my configs:

```yaml
# plugin ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp-plugin
data:
  avp.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: argocd-vault-plugin
    spec:
      allowConcurrency: true
      discover:
        find:
          command:
            - sh
            - "-c"
            - "find . -name '*.yaml' | xargs -I {} grep \"<path\\|avp\\.kubernetes\\.io\" {} | grep ."
      generate:
        command:
          - argocd-vault-plugin
          - generate
          - "."
      lockRepo: false
```

```yaml
# values.yaml

global:
  image:
    tag: v2.9.3
crds:
  install: false

configs:
  secret:
    extra:
      VAULT_ADDR: http://vault.secrets
      AVP_TYPE: vault
      AVP_AUTH_TYPE: k8s
      AVP_K8S_ROLE: argocd

repoServer:
  env:
    - name: VAULT_ADDR
      value: vault.secrets
    - name: AVP_TYPE
      value: vault
    - name: AVP_AUTH_TYPE
      value: k8s
    - name: AVP_K8S_ROLE
      value: argocd

  volumes:
    - configMap:
        name: cmp-plugin
      name: cmp-plugin
    - name: custom-tools
      emptyDir: { }
  initContainers:
    - name: download-tools
      image: custom image from python:3.11-alpine with argocd cli + curl
      imagePullPolicy: Always
      env:
        - name: AVP_VERSION
          value: 1.17.0
        - name: http_proxy
          value: http://123.123.123.123:80
        - name: https_proxy
          value: http://123.123.123.123:80
        - name: no_proxy
          value: .intern,.svc,.local
        - name: KUBERNETES_SERVICE_HOST
          value: kubernetes.default.svc
      command: [ sh, -c ]
      args:
        - >-
          curl -L https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v$(AVP_VERSION)/argocd-vault-plugin_$(AVP_VERSION)_linux_amd64 -o argocd-vault-plugin &&
          chmod +x argocd-vault-plugin &&
          mv argocd-vault-plugin /custom-tools/
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools
  extraContainers:
    - name: avp
      command: [ /var/run/argocd/argocd-cmp-server ]
      image: quay.io/argoproj/argocd:v2.9.3
      env:
        - name: VAULT_ADDR
          value: http://secrets-management-vault.secrets
        - name: AVP_TYPE
          value: vault
        - name: AVP_AUTH_TYPE
          value: k8s
        - name: AVP_K8S_ROLE
          value: argocd
      volumeMounts:
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        - mountPath: /tmp
          name: tmp

        # Register plugins into sidecar
        - mountPath: /home/argocd/cmp-server/config/plugin.yaml
          subPath: avp.yaml
          name: cmp-plugin

        # Important: Mount tools into $PATH
        - name: custom-tools
          subPath: argocd-vault-plugin
          mountPath: /usr/local/bin/argocd-vault-plugin
```

```yaml
# secret.yaml to parse to

apiVersion: v1
kind: Secret
metadata:
  name: secret
  namespace: {{ .Release.Namespace }}
  annotations:
    avp.kubernetes.io/path: "admin-secrets/my-secret"
stringData:
  client-secret: "<client-secret>"
```

Expected behavior: To make sure my Vault config is correct, I added a deployment (see details) using the default agent-inject method → it works.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-access-check
  namespace: {{ .Release.Namespace }}
  labels:
    kind: examples
    app: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      kind: examples
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        kind: examples
        app: {{ .Release.Name }}
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "argocd"
        vault.hashicorp.com/agent-inject-secret-test.json: "admin-secrets/my-secret"
    spec:
      serviceAccountName: argocd-test-repo-server
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

When I shell into the avp sidecar container interactively and copy secret.yaml there, the command `argocd-vault-plugin generate secret.yaml` results in a timeout. When I simply ask Argo CD to apply secret.yaml from git, the secret is not fetched at all.

Screenshots/Verbose output

```
argocd-vault-plugin generate secret.yaml --verbose-sensitive-output
2023/12/20 08:08:49 reading configuration from environment, overriding any previous settings
2023/12/20 08:08:49 AVP configured with the following settings:

2023/12/20 08:08:49 avp_kv_version: 2

2023/12/20 08:08:49 Hashicorp Vault cannot retrieve cached token: stat /home/argocd/.avp/config.json: no such file or directory. Generating a new one
2023/12/20 08:08:49 Hashicorp Vault authenticating with Vault role argocd using Kubernetes service account token /var/run/secrets/kubernetes.io/serviceaccount/token read from ***
Error: context deadline exceeded
Usage:
  argocd-vault-plugin generate <path> [flags]
```

Additional context: The Argo CD installation is called argocd-test and runs in the namespace argocd-test so it does not interfere with the default installation on the cluster.

I also tried playing around with the ClusterRoleBinding as mentioned somewhere in the docs, but this didn't really help:


```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
  namespace: secrets | default | argocd-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: secrets-management-vault
    namespace: secrets | default
  - kind: ServiceAccount
    name: argocd-test-repo-server
    namespace: argocd-test | default
```
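
(Note: a ClusterRoleBinding is cluster-scoped, so the metadata.namespace field above is ignored; only the namespaces of the subjects matter. Below is a minimal sketch of one concrete combination I tried, using the Vault service account secrets-management-vault in the secrets namespace plus the repo server service account argocd-test-repo-server in argocd-test.)

```yaml
# Sketch only: grants system:auth-delegator (TokenReview) to the service
# accounts involved in the Kubernetes auth flow; adjust names and namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: secrets-management-vault
    namespace: secrets
  - kind: ServiceAccount
    name: argocd-test-repo-server
    namespace: argocd-test
```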

Additionally, I tried a root-level Vault token with the token auth method → same issues.
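
(For reference, the token attempt looked roughly like this in the sidecar's env; sketch only, the real token value is omitted:)

```yaml
# Sketch of the token-based attempt (root token used only for testing):
# these replace the Kubernetes auth variables on the avp sidecar.
- name: AVP_TYPE
  value: vault
- name: AVP_AUTH_TYPE
  value: token
- name: VAULT_TOKEN
  value: <root-token>   # placeholder, not the real token
```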

tmukherjee13 commented 3 weeks ago

Try setting the proxy information in the AVP container as well. That should work.
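
For example (a sketch against the values.yaml above, assuming the same proxy and no_proxy values as the download-tools init container apply; if Vault is reachable inside the cluster, its host should also be covered by no_proxy so the login call does not go through the proxy):

```yaml
# Sketch: mirror the init container's proxy settings onto the avp sidecar.
# Add these to the existing env list of the avp extraContainer in values.yaml.
- name: http_proxy
  value: http://123.123.123.123:80
- name: https_proxy
  value: http://123.123.123.123:80
- name: no_proxy
  # extended with the in-cluster Vault host so VAULT_ADDR bypasses the proxy
  value: .intern,.svc,.local,secrets-management-vault.secrets
```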