argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

argocd ha via helm: UI returns 404 page not found #10662

Closed AydinChavez closed 1 year ago

AydinChavez commented 1 year ago

Describe the bug

When executing kubectl port-forward svc/argocd-server -n argocd 9999:443

I get a 404 in the browser (localhost:9999)

To Reproduce

Chart.yaml

apiVersion: v2
name: argo-cd
version: 1.0.0
dependencies:
  - name: argo-cd
    version: 5.5.0
    repository: https://argoproj.github.io/argo-helm    

values.yaml

argo-cd:
  installCRDs: true
#  global:
#    image:
#      tag: v2.4.12
  dex:
    enabled: false
  configs:
    params:
      "server.enable.gzip": true
      "server.insecure": true
  server:
    extraArgs:
      - --insecure
    replicas: 1
    repositories: |
      - type: helm
        name: stable
        url: https://charts.helm.sh/stable
      - type: helm
        name: argo-cd
        url: https://argoproj.github.io/argo-helm
  redis:
    extraArgs:
      - --bind
      - "0.0.0.0"
  redis-ha:
    enabled: true
  controller:
    replicas: 1
  repoServer:
    replicas: 2
  applicationSet:
    replicas: 2

Installation via Helm: helm install argocd . -n argocd

Expected behavior

No 404

Version

argocd: v2.4.12+41f54aa
  BuildDate: 2022-09-16T01:12:58Z
  GitCommit: 41f54aa556f3ffb3fa4cf93d784fb7d30c15041c
  GitTreeState: clean
  GoVersion: go1.18.5
  Compiler: gc
  Platform: linux/amd64

Logs

argocd-server log:

time="2022-09-21T14:21:00Z" level=info msg="Starting configmap/secret informers"
time="2022-09-21T14:21:01Z" level=info msg="Configmap/secret informer synced"
time="2022-09-21T14:21:01Z" level=info msg="Starting configmap/secret informers"
time="2022-09-21T14:21:01Z" level=info msg="configmap informer cancelled"
time="2022-09-21T14:21:01Z" level=info msg="secrets informer cancelled"
time="2022-09-21T14:21:02Z" level=info msg="Configmap/secret informer synced"
time="2022-09-21T14:21:02Z" level=info msg="argocd v2.4.12+41f54aa serving on port 8080 (url: , tls: false, namespace: argocd, sso: false)"
time="2022-09-21T14:21:02Z" level=info msg="0xc000f0aa20 subscribed to settings updates"
time="2022-09-21T14:21:02Z" level=info msg="Starting rbac config informer"
time="2022-09-21T14:21:02Z" level=info msg="RBAC ConfigMap 'argocd-rbac-cm' added"
redis: 2022/09/21 14:22:06 pubsub.go:159: redis: discarding bad PubSub connection: EOF
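
The "serving on port 8080 (tls: false)" line above is worth reading closely: with --insecure set, argocd-server listens for plain HTTP on 8080, so a port-forward to the Service's port 443 only reaches it if the Service's targetPort maps 443 to 8080 (which is my assumption about the chart's default Service, not something confirmed in this thread). A small sketch that pulls those fields out of the logfmt line:

```python
import re

# the key line from the argocd-server log above
line = ('time="2022-09-21T14:21:02Z" level=info msg="argocd v2.4.12+41f54aa '
        'serving on port 8080 (url: , tls: false, namespace: argocd, sso: false)"')

# extract the listen port and TLS flag from the "serving on" message
m = re.search(r'serving on port (?P<port>\d+) \(url: (?P<url>[^,]*), tls: (?P<tls>\w+)', line)
port = int(m.group("port"))
tls = m.group("tls") == "true"
print(port, tls)  # -> 8080 False
```

If the forward itself were broken you would see a connection error rather than a 404, so the 404 here is being served by something that answers HTTP, just without the UI assets.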
AydinChavez commented 1 year ago

Fixed the Redis issue by setting bind 0.0.0.0, but the 404 when opening the browser to access the UI remains.

The configmap argocd-cmd-params-cm also looks valid:

apiVersion: v1
data:
  controller.log.format: text
  controller.log.level: info
  controller.operation.processors: "10"
  controller.repo.server.timeout.seconds: "60"
  controller.self.heal.timeout.seconds: "5"
  controller.status.processors: "20"
  otlp.address: ""
  redis.server: argocd-redis-ha-haproxy:6379
  repo.server: argocd-repo-server:8081
  reposerver.log.format: text
  reposerver.log.level: info
  reposerver.parallelism.limit: "0"
  server.basehref: /
  server.disable.auth: "false"
  server.enable.gzip: "false"
  server.insecure: "true"
  server.log.format: text
  server.log.level: debug
  server.rootpath: /
  server.staticassets: /shared/app
  server.x.frame.options: sameorigin
  timeout.hard.reconciliation: "0"
  timeout.reconciliation: "180"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: argocd
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2022-09-21T14:44:49Z"
  labels:
    app.kubernetes.io/component: server
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-cmd-params-cm
    app.kubernetes.io/part-of: argocd
    helm.sh/chart: argo-cd-5.5.0
  name: argocd-cmd-params-cm
  namespace: argocd
  resourceVersion: "53568254"
  uid: 260b815e-e6ee-41bc-a5ac-81058188b63b
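
One detail stands out in the ConfigMap above: the values.yaml sets "server.enable.gzip": true, yet the rendered argocd-cmd-params-cm shows server.enable.gzip: "false", which suggests at least some values never reached the subchart. With an umbrella chart like this one, Helm only passes down values nested under the dependency's name (argo-cd: here); anything placed at the wrong level is silently ignored and the subchart default wins. A toy sketch of that scoping rule (the one-level merge and key names are illustrative, not Helm's actual implementation):

```python
def effective_params(chart_defaults: dict, user_values: dict, subchart: str = "argo-cd") -> dict:
    """Toy model of Helm umbrella-chart value scoping: only keys nested
    under the dependency's name reach the subchart; everything else is
    ignored and the subchart keeps its defaults. (One level deep only.)"""
    scoped = user_values.get(subchart, {})
    overrides = scoped.get("configs", {}).get("params", {})
    return {**chart_defaults, **overrides}

defaults = {"server.enable.gzip": "false", "server.insecure": "false"}

# nested under the subchart name, as in the values.yaml above -> override applies
good = {"argo-cd": {"configs": {"params": {"server.enable.gzip": "true"}}}}

# accidentally placed at the top level -> silently ignored, default wins
bad = {"configs": {"params": {"server.enable.gzip": "true"}}}

print(effective_params(defaults, good)["server.enable.gzip"])  # -> true
print(effective_params(defaults, bad)["server.enable.gzip"])   # -> false
```

The values.yaml in this report does nest under argo-cd:, so this is only one possible explanation to rule out (e.g. by diffing `helm template` output against the live ConfigMap), not a confirmed cause.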
rikirolly commented 1 year ago

Could it depend on the "insecure: true" setting?

AydinChavez commented 1 year ago

It is set. I also tried setting it via extraParams before.

Syntax3rror404 commented 1 year ago

I get the same error since updating Argo from 4.4.0 to 5.5.0.

Syntax3rror404 commented 1 year ago

Restarting the pods did not help either.

Syntax3rror404 commented 1 year ago

Downgrading Argo via Helm to 4.4.8 fixed this issue.

rikirolly commented 1 year ago

> Downgrading Argo via Helm to 4.4.8 fixed this issue.

Thanks a lot. I will also give it a try.

MeNsaaH commented 1 year ago

Same issue here. Is it worth opening an issue in https://github.com/argoproj/argo-helm/issues?

AydinChavez commented 1 year ago

> Same issue here. Is it worth opening an issue in https://github.com/argoproj/argo-helm/issues?

Done: https://github.com/argoproj/argo-helm/issues/1479

Syntax3rror404 commented 1 year ago

It's a bug in 5.5.0; the latest 4.4 version (4.4.8) works.

AydinChavez commented 1 year ago

> It's a bug in 5.5.0; the latest 4.4 version (4.4.8) works.

Would you mind sharing your values.yml? I still get the same issue with 4.4.8. Do you know which image tag chart v4.4.8 uses?

Syntax3rror404 commented 1 year ago

I use Rancher :D I just pick "select version", choose 4.4.8, and it works again.

Syntax3rror404 commented 1 year ago

Sorry, it's very long.

My helm values, trimmed to the readable part (the Rancher-generated objectset.rio.cattle.io/applied annotation, the managedFields block, and the chart's default values are omitted):

```yaml
apiVersion: catalog.cattle.io/v1
kind: App
metadata:
  name: argocd
  namespace: argocd
  ownerReferences:
    - apiVersion: v1
      kind: Secret
      name: sh.helm.release.v1.argocd.v3
spec:
  chart:
    metadata:
      annotations:
        catalog.cattle.io/ui-source-repo: argo
        catalog.cattle.io/ui-source-repo-type: cluster
      apiVersion: v2
      appVersion: v2.4.12
      description: A Helm chart for Argo CD, a declarative, GitOps continuous
        delivery tool for Kubernetes.
      home: https://github.com/argoproj/argo-helm
      name: argo-cd
      version: 5.4.8
    values:
      # chart default values follow in the original paste (truncated)
```
livenessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 logFormat: "" logLevel: "" metrics: applicationLabels: enabled: false labels: null enabled: false rules: enabled: false spec: null service: annotations: {} labels: {} portName: http-metrics servicePort: 8082 serviceMonitor: additionalLabels: {} enabled: false interval: 30s metricRelabelings: null namespace: "" relabelings: null scheme: "" selector: {} tlsConfig: {} name: application-controller nodeSelector: {} pdb: annotations: {} enabled: false labels: {} podAnnotations: {} podLabels: {} priorityClassName: "" readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 replicas: 1 resources: {} service: annotations: {} labels: {} port: 8082 portName: https-controller serviceAccount: annotations: {} automountServiceAccountToken: true create: true name: argocd-application-controller tolerations: null topologySpreadConstraints: null volumeMounts: null volumes: null crds: annotations: {} install: true keep: true createAggregateRoles: false dex: affinity: {} containerPortGrpc: 5557 containerPortHttp: 5556 containerPortMetrics: 5558 containerSecurityContext: {} enabled: true env: null envFrom: null extraArgs: null extraContainers: null image: imagePullPolicy: "" repository: ghcr.io/dexidp/dex tag: v2.32.0 imagePullSecrets: null initContainers: null initImage: imagePullPolicy: "" repository: "" tag: "" livenessProbe: enabled: false failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 metrics: enabled: false service: annotations: {} labels: {} portName: http-metrics serviceMonitor: additionalLabels: {} enabled: false interval: 30s metricRelabelings: null namespace: "" relabelings: null scheme: "" selector: {} tlsConfig: {} name: dex-server nodeSelector: {} pdb: annotations: {} enabled: false labels: {} podAnnotations: {} podLabels: {} priorityClassName: "" 
readinessProbe: enabled: false failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} serviceAccount: annotations: {} automountServiceAccountToken: true create: true name: argocd-dex-server servicePortGrpc: 5557 servicePortGrpcName: grpc servicePortHttp: 5556 servicePortHttpName: http servicePortMetrics: 5558 tolerations: null topologySpreadConstraints: null volumeMounts: - mountPath: /shared name: static-files volumes: - emptyDir: {} name: static-files externalRedis: existingSecret: "" host: "" password: "" port: 6379 secretAnnotations: {} extraObjects: null fullnameOverride: "" global: additionalLabels: {} hostAliases: null image: imagePullPolicy: IfNotPresent repository: quay.io/argoproj/argocd tag: "" imagePullSecrets: null logging: format: text level: info networkPolicy: create: false defaultDenyIngress: false podAnnotations: {} podLabels: {} securityContext: {} kubeVersionOverride: "" nameOverride: argocd notifications: affinity: {} argocdUrl: null bots: slack: affinity: {} containerSecurityContext: {} enabled: false image: imagePullPolicy: "" repository: "" tag: "" imagePullSecrets: null nodeSelector: {} resources: {} securityContext: runAsNonRoot: true service: annotations: {} port: 80 type: LoadBalancer serviceAccount: annotations: {} create: true name: argocd-notifications-bot tolerations: null updateStrategy: type: Recreate cm: create: true containerSecurityContext: {} context: {} enabled: true extraArgs: null extraEnv: null extraVolumeMounts: null extraVolumes: null image: imagePullPolicy: "" repository: "" tag: "" imagePullSecrets: null logFormat: "" logLevel: "" metrics: enabled: false port: 9001 service: annotations: {} labels: {} portName: http-metrics serviceMonitor: additionalLabels: {} enabled: false scheme: "" selector: {} tlsConfig: {} name: notifications-controller nodeSelector: {} notifiers: {} podAnnotations: {} podLabels: {} priorityClassName: "" resources: {} secret: annotations: 
{} create: true items: {} securityContext: runAsNonRoot: true serviceAccount: annotations: {} create: true name: argocd-notifications-controller subscriptions: null templates: {} tolerations: null triggers: {} updateStrategy: type: Recreate openshift: enabled: false redis: affinity: {} containerPort: 6379 containerSecurityContext: {} enabled: true env: null envFrom: null extraArgs: null extraContainers: null image: imagePullPolicy: IfNotPresent repository: public.ecr.aws/docker/library/redis tag: 7.0.4-alpine imagePullSecrets: null initContainers: null metrics: containerPort: 9121 enabled: false image: imagePullPolicy: IfNotPresent repository: public.ecr.aws/bitnami/redis-exporter tag: 1.26.0-debian-10-r2 resources: {} service: annotations: {} clusterIP: None labels: {} portName: http-metrics servicePort: 9121 type: ClusterIP serviceMonitor: additionalLabels: {} enabled: false interval: 30s metricRelabelings: null namespace: "" relabelings: null scheme: "" selector: {} tlsConfig: {} name: redis nodeSelector: {} pdb: annotations: {} enabled: false labels: {} podAnnotations: {} podLabels: {} priorityClassName: "" resources: {} securityContext: runAsNonRoot: true runAsUser: 999 service: annotations: {} labels: {} serviceAccount: annotations: {} automountServiceAccountToken: false create: false name: "" servicePort: 6379 tolerations: null topologySpreadConstraints: null volumeMounts: null volumes: null redis-ha: additionalAffinities: {} affinity: "" auth: false authKey: auth configmap: labels: {} configmapTest: image: repository: koalaman/shellcheck tag: v0.5.0 resources: {} containerSecurityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault emptyDir: {} enabled: false exporter: address: localhost enabled: true extraArgs: {} image: oliver006/redis_exporter livenessProbe: initialDelaySeconds: 15 periodSeconds: 15 timeoutSeconds: 3 port: 9121 portName: exporter-port pullPolicy: 
IfNotPresent readinessProbe: initialDelaySeconds: 15 periodSeconds: 15 successThreshold: 2 timeoutSeconds: 3 resources: {} scrapePath: /metrics serviceMonitor: enabled: false tag: v1.43.0 extraContainers: null extraInitContainers: null extraLabels: {} extraVolumes: null global: additionalLabels: {} cattle: clusterId: c-m-9sw4zt6m clusterName: miami rkePathPrefix: "" rkeWindowsPathPrefix: "" systemDefaultRegistry: "" systemProjectId: p-7z4lh url: https://rancher.labza hostAliases: null image: imagePullPolicy: IfNotPresent repository: quay.io/argoproj/argocd tag: "" imagePullSecrets: null logging: format: text level: info networkPolicy: create: false defaultDenyIngress: false podAnnotations: {} podLabels: {} securityContext: {} systemDefaultRegistry: "" haproxy: IPv6: enabled: true additionalAffinities: {} affinity: "" annotations: {} checkFall: 1 checkInterval: 1s containerPort: 6379 containerSecurityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true seccompProfile: type: RuntimeDefault emptyDir: {} enabled: true hardAntiAffinity: true image: pullPolicy: IfNotPresent repository: haproxy tag: 2.6.4 imagePullSecrets: null init: resources: {} labels: {} lifecycle: {} metrics: enabled: true port: 9101 portName: http-exporter-port scrapePath: /metrics serviceMonitor: enabled: false podDisruptionBudget: {} readOnly: enabled: false port: 6380 replicas: 3 resources: {} securityContext: fsGroup: 99 runAsNonRoot: true runAsUser: 99 service: annotations: null labels: {} loadBalancerIP: null type: ClusterIP serviceAccount: create: true serviceAccountName: redis-sa servicePort: 6379 stickyBalancing: false tests: resources: {} timeout: check: 2s client: 330s connect: 4s server: 330s tls: certMountPath: /tmp/ enabled: false keyName: null secretName: "" hardAntiAffinity: true hostPath: chown: true image: pullPolicy: IfNotPresent repository: redis tag: 7.0.4-alpine imagePullSecrets: null init: resources: {} labels: {} networkPolicy: annotations: {} 
egressRules: null enabled: false ingressRules: null labels: {} nodeSelector: {} persistentVolume: accessModes: - ReadWriteOnce annotations: {} enabled: false labels: {} size: 10Gi podDisruptionBudget: {} podManagementPolicy: OrderedReady prometheusRule: additionalLabels: {} enabled: false interval: 10s namespace: null rules: null rbac: create: true redis: annotations: {} config: maxmemory: "0" maxmemory-policy: volatile-lru min-replicas-max-lag: 5 min-replicas-to-write: 1 rdbchecksum: "yes" rdbcompression: "yes" repl-diskless-sync: "yes" save: '""' disableCommands: - FLUSHDB - FLUSHALL extraVolumeMounts: null lifecycle: preStop: exec: command: - /bin/sh - /readonly-config/trigger-failover-if-master.sh livenessProbe: failureThreshold: 5 initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 15 masterGroupName: argocd port: 6379 readinessProbe: failureThreshold: 5 initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 15 resources: {} terminationGracePeriodSeconds: 60 updateStrategy: type: RollingUpdate replicas: 3 restore: existingSecret: false s3: access_key: "" region: "" secret_key: "" source: false ssh: key: "" source: false timeout: 600 ro_replicas: "" securityContext: fsGroup: 1000 runAsNonRoot: true runAsUser: 1000 sentinel: auth: false authKey: sentinel-password config: down-after-milliseconds: 10000 failover-timeout: 180000 maxclients: 10000 parallel-syncs: 5 extraVolumeMounts: null lifecycle: {} livenessProbe: failureThreshold: 5 initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 15 port: 26379 quorum: 2 readinessProbe: failureThreshold: 5 initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 3 timeoutSeconds: 15 resources: {} serviceAccount: automountToken: false create: true serviceLabels: {} splitBrainDetection: interval: 60 resources: {} sysctlImage: command: null enabled: false mountHostSys: false pullPolicy: Always registry: docker.io repository: busybox resources: {} tag: 
1.34.1 tls: caCertFile: ca.crt certFile: redis.crt keyFile: redis.key topologySpreadConstraints: enabled: false maxSkew: "" topologyKey: "" whenUnsatisfiable: "" repoServer: affinity: {} autoscaling: behavior: {} enabled: false maxReplicas: 5 minReplicas: 1 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 clusterAdminAccess: enabled: false clusterRoleRules: enabled: false rules: null containerPort: 8081 containerSecurityContext: {} copyutil: resources: {} env: null envFrom: null extraArgs: null extraContainers: null image: imagePullPolicy: "" repository: "" tag: "" imagePullSecrets: null initContainers: null livenessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 logFormat: "" logLevel: "" metrics: enabled: false service: annotations: {} labels: {} portName: http-metrics servicePort: 8084 serviceMonitor: additionalLabels: {} enabled: false interval: 30s metricRelabelings: null namespace: "" relabelings: null scheme: "" selector: {} tlsConfig: {} name: repo-server nodeSelector: {} pdb: annotations: {} enabled: false labels: {} podAnnotations: {} podLabels: {} priorityClassName: "" rbac: null readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 replicas: 1 resources: {} service: annotations: {} labels: {} port: 8081 portName: https-repo-server serviceAccount: annotations: {} automountServiceAccountToken: true create: true name: "" tolerations: null topologySpreadConstraints: null volumeMounts: null volumes: null server: GKEbackendConfig: enabled: false spec: {} GKEfrontendConfig: enabled: false spec: {} GKEmanagedCertificate: domains: - argocd.example.com enabled: false affinity: {} autoscaling: behavior: {} enabled: false maxReplicas: 5 minReplicas: 1 targetCPUUtilizationPercentage: 50 targetMemoryUtilizationPercentage: 50 certificate: additionalHosts: null domain: argocd.example.com duration: "" enabled: false issuer: 
group: "" kind: "" name: "" privateKey: algorithm: RSA encoding: PKCS1 rotationPolicy: Never size: 2048 renewBefore: "" secretName: argocd-server-tls clusterAdminAccess: enabled: true config: admin.enabled: "true" application.instanceLabelKey: argocd.argoproj.io/instance exec.enabled: "false" server.rbac.log.enforce.enable: "false" url: "" configAnnotations: {} configEnabled: true containerPort: 8080 containerSecurityContext: {} env: null envFrom: null extensions: contents: null enabled: false image: imagePullPolicy: IfNotPresent repository: ghcr.io/argoproj-labs/argocd-extensions tag: v0.1.0 resources: {} extraArgs: null extraContainers: null image: imagePullPolicy: "" repository: "" tag: "" imagePullSecrets: null ingress: annotations: {} enabled: false extraPaths: null hosts: null https: false ingressClassName: "" labels: {} pathType: Prefix paths: - / tls: null ingressGrpc: annotations: {} awsALB: backendProtocolVersion: HTTP2 serviceType: NodePort enabled: false extraPaths: null hosts: null https: false ingressClassName: "" isAWSALB: false labels: {} pathType: Prefix paths: - / tls: null initContainers: null lifecycle: {} livenessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 logFormat: "" logLevel: "" metrics: enabled: false service: annotations: {} labels: {} portName: http-metrics servicePort: 8083 serviceMonitor: additionalLabels: {} enabled: false interval: 30s metricRelabelings: null namespace: "" relabelings: null scheme: "" selector: {} tlsConfig: {} name: server nodeSelector: {} pdb: annotations: {} enabled: false labels: {} podAnnotations: {} podLabels: {} priorityClassName: "" rbacConfig: {} rbacConfigAnnotations: {} rbacConfigCreate: true readinessProbe: failureThreshold: 3 initialDelaySeconds: 10 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 replicas: 1 resources: {} route: annotations: {} enabled: false hostname: "" termination_policy: None termination_type: passthrough service: 
annotations: {} externalIPs: null externalTrafficPolicy: "" labels: {} loadBalancerIP: "" loadBalancerSourceRanges: null namedTargetPort: true nodePortHttp: 30080 nodePortHttps: 30443 servicePortHttp: 80 servicePortHttpName: http servicePortHttps: 443 servicePortHttpsName: https sessionAffinity: "" type: ClusterIP serviceAccount: annotations: {} automountServiceAccountToken: true create: true name: argocd-server staticAssets: enabled: true tolerations: null topologySpreadConstraints: null volumeMounts: null volumes: null helmVersion: 3 info: description: Upgrade complete firstDeployed: "2022-09-02T14:21:02Z" lastDeployed: "2022-09-21T15:13:46Z" notes: | In order to access the server UI you have the following options: 1. kubectl port-forward service/argocd-server -n argocd 8080:443 and then open the browser on http://localhost:8080 and accept the certificate 2. enable ingress in the values file `server.ingress.enabled` and either - Add the annotation for ssl passthrough: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-1-ssl-passthrough - Add the `--insecure` flag to `server.extraArgs` in the values file and terminate SSL at your ingress: https://github.com/argoproj/argo-cd/blob/master/docs/operator-manual/ingress.md#option-2-multiple-ingress-objects-and-hosts After reaching the UI the first time you can login with username: admin and the random password generated during the installation. You can find the password by running: kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d (You should delete the initial secret afterwards as suggested by the Getting Started Guide: https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md#4-login-using-the-cli) readme: | # Argo CD Chart A Helm chart for Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes. 
Source code can be found [here](https://argo-cd.readthedocs.io/en/stable/)

## Additional Information

This is a **community maintained** chart. This chart installs [argo-cd](https://argo-cd.readthedocs.io/en/stable/), a declarative, GitOps continuous delivery tool for Kubernetes. The default installation is intended to be similar to the provided Argo CD [releases](https://github.com/argoproj/argo-cd/releases).

If you want to avoid including sensitive information unencrypted (clear text) in your version control, make use of the [declarative setup](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/) of Argo CD. For instance, rather than adding repositories and their keys in your Helm values, you could deploy [SealedSecrets](https://github.com/bitnami-labs/sealed-secrets) with contents as seen in this [repositories section](https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories), or use any other secrets-manager service (e.g. HashiCorp Vault, AWS/GCP Secrets Manager).

## High Availability

This chart installs the non-HA version of Argo CD by default. If you want to run Argo CD in HA mode, you can use one of the example values in the next sections. Please also have a look at the upstream [Operator Manual regarding High Availability](https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/) to understand how scaling of Argo CD works in detail.

> **Warning:**
> You need at least 3 worker nodes as the HA mode of Redis enforces Pods to run on separate nodes.
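Before enabling `redis-ha`, it can help to confirm the node count up front. A minimal sketch (the `count_ready` helper and the sample node listing are illustrative assumptions; in practice, pipe real `kubectl get nodes --no-headers` output into it):

```shell
# Count "Ready" nodes from `kubectl get nodes --no-headers` output.
# redis-ha's hard pod anti-affinity places each Redis replica on a separate node,
# so fewer than 3 Ready workers leaves replicas Pending.
count_ready() { grep -c ' Ready'; }

# Sample output standing in for: kubectl get nodes --no-headers
sample='node-1   Ready    worker   10d   v1.24.0
node-2   Ready    worker   10d   v1.24.0
node-3   Ready    worker   10d   v1.24.0'

ready=$(printf '%s\n' "$sample" | count_ready)
echo "Ready nodes: $ready"   # redis-ha needs at least 3
```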
### HA mode with autoscaling

```yaml
redis-ha:
  enabled: true
controller:
  replicas: 1
server:
  autoscaling:
    enabled: true
    minReplicas: 2
repoServer:
  autoscaling:
    enabled: true
    minReplicas: 2
applicationSet:
  replicas: 2
```

### HA mode without autoscaling

```yaml
redis-ha:
  enabled: true
controller:
  replicas: 1
server:
  replicas: 2
repoServer:
  replicas: 2
applicationSet:
  replicas: 2
```

### Synchronizing Changes from Original Repository

In the original [Argo CD repository](https://github.com/argoproj/argo-cd/) a [`manifests/install.yaml`](https://github.com/argoproj/argo-cd/blob/master/manifests/install.yaml) is generated using `kustomize`. It's the basis for the installation as [described in the docs](https://argo-cd.readthedocs.io/en/stable/getting_started/#1-install-argo-cd).

When installing Argo CD using this Helm chart the user should have a similar experience and configuration rolled out. Hence, it makes sense to try to achieve a similar output of rendered `.yaml` resources when calling `helm template` using the default settings in `values.yaml`.

To update the templates and default settings in `values.yaml` it may come in handy to look up the diff of `manifests/install.yaml` between two versions. This can be done directly via GitHub by looking at `manifests/install.yaml` in a compare view: https://github.com/argoproj/argo-cd/compare/v1.8.7...v2.0.0#files_bucket

Or clone the repository and do a local `git diff`:

```bash
git clone https://github.com/argoproj/argo-cd.git
cd argo-cd
git diff v1.8.7 v2.0.0 -- manifests/install.yaml
```

Changes in the `CustomResourceDefinition` resources can easily be fixed by copying 1:1 from the [`manifests/crds` folder](https://github.com/argoproj/argo-cd/tree/master/manifests/crds) into this [`charts/argo-cd/templates/crds` folder](https://github.com/argoproj/argo-helm/tree/master/charts/argo-cd/templates/crds).
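One note specific to this issue's setup: when the chart is installed as a dependency of an umbrella chart (as in the reported `Chart.yaml`), Helm scopes subchart values under the dependency's name, so HA values like those in this README must be nested under the `argo-cd` key. A sketch mirroring the reporter's layout:

```yaml
# values.yaml of the umbrella chart — subchart values live under the dependency name
argo-cd:
  redis-ha:
    enabled: true
  controller:
    replicas: 1
  server:
    replicas: 2
  repoServer:
    replicas: 2
  applicationSet:
    replicas: 2
```

Values placed at the top level (not nested) are silently ignored by the subchart, which is a common source of "my HA values had no effect" confusion.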
## Upgrading

### Custom resource definitions

Some users would prefer to install the CRDs _outside_ of the chart. You can disable the CRD installation of this chart by using `--set crds.install=false` when installing the chart.

Helm cannot upgrade custom resource definitions [by design](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#some-caveats-and-explanations). Please use `kubectl` to upgrade CRDs manually from the [templates/crds](templates/crds/) folder or via the manifests from the upstream project repo:

```bash
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=<appVersion>"

# Eg. version v2.4.9
kubectl apply -k "https://github.com/argoproj/argo-cd/manifests/crds?ref=v2.4.9"
```

### 5.2.0

Custom resource definitions were moved to the `templates` folder so they can be managed by Helm. To adopt already created CRDs, please use the following command:

```bash
YOUR_ARGOCD_NAMESPACE="" # e.g. argo-cd
YOUR_ARGOCD_RELEASENAME="" # e.g. argo-cd

for crd in "applications.argoproj.io" "applicationsets.argoproj.io" "argocdextensions.argoproj.io" "appprojects.argoproj.io"; do
  kubectl label --overwrite crd $crd app.kubernetes.io/managed-by=Helm
  kubectl annotate --overwrite crd $crd meta.helm.sh/release-namespace="$YOUR_ARGOCD_NAMESPACE"
  kubectl annotate --overwrite crd $crd meta.helm.sh/release-name="$YOUR_ARGOCD_RELEASENAME"
done
```

### 5.0.0

This version **removes support for**:

- deprecated repository credentials (parameter `configs.repositoryCredentials`)
- the option to run the application controller as a Deployment
- the parameters `server.additionalApplications` and `server.additionalProjects`

Please carefully read the following section if you are using these parameters!

In order to upgrade Applications and Projects safely against the CRDs' upgrade, `server.additionalApplications` and `server.additionalProjects` were moved to [argocd-apps](../argocd-apps).
If you are using `server.additionalApplications` or `server.additionalProjects`, you can migrate to [argocd-apps](../argocd-apps) as below:

1. Add the [helm.sh/resource-policy annotation](https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource) to avoid the resources being removed when upgrading the Helm chart.

   You can keep your existing resources by adding `"helm.sh/resource-policy": keep` to `additionalAnnotations`, under the `server.additionalApplications` and `server.additionalProjects` blocks, and running `helm upgrade`. E.g.:

   ```yaml
   server:
     additionalApplications:
       - name: guestbook
         namespace: argocd
         additionalLabels: {}
         additionalAnnotations:
           "helm.sh/resource-policy": keep # <-- add this
         finalizers:
           - resources-finalizer.argocd.argoproj.io
         project: guestbook
         source:
           repoURL: https://github.com/argoproj/argocd-example-apps.git
           targetRevision: HEAD
           path: guestbook
           directory:
             recurse: true
         destination:
           server: https://kubernetes.default.svc
           namespace: guestbook
         syncPolicy:
           automated:
             prune: false
             selfHeal: false
         ignoreDifferences:
           - group: apps
             kind: Deployment
             jsonPointers:
               - /spec/replicas
         info:
           - name: url
             value: https://argoproj.github.io/
   ```

   You can also keep your existing resources by running the following scripts:

   ```bash
   # keep Applications
   for app in "guestbook"; do
     kubectl annotate --overwrite application $app helm.sh/resource-policy=keep
   done

   # keep Projects
   for project in "guestbook"; do
     kubectl annotate --overwrite appproject $project helm.sh/resource-policy=keep
   done
   ```

2. Upgrade the argo-cd Helm chart to v5.0.0.

3. Remove the keep [helm.sh/resource-policy annotation](https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource):

   ```bash
   # delete annotations from Applications
   for app in "guestbook"; do
     kubectl annotate --overwrite application $app helm.sh/resource-policy-
   done

   # delete annotations from Projects
   for project in "guestbook"; do
     kubectl annotate --overwrite appproject $project helm.sh/resource-policy-
   done
   ```

4. Adopt the existing resources to [argocd-apps](../argocd-apps).

### 4.9.0

This version starts to use the upstream image with the applicationset binary. The start command was changed from `applicationset-controller` to `argocd-applicationset-controller`.

### 4.3.*

With this minor version, the notification notifier's `service.slack` is no longer configured by default.

### 4.0.0 and above

This Helm chart version deploys Argo CD v2.3. Argo CD Notifications and ApplicationSet are now part of Argo CD; you no longer need to install them separately. The Notifications and ApplicationSet components **are bundled into the default** Argo CD installation. Please read the [v2.2 to 2.3 upgrade instructions] in the upstream repository.

### 3.13.0

This release removes the flag `--staticassets` from the argocd server as it has been dropped upstream. If this flag needs to be enabled, e.g. for older releases of Argo CD, it can be passed via the `server.extraArgs` field.

### 3.10.2

Argo CD has recently deprecated the flag `--staticassets`, and from chart version `3.10.2` it has been disabled by default. It can be re-enabled by setting `server.staticAssets.enabled` to `true`.

### 3.8.1

This bugfix version potentially introduces a rename (and recreation) of one or more ServiceAccounts.
It _only happens_ when you use one of these customizations:

```yaml
# Case 1) - only happens when you do not specify a custom name (repoServer.serviceAccount.name)
repoServer:
  serviceAccount:
    create: true

# Case 2)
controller:
  serviceAccount:
    name: "" # or

# Case 3)
dex:
  serviceAccount:
    name: "" # or

# Case 4)
server:
  serviceAccount:
    name: "" # or
```

Please check if you are affected by one of these cases **before you upgrade**, especially when you use **cloud IAM roles for service accounts** (e.g. IRSA on AWS or Workload Identity for GKE).

### 3.2.*

With this minor version we introduced the evaluation for the ingress manifest (depending on the capabilities version), see [Pull Request](https://github.com/argoproj/argo-helm/pull/637).

[Issue 703](https://github.com/argoproj/argo-helm/issues/703) reported that the capabilities evaluation is **not handled correctly when deploying the chart via an Argo CD instance**, especially when deploying on clusters running a version prior to `1.19` (which misses `Ingress` on apiVersion `networking.k8s.io/v1`).

If you are running a cluster version prior to `1.19` you can avoid this issue by directly installing chart version `3.6.0` and setting `kubeVersionOverride` like:

```yaml
kubeVersionOverride: "1.18.0"
```

Then you should no longer encounter this issue.

### 3.0.0 and above

The Helm apiVersion switched to `v2`. Requires Helm `3.0.0` or above to install. [Read more](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/) on how to migrate your release from Helm 2 to Helm 3.

### 2.14.7 and above

The `matchLabels` key in the Argo CD Application Controller is no longer hard-coded. Note that labels are immutable, so caution should be exercised when making changes to this resource.

### 2.10.x to 2.11.0

The application controller is now available as a `StatefulSet` when the `controller.enableStatefulSet` flag is set to true.
Depending on your Helm deployment this may be a downtime or breaking change if enabled when using HA and will become the default in 3.x. ### 1.8.7 to 2.x.x `controller.extraArgs`, `repoServer.extraArgs` and `server.extraArgs` are now arrays of strings instead of a map What was ```yaml server: extraArgs: insecure: "" ``` is now ```yaml server: extraArgs: - --insecure ``` ## Prerequisites - Kubernetes 1.7+ - Helm v3.0.0+ ## Installing the Chart To install the chart with the release name `my-release`: ```console $ helm repo add argo https://argoproj.github.io/argo-helm "argo" has been added to your repositories $ helm install my-release argo/argo-cd NAME: my-release ... ``` ## General parameters | Key | Type | Default | Description | |-----|------|---------|-------------| | apiVersionOverrides.autoscaling | string | `""` | String to override apiVersion of autoscaling rendered by this helm chart | | apiVersionOverrides.certmanager | string | `""` | String to override apiVersion of certmanager resources rendered by this helm chart | | apiVersionOverrides.ingress | string | `""` | String to override apiVersion of ingresses rendered by this helm chart | | crds.annotations | object | `{}` | Annotations to be added to all CRDs | | crds.install | bool | `true` | Install and upgrade CRDs | | crds.keep | bool | `true` | Keep CRDs on chart uninstall | | createAggregateRoles | bool | `false` | Create clusterroles that extend existing clusterroles to interact with argo-cd crds | | extraObjects | list | `[]` | Array of extra K8s manifests to deploy | | fullnameOverride | string | `""` | String to fully override `"argo-cd.fullname"` | | global.additionalLabels | object | `{}` | Additional labels to add to all resources | | global.hostAliases | list | `[]` | Mapping between IP and hostnames that will be injected as entries in the pod's hosts files | | global.image.imagePullPolicy | string | `"IfNotPresent"` | If defined, a imagePullPolicy applied to all Argo CD deployments | | 
global.image.repository | string | `"quay.io/argoproj/argocd"` | If defined, a repository applied to all Argo CD deployments | | global.image.tag | string | `""` | Overrides the global Argo CD image tag whose default is the chart appVersion | | global.imagePullSecrets | list | `[]` | If defined, uses a Secret to pull an image from a private Docker registry or repository | | global.logging.format | string | `"text"` | Set the global logging format. Either: `text` or `json` | | global.logging.level | string | `"info"` | Set the global logging level. One of: `debug`, `info`, `warn` or `error` | | global.networkPolicy.create | bool | `false` | Create NetworkPolicy objects for all components | | global.networkPolicy.defaultDenyIngress | bool | `false` | Default deny all ingress traffic | | global.podAnnotations | object | `{}` | Annotations for all deployed pods | | global.podLabels | object | `{}` | Labels for all deployed pods | | global.securityContext | object | `{}` | Toggle and define securityContext.
See [values.yaml] | | kubeVersionOverride | string | `""` | Override the Kubernetes version, which is used to evaluate certain manifests | | nameOverride | string | `"argocd"` | Provide a name in place of `argocd` | | openshift.enabled | bool | `false` | enables using arbitrary uid for argo repo server | ## Argo CD Configs | Key | Type | Default | Description | |-----|------|---------|-------------| | configs.clusterCredentials | list | `[]` (See [values.yaml]) | Provide one or multiple [external cluster credentials] | | configs.credentialTemplates | object | `{}` | Repository credentials to be used as Templates for other repos | | configs.credentialTemplatesAnnotations | object | `{}` | Annotations to be added to `configs.credentialTemplates` Secret | | configs.gpgKeys | object | `{}` (See [values.yaml]) | [GnuPG](https://argo-cd.readthedocs.io/en/stable/user-guide/gpg-verification/) keys to add to the key ring | | configs.gpgKeysAnnotations | object | `{}` | GnuPG key ring annotations | | configs.knownHosts.data.ssh_known_hosts | string | See [values.yaml] | Known Hosts | | configs.knownHostsAnnotations | object | `{}` | Known Hosts configmap annotations | | configs.repositories | object | `{}` | Repositories list to be used by applications | | configs.repositoriesAnnotations | object | `{}` | Annotations to be added to `configs.repositories` Secret | | configs.secret.annotations | object | `{}` | Annotations to be added to argocd-secret | | configs.secret.argocdServerAdminPassword | string | `""` | Bcrypt hashed admin password | | configs.secret.argocdServerAdminPasswordMtime | string | `""` (defaults to current time) | Admin password modification time. Eg. 
`"2006-01-02T15:04:05Z"` | | configs.secret.argocdServerTlsConfig | object | `{}` | Argo TLS Data | | configs.secret.bitbucketServerSecret | string | `""` | Shared secret for authenticating BitbucketServer webhook events | | configs.secret.bitbucketUUID | string | `""` | UUID for authenticating Bitbucket webhook events | | configs.secret.createSecret | bool | `true` | Create the argocd-secret | | configs.secret.extra | object | `{}` | Additional secrets to be added to argocd-secret | | configs.secret.githubSecret | string | `""` | Shared secret for authenticating GitHub webhook events | | configs.secret.gitlabSecret | string | `""` | Shared secret for authenticating GitLab webhook events | | configs.secret.gogsSecret | string | `""` | Shared secret for authenticating Gogs webhook events | | configs.styles | string | `""` (See [values.yaml]) | Define custom [CSS styles] for your argo instance. This setting will automatically mount the provided CSS and reference it in the argo configuration.
| | configs.tlsCerts | object | See [values.yaml] | TLS certificate | | configs.tlsCertsAnnotations | object | `{}` | TLS certificate configmap annotations | ## Argo CD Controller | Key | Type | Default | Description | |-----|------|---------|-------------| | controller.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | | controller.args.appHardResyncPeriod | string | `"0"` | define the application controller `--app-hard-resync` | | controller.args.appResyncPeriod | string | `"180"` | define the application controller `--app-resync` | | controller.args.operationProcessors | string | `"10"` | define the application controller `--operation-processors` | | controller.args.repoServerTimeoutSeconds | string | `"60"` | define the application controller `--repo-server-timeout-seconds` | | controller.args.selfHealTimeout | string | `"5"` | define the application controller `--self-heal-timeout-seconds` | | controller.args.statusProcessors | string | `"20"` | define the application controller `--status-processors` | | controller.clusterAdminAccess.enabled | bool | `true` | Enable RBAC for local cluster deployments | | controller.clusterRoleRules.enabled | bool | `false` | Enable custom rules for the application controller's ClusterRole resource | | controller.clusterRoleRules.rules | list | `[]` | List of custom rules for the application controller's ClusterRole resource | | controller.containerPort | int | `8082` | Application controller listening port | | controller.containerSecurityContext | object | `{}` | Application controller container-level security context | | controller.env | list | `[]` | Environment variables to pass to application controller | | controller.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to application controller | | controller.extraArgs | list | `[]` | Additional command line arguments to pass to application controller | | controller.extraContainers | list | `[]` | Additional containers to be added to the 
application controller pod | | controller.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the application controller | | controller.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the application controller | | controller.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the application controller | | controller.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry | | controller.initContainers | list | `[]` | Init containers to add to the application controller pod | | controller.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | | controller.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | | controller.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | | controller.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | | controller.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | | controller.logFormat | string | `""` (defaults to global.logging.format) | Application controller log format. Either `text` or `json` | | controller.logLevel | string | `""` (defaults to global.logging.level) | Application controller log level. 
One of: `debug`, `info`, `warn` or `error` | | controller.metrics.applicationLabels.enabled | bool | `false` | Enables additional labels in argocd_app_labels metric | | controller.metrics.applicationLabels.labels | list | `[]` | Additional labels | | controller.metrics.enabled | bool | `false` | Deploy metrics service | | controller.metrics.rules.enabled | bool | `false` | Deploy a PrometheusRule for the application controller | | controller.metrics.rules.spec | list | `[]` | PrometheusRule.Spec for the application controller | | controller.metrics.service.annotations | object | `{}` | Metrics service annotations | | controller.metrics.service.labels | object | `{}` | Metrics service labels | | controller.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | | controller.metrics.service.servicePort | int | `8082` | Metrics service port | | controller.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | | controller.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | | controller.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | | controller.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | | controller.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | | controller.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | | controller.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | | controller.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | | controller.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | | controller.name | string | `"application-controller"` | Application controller name string | | controller.nodeSelector | object | `{}` | 
[Node selector] | | controller.pdb.annotations | object | `{}` | Annotations to be added to application controller pdb | | controller.pdb.enabled | bool | `false` | Deploy a Poddisruptionbudget for the application controller | | controller.pdb.labels | object | `{}` | Labels to be added to application controller pdb | | controller.podAnnotations | object | `{}` | Annotations to be added to application controller pods | | controller.podLabels | object | `{}` | Labels to be added to application controller pods | | controller.priorityClassName | string | `""` | Priority class for the application controller pods | | controller.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | | controller.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | | controller.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | | controller.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | | controller.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | | controller.replicas | int | `1` | The number of application controller pods to run. Additional replicas will cause sharding of managed clusters across number of replicas. 
| | controller.resources | object | `{}` | Resource limits and requests for the application controller pods | | controller.service.annotations | object | `{}` | Application controller service annotations | | controller.service.labels | object | `{}` | Application controller service labels | | controller.service.port | int | `8082` | Application controller service port | | controller.service.portName | string | `"https-controller"` | Application controller service port name | | controller.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | | controller.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | | controller.serviceAccount.create | bool | `true` | Create a service account for the application controller | | controller.serviceAccount.name | string | `"argocd-application-controller"` | Service account name | | controller.tolerations | list | `[]` | [Tolerations] for use with node taints | | controller.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the application controller | | controller.volumeMounts | list | `[]` | Additional volumeMounts to the application controller main container | | controller.volumes | list | `[]` | Additional volumes to the application controller pod | ## Argo Repo Server | Key | Type | Default | Description | |-----|------|---------|-------------| | repoServer.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | | repoServer.autoscaling.behavior | object | `{}` | Configures the scaling behavior of the target in both Up and Down directions. 
This is only available on HPA apiVersion `autoscaling/v2beta2` and newer | | repoServer.autoscaling.enabled | bool | `false` | Enable Horizontal Pod Autoscaler ([HPA]) for the repo server | | repoServer.autoscaling.maxReplicas | int | `5` | Maximum number of replicas for the repo server [HPA] | | repoServer.autoscaling.minReplicas | int | `1` | Minimum number of replicas for the repo server [HPA] | | repoServer.autoscaling.targetCPUUtilizationPercentage | int | `50` | Average CPU utilization percentage for the repo server [HPA] | | repoServer.autoscaling.targetMemoryUtilizationPercentage | int | `50` | Average memory utilization percentage for the repo server [HPA] | | repoServer.clusterAdminAccess.enabled | bool | `false` | Enable RBAC for local cluster deployments | | repoServer.clusterRoleRules.enabled | bool | `false` | Enable custom rules for the Repo server's Cluster Role resource | | repoServer.clusterRoleRules.rules | list | `[]` | List of custom rules for the Repo server's Cluster Role resource | | repoServer.containerPort | int | `8081` | Configures the repo server port | | repoServer.containerSecurityContext | object | `{}` | Repo server container-level security context | | repoServer.copyutil.resources | object | `{}` | Resource limits and requests for the copyutil initContainer | | repoServer.env | list | `[]` | Environment variables to pass to repo server | | repoServer.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to repo server | | repoServer.extraArgs | list | `[]` | Additional command line arguments to pass to repo server | | repoServer.extraContainers | list | `[]` | Additional containers to be added to the repo server pod | | repoServer.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the repo server | | repoServer.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the repo server | | repoServer.image.tag | string | `""` (defaults to 
global.image.tag) | Tag to use for the repo server | | repoServer.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry | | repoServer.initContainers | list | `[]` | Init containers to add to the repo server pods | | repoServer.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | | repoServer.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | | repoServer.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | | repoServer.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | | repoServer.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | | repoServer.logFormat | string | `""` (defaults to global.logging.format) | Repo server log format: Either `text` or `json` | | repoServer.logLevel | string | `""` (defaults to global.logging.level) | Repo server log level.
One of: `debug`, `info`, `warn` or `error` | | repoServer.metrics.enabled | bool | `false` | Deploy metrics service | | repoServer.metrics.service.annotations | object | `{}` | Metrics service annotations | | repoServer.metrics.service.labels | object | `{}` | Metrics service labels | | repoServer.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | | repoServer.metrics.service.servicePort | int | `8084` | Metrics service port | | repoServer.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | | repoServer.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | | repoServer.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | | repoServer.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | | repoServer.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | | repoServer.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | | repoServer.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | | repoServer.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | | repoServer.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | | repoServer.name | string | `"repo-server"` | Repo server name | | repoServer.nodeSelector | object | `{}` | [Node selector] | | repoServer.pdb.annotations | object | `{}` | Annotations to be added to Repo server pdb | | repoServer.pdb.enabled | bool | `false` | Deploy a Poddisruptionbudget for the Repo server | | repoServer.pdb.labels | object | `{}` | Labels to be added to Repo server pdb | | repoServer.podAnnotations | object | `{}` | Annotations to be added to repo server pods | | repoServer.podLabels | object | `{}` | Labels to be added to 
repo server pods | | repoServer.priorityClassName | string | `""` | Priority class for the repo server | | repoServer.rbac | list | `[]` | Repo server rbac rules | | repoServer.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | | repoServer.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | | repoServer.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | | repoServer.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | | repoServer.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | | repoServer.replicas | int | `1` | The number of repo server pods to run | | repoServer.resources | object | `{}` | Resource limits and requests for the repo server pods | | repoServer.service.annotations | object | `{}` | Repo server service annotations | | repoServer.service.labels | object | `{}` | Repo server service labels | | repoServer.service.port | int | `8081` | Repo server service port | | repoServer.service.portName | string | `"https-repo-server"` | Repo server service port name | | repoServer.serviceAccount.annotations | object | `{}` | Annotations applied to created service account | | repoServer.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account | | repoServer.serviceAccount.create | bool | `true` | Create repo server service account | | repoServer.serviceAccount.name | string | `""` | Repo server service account name | | repoServer.tolerations | list | `[]` | [Tolerations] for use with node taints | | repoServer.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the repo server | | repoServer.volumeMounts | list | `[]` | Additional 
volumeMounts to the repo server main container | | repoServer.volumes | list | `[]` | Additional volumes to the repo server pod | ## Argo Server | Key | Type | Default | Description | |-----|------|---------|-------------| | server.GKEbackendConfig.enabled | bool | `false` | Enable BackendConfig custom resource for Google Kubernetes Engine | | server.GKEbackendConfig.spec | object | `{}` | [BackendConfigSpec] | | server.GKEfrontendConfig.enabled | bool | `false` | Enable FrontendConfig custom resource for Google Kubernetes Engine | | server.GKEfrontendConfig.spec | object | `{}` | [FrontendConfigSpec] | | server.GKEmanagedCertificate.domains | list | `["argocd.example.com"]` | Domains for the Google Managed Certificate | | server.GKEmanagedCertificate.enabled | bool | `false` | Enable ManagedCertificate custom resource for Google Kubernetes Engine. | | server.affinity | object | `{}` | Assign custom [affinity] rules to the deployment | | server.autoscaling.behavior | object | `{}` | Configures the scaling behavior of the target in both Up and Down directions.
This is only available on HPA apiVersion `autoscaling/v2beta2` and newer | | server.autoscaling.enabled | bool | `false` | Enable Horizontal Pod Autoscaler ([HPA]) for the Argo CD server | | server.autoscaling.maxReplicas | int | `5` | Maximum number of replicas for the Argo CD server [HPA] | | server.autoscaling.minReplicas | int | `1` | Minimum number of replicas for the Argo CD server [HPA] | | server.autoscaling.targetCPUUtilizationPercentage | int | `50` | Average CPU utilization percentage for the Argo CD server [HPA] | | server.autoscaling.targetMemoryUtilizationPercentage | int | `50` | Average memory utilization percentage for the Argo CD server [HPA] | | server.certificate.additionalHosts | list | `[]` | Certificate manager additional hosts | | server.certificate.domain | string | `"argocd.example.com"` | Certificate primary domain (commonName) | | server.certificate.duration | string | `""` | The requested 'duration' (i.e. lifetime) of the Certificate. Value must be in units accepted by Go time.ParseDuration | | server.certificate.enabled | bool | `false` | Deploy a Certificate resource (requires cert-manager) | | server.certificate.issuer.group | string | `""` | Certificate issuer group. Set if using an external issuer. Eg. `cert-manager.io` | | server.certificate.issuer.kind | string | `""` | Certificate issuer kind. Either `Issuer` or `ClusterIssuer` | | server.certificate.issuer.name | string | `""` | Certificate issuer name. Eg. `letsencrypt` | | server.certificate.privateKey.algorithm | string | `"RSA"` | Algorithm used to generate certificate private key. One of: `RSA`, `Ed25519` or `ECDSA` | | server.certificate.privateKey.encoding | string | `"PKCS1"` | The private key cryptography standards (PKCS) encoding for private key. Either: `PKCS1` or `PKCS8` | | server.certificate.privateKey.rotationPolicy | string | `"Never"` | Rotation policy of private key when certificate is re-issued.
Either: `Never` or `Always` | | server.certificate.privateKey.size | int | `2048` | Key bit size of the private key. If algorithm is set to `Ed25519`, size is ignored. | | server.certificate.renewBefore | string | `""` | How long before the currently issued certificate's expiry cert-manager should renew the certificate. Value must be in units accepted by Go time.ParseDuration | | server.certificate.secretName | string | `"argocd-server-tls"` | The name of the Secret that will be automatically created and managed by this Certificate resource | | server.clusterAdminAccess.enabled | bool | `true` | Enable RBAC for local cluster deployments | | server.config | object | See [values.yaml] | [General Argo CD configuration] | | server.configAnnotations | object | `{}` | Annotations to be added to Argo CD ConfigMap | | server.configEnabled | bool | `true` | Manage Argo CD configmap (Declarative Setup) | | server.containerPort | int | `8080` | Configures the server port | | server.containerSecurityContext | object | `{}` | Servers container-level security context | | server.env | list | `[]` | Environment variables to pass to Argo CD server | | server.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to Argo CD server | | server.extensions.contents | list | `[]` | Extensions to be loaded into the server | | server.extensions.enabled | bool | `false` | Enable support for extensions | | server.extensions.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy for extensions | | server.extensions.image.repository | string | `"ghcr.io/argoproj-labs/argocd-extensions"` | Repository to use for extensions image | | server.extensions.image.tag | string | `"v0.1.0"` | Tag to use for extensions image | | server.extensions.resources | object | `{}` | Resource limits and requests for the argocd-extensions container | | server.extraArgs | list | `[]` | Additional command line arguments to pass to Argo CD server | | server.extraContainers | list | `[]` | Additional 
containers to be added to the server pod | | server.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the Argo CD server | | server.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the Argo CD server | | server.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the Argo CD server | | server.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry | | server.ingress.annotations | object | `{}` | Additional ingress annotations | | server.ingress.enabled | bool | `false` | Enable an ingress resource for the Argo CD server | | server.ingress.extraPaths | list | `[]` | Additional ingress paths | | server.ingress.hosts | list | `[]` | List of ingress hosts | | server.ingress.https | bool | `false` | Uses `server.service.servicePortHttps` instead of `server.service.servicePortHttp` | | server.ingress.ingressClassName | string | `""` | Defines which ingress controller will implement the resource | | server.ingress.labels | object | `{}` | Additional ingress labels | | server.ingress.pathType | string | `"Prefix"` | Ingress path type.
One of `Exact`, `Prefix` or `ImplementationSpecific` | | server.ingress.paths | list | `["/"]` | List of ingress paths | | server.ingress.tls | list | `[]` | Ingress TLS configuration | | server.ingressGrpc.annotations | object | `{}` | Additional ingress annotations for dedicated [gRPC-ingress] | | server.ingressGrpc.awsALB.backendProtocolVersion | string | `"HTTP2"` | Backend protocol version for the AWS ALB gRPC service | | server.ingressGrpc.awsALB.serviceType | string | `"NodePort"` | Service type for the AWS ALB gRPC service | | server.ingressGrpc.enabled | bool | `false` | Enable an ingress resource for the Argo CD server for dedicated [gRPC-ingress] | | server.ingressGrpc.extraPaths | list | `[]` | Additional ingress paths for dedicated [gRPC-ingress] | | server.ingressGrpc.hosts | list | `[]` | List of ingress hosts for dedicated [gRPC-ingress] | | server.ingressGrpc.https | bool | `false` | Uses `server.service.servicePortHttps` instead of `server.service.servicePortHttp` | | server.ingressGrpc.ingressClassName | string | `""` | Defines which ingress controller will implement the resource [gRPC-ingress] | | server.ingressGrpc.isAWSALB | bool | `false` | Set up gRPC ingress to work with an AWS ALB | | server.ingressGrpc.labels | object | `{}` | Additional ingress labels for dedicated [gRPC-ingress] | | server.ingressGrpc.pathType | string | `"Prefix"` | Ingress path type for dedicated [gRPC-ingress].
One of `Exact`, `Prefix` or `ImplementationSpecific` | | server.ingressGrpc.paths | list | `["/"]` | List of ingress paths for dedicated [gRPC-ingress] | | server.ingressGrpc.tls | list | `[]` | Ingress TLS configuration for dedicated [gRPC-ingress] | | server.initContainers | list | `[]` | Init containers to add to the server pod | | server.lifecycle | object | `{}` | Specify postStart and preStop lifecycle hooks for your argo-cd-server container | | server.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded | | server.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated | | server.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] | | server.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed | | server.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out | | server.logFormat | string | `""` (defaults to global.logging.format) | Argo CD server log format: Either `text` or `json` | | server.logLevel | string | `""` (defaults to global.logging.level) | Argo CD server log level. 
One of: `debug`, `info`, `warn` or `error` | | server.metrics.enabled | bool | `false` | Deploy metrics service | | server.metrics.service.annotations | object | `{}` | Metrics service annotations | | server.metrics.service.labels | object | `{}` | Metrics service labels | | server.metrics.service.portName | string | `"http-metrics"` | Metrics service port name | | server.metrics.service.servicePort | int | `8083` | Metrics service port | | server.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels | | server.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor | | server.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval | | server.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion | | server.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace | | server.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping | | server.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme | | server.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector | | server.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig | | server.name | string | `"server"` | Argo CD server name | | server.nodeSelector | object | `{}` | [Node selector] | | server.pdb.annotations | object | `{}` | Annotations to be added to server pdb | | server.pdb.enabled | bool | `false` | Deploy a Poddisruptionbudget for the server | | server.pdb.labels | object | `{}` | Labels to be added to server pdb | | server.podAnnotations | object | `{}` | Annotations to be added to server pods | | server.podLabels | object | `{}` | Labels to be added to server pods | | server.priorityClassName | string | `""` | Priority class for the Argo CD server | | 
server.rbacConfig | object | `{}` | Argo CD rbac config ([Argo CD RBAC policy]) |
| server.rbacConfigAnnotations | object | `{}` | Annotations to be added to Argo CD rbac ConfigMap |
| server.rbacConfigCreate | bool | `true` | Whether or not to create the configmap. If false, it is expected the configmap will be created by something else. Argo CD will not work if there is no configmap created with the name above. |
| server.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded |
| server.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated |
| server.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] |
| server.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed |
| server.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out |
| server.replicas | int | `1` | The number of server pods to run |
| server.resources | object | `{}` | Resource limits and requests for the Argo CD server |
| server.route.annotations | object | `{}` | OpenShift Route annotations |
| server.route.enabled | bool | `false` | Enable an OpenShift Route for the Argo CD server |
| server.route.hostname | string | `""` | Hostname of OpenShift Route |
| server.route.termination_policy | string | `"None"` | Termination policy of OpenShift Route |
| server.route.termination_type | string | `"passthrough"` | Termination type of OpenShift Route |
| server.service.annotations | object | `{}` | Server service annotations |
| server.service.externalIPs | list | `[]` | Server service external IPs |
| server.service.externalTrafficPolicy | string | `""` | Denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints |
| server.service.labels | object | `{}` | Server service labels |
| server.service.loadBalancerIP | string | `""` | LoadBalancer will get created with the IP specified in this field |
| server.service.loadBalancerSourceRanges | list | `[]` | Source IP ranges to allow access to service from |
| server.service.namedTargetPort | bool | `true` | Use named target port for argocd |
| server.service.nodePortHttp | int | `30080` | Server service http port for NodePort service type (only if `server.service.type` is set to "NodePort") |
| server.service.nodePortHttps | int | `30443` | Server service https port for NodePort service type (only if `server.service.type` is set to "NodePort") |
| server.service.servicePortHttp | int | `80` | Server service http port |
| server.service.servicePortHttpName | string | `"http"` | Server service http port name, can be used to route traffic via istio |
| server.service.servicePortHttps | int | `443` | Server service https port |
| server.service.servicePortHttpsName | string | `"https"` | Server service https port name, can be used to route traffic via istio |
| server.service.sessionAffinity | string | `""` | Used to maintain session affinity. Supports `ClientIP` and `None` |
| server.service.type | string | `"ClusterIP"` | Server service type |
| server.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| server.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account |
| server.serviceAccount.create | bool | `true` | Create server service account |
| server.serviceAccount.name | string | `"argocd-server"` | Server service account name |
| server.staticAssets.enabled | bool | `true` | Disable deprecated flag `--staticassets` |
| server.tolerations | list | `[]` | [Tolerations] for use with node taints |
| server.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to the Argo CD server |
| server.volumeMounts | list | `[]` | Additional volumeMounts to the server main container |
| server.volumes | list | `[]` | Additional volumes to the server pod |

## Dex

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| dex.affinity | object | `{}` | Assign custom [affinity] rules to the deployment |
| dex.containerPortGrpc | int | `5557` | Container port for gRPC access |
| dex.containerPortHttp | int | `5556` | Container port for HTTP access |
| dex.containerPortMetrics | int | `5558` | Container port for metrics access |
| dex.containerSecurityContext | object | `{}` | Dex container-level security context |
| dex.enabled | bool | `true` | Enable dex |
| dex.env | list | `[]` | Environment variables to pass to the Dex server |
| dex.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the Dex server |
| dex.extraArgs | list | `[]` | Additional command line arguments to pass to the Dex server |
| dex.extraContainers | list | `[]` | Additional containers to be added to the dex pod |
| dex.extraVolumeMounts | list | `[]` | Extra volumeMounts to the dex pod |
| dex.extraVolumes | list | `[]` | Extra volumes to the dex pod |
| dex.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Dex imagePullPolicy |
| dex.image.repository | string | `"ghcr.io/dexidp/dex"` | Dex image repository |
| dex.image.tag | string | `"v2.32.0"` | Dex image tag |
| dex.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry |
| dex.initContainers | list | `[]` | Init containers to add to the dex pod |
| dex.initImage.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Argo CD init image imagePullPolicy |
| dex.initImage.repository | string | `""` (defaults to global.image.repository) | Argo CD init image repository |
| dex.initImage.tag | string | `""` (defaults to global.image.tag) | Argo CD init image tag |
| dex.livenessProbe.enabled | bool | `false` | Enable Kubernetes liveness probe for Dex >= 2.28.0 |
| dex.livenessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded |
| dex.livenessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated |
| dex.livenessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] |
| dex.livenessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed |
| dex.livenessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out |
| dex.metrics.enabled | bool | `false` | Deploy metrics service |
| dex.metrics.service.annotations | object | `{}` | Metrics service annotations |
| dex.metrics.service.labels | object | `{}` | Metrics service labels |
| dex.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| dex.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| dex.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| dex.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval |
| dex.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion |
| dex.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace |
| dex.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping |
| dex.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| dex.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| dex.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| dex.name | string | `"dex-server"` | Dex name |
| dex.nodeSelector | object | `{}` | [Node selector] |
| dex.pdb.annotations | object | `{}` | Annotations to be added to Dex server pdb |
| dex.pdb.enabled | bool | `false` | Deploy a PodDisruptionBudget for the Dex server |
| dex.pdb.labels | object | `{}` | Labels to be added to Dex server pdb |
| dex.podAnnotations | object | `{}` | Annotations to be added to the Dex server pods |
| dex.podLabels | object | `{}` | Labels to be added to the Dex server pods |
| dex.priorityClassName | string | `""` | Priority class for dex |
| dex.readinessProbe.enabled | bool | `false` | Enable Kubernetes readiness probe for Dex >= 2.28.0 |
| dex.readinessProbe.failureThreshold | int | `3` | Minimum consecutive failures for the [probe] to be considered failed after having succeeded |
| dex.readinessProbe.initialDelaySeconds | int | `10` | Number of seconds after the container has started before [probe] is initiated |
| dex.readinessProbe.periodSeconds | int | `10` | How often (in seconds) to perform the [probe] |
| dex.readinessProbe.successThreshold | int | `1` | Minimum consecutive successes for the [probe] to be considered successful after having failed |
| dex.readinessProbe.timeoutSeconds | int | `1` | Number of seconds after which the [probe] times out |
| dex.resources | object | `{}` | Resource limits and requests for dex |
| dex.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| dex.serviceAccount.automountServiceAccountToken | bool | `true` | Automount API credentials for the Service Account |
| dex.serviceAccount.create | bool | `true` | Create dex service account |
| dex.serviceAccount.name | string | `"argocd-dex-server"` | Dex service account name |
| dex.servicePortGrpc | int | `5557` | Service port for gRPC access |
| dex.servicePortGrpcName | string | `"grpc"` | Service port name for gRPC access |
| dex.servicePortHttp | int | `5556` | Service port for HTTP access |
| dex.servicePortHttpName | string | `"http"` | Service port name for HTTP access |
| dex.servicePortMetrics | int | `5558` | Service port for metrics access |
| dex.tolerations | list | `[]` | [Tolerations] for use with node taints |
| dex.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to dex |
| dex.volumeMounts | list | `[{"mountPath":"/shared","name":"static-files"}]` | Additional volumeMounts to the dex main container |
| dex.volumes | list | `[{"emptyDir":{},"name":"static-files"}]` | Additional volumes to the dex pod |

## Redis

### Option 1 - Single Redis instance (default option)

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| redis.affinity | object | `{}` | Assign custom [affinity] rules to the deployment |
| redis.containerPort | int | `6379` | Redis container port |
| redis.containerSecurityContext | object | `{}` | Redis container-level security context |
| redis.enabled | bool | `true` | Enable redis |
| redis.env | list | `[]` | Environment variables to pass to the Redis server |
| redis.envFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the Redis server |
| redis.extraArgs | list | `[]` | Additional command line arguments to pass to redis-server |
| redis.extraContainers | list | `[]` | Additional containers to be added to the redis pod |
| redis.image.imagePullPolicy | string | `"IfNotPresent"` | Redis imagePullPolicy |
| redis.image.repository | string | `"public.ecr.aws/docker/library/redis"` | Redis repository |
| redis.image.tag | string | `"7.0.4-alpine"` | Redis tag |
| redis.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry |
| redis.initContainers | list | `[]` | Init containers to add to the redis pod |
| redis.metrics.containerPort | int | `9121` | Port to use for redis-exporter sidecar |
| redis.metrics.enabled | bool | `false` | Deploy metrics service and redis-exporter sidecar |
| redis.metrics.image.imagePullPolicy | string | `"IfNotPresent"` | redis-exporter image PullPolicy |
| redis.metrics.image.repository | string | `"public.ecr.aws/bitnami/redis-exporter"` | redis-exporter image repository |
| redis.metrics.image.tag | string | `"1.26.0-debian-10-r2"` | redis-exporter image tag |
| redis.metrics.resources | object | `{}` | Resource limits and requests for redis-exporter sidecar |
| redis.metrics.service.annotations | object | `{}` | Metrics service annotations |
| redis.metrics.service.clusterIP | string | `"None"` | Metrics service clusterIP. `None` makes a "headless service" (no virtual IP) |
| redis.metrics.service.labels | object | `{}` | Metrics service labels |
| redis.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| redis.metrics.service.servicePort | int | `9121` | Metrics service port |
| redis.metrics.service.type | string | `"ClusterIP"` | Metrics service type |
| redis.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| redis.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| redis.metrics.serviceMonitor.interval | string | `"30s"` | Interval at which metrics should be scraped |
| redis.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion |
| redis.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace |
| redis.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping |
| redis.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| redis.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| redis.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| redis.name | string | `"redis"` | Redis name |
| redis.nodeSelector | object | `{}` | [Node selector] |
| redis.pdb.annotations | object | `{}` | Annotations to be added to Redis server pdb |
| redis.pdb.enabled | bool | `false` | Deploy a PodDisruptionBudget for the Redis server |
| redis.pdb.labels | object | `{}` | Labels to be added to Redis server pdb |
| redis.podAnnotations | object | `{}` | Annotations to be added to the Redis server pods |
| redis.podLabels | object | `{}` | Labels to be added to the Redis server pods |
| redis.priorityClassName | string | `""` | Priority class for redis |
| redis.resources | object | `{}` | Resource limits and requests for redis |
| redis.securityContext | object | `{"runAsNonRoot":true,"runAsUser":999}` | Redis pod-level security context |
| redis.service.annotations | object | `{}` | Redis service annotations |
| redis.service.labels | object | `{}` | Additional redis service labels |
| redis.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| redis.serviceAccount.automountServiceAccountToken | bool | `false` | Automount API credentials for the Service Account |
| redis.serviceAccount.create | bool | `false` | Create a service account for the redis pod |
| redis.serviceAccount.name | string | `""` | Service account name for redis pod |
| redis.servicePort | int | `6379` | Redis service port |
| redis.tolerations | list | `[]` | [Tolerations] for use with node taints |
| redis.topologySpreadConstraints | list | `[]` | Assign custom [TopologySpreadConstraints] rules to redis |
| redis.volumeMounts | list | `[]` | Additional volumeMounts to the redis container |
| redis.volumes | list | `[]` | Additional volumes to the redis pod |

### Option 2 - Redis HA

This option uses the following third-party chart to bootstrap a clustered Redis: https://github.com/DandyDeveloper/charts/tree/master/charts/redis-ha. For all available configuration options, please read the upstream README and/or chart source.
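As a quick orientation (this mirrors the values used in this issue), a minimal values fragment that switches the chart from the bundled single-node Redis to the HA subchart might look like the sketch below. The keys and defaults come from the option table in this README; verify them against your chart version:

```yaml
redis-ha:
  enabled: true      # use the clustered redis-ha subchart instead of the single-node Redis
  persistentVolume:
    enabled: false   # chart default; enable to persist Redis state across pod restarts
  haproxy:
    enabled: true    # chart default; HAProxy fronts the sentinel-managed Redis nodes
```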
The main options are listed here:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| redis-ha.enabled | bool | `false` | Enables the Redis HA subchart and disables the custom Redis single node deployment |
| redis-ha.exporter.enabled | bool | `true` | If `true`, the prometheus exporter sidecar is enabled |
| redis-ha.haproxy.enabled | bool | `true` | Enable HAProxy load balancing/proxy |
| redis-ha.haproxy.metrics.enabled | bool | `true` | Enable prometheus metric scraping for HAProxy |
| redis-ha.image.tag | string | `"7.0.4-alpine"` | Redis tag |
| redis-ha.persistentVolume.enabled | bool | `false` | Configures persistence on Redis nodes |
| redis-ha.redis.config | object | See [values.yaml] | Any valid redis config options in this section will be applied to each server (see `redis-ha` chart) |
| redis-ha.redis.config.save | string | `'""'` | Will save the DB if both the given number of seconds and the given number of write operations against the DB occurred. `""` is disabled |
| redis-ha.redis.masterGroupName | string | `"argocd"` | Redis convention for naming the cluster group: must match `^[\\w-\\.]+$` and can be templated |
| redis-ha.topologySpreadConstraints.enabled | bool | `false` | Enable Redis HA topology spread constraints |
| redis-ha.topologySpreadConstraints.maxSkew | string | `""` (defaults to `1`) | Max skew of pods tolerated |
| redis-ha.topologySpreadConstraints.topologyKey | string | `""` (defaults to `topology.kubernetes.io/zone`) | Topology key for spread |
| redis-ha.topologySpreadConstraints.whenUnsatisfiable | string | `""` (defaults to `ScheduleAnyway`) | Enforcement policy, hard or soft |
| redis-ha.exporter.image | string | `nil` (follows subchart default) | Exporter image |
| redis-ha.exporter.tag | string | `nil` (follows subchart default) | Exporter tag |
| redis-ha.haproxy.image.repository | string | `nil` (follows subchart default) | HAProxy image repository |
| redis-ha.haproxy.image.tag | string | `nil` (follows subchart default) | HAProxy image tag |
| redis-ha.image.repository | string | `nil` (follows subchart default) | Redis image repository |

### Option 3 - External Redis

If you want to use an existing Redis (e.g. a managed service from a cloud provider), you can use these parameters:

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| externalRedis.existingSecret | string | `""` | The name of an existing secret with Redis credentials (must contain key `redis-password`). When it's set, the `externalRedis.password` parameter is ignored |
| externalRedis.host | string | `""` | External Redis server host |
| externalRedis.password | string | `""` | External Redis password |
| externalRedis.port | int | `6379` | External Redis server port |
| externalRedis.secretAnnotations | object | `{}` | External Redis Secret annotations |

## ApplicationSet

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| applicationSet.affinity | object | `{}` | Assign custom [affinity] rules |
| applicationSet.args.debug | bool | `false` | Print debug logs |
| applicationSet.args.dryRun | bool | `false` | Enable dry run mode |
| applicationSet.args.enableLeaderElection | bool | `false` | The default leader election setting |
| applicationSet.args.metricsAddr | string | `":8080"` | The default metric address |
| applicationSet.args.policy | string | `"sync"` | How application is synced between the generator and the cluster |
| applicationSet.args.probeBindAddr | string | `":8081"` | The default health check port |
| applicationSet.enabled | bool | `true` | Enable Application Set controller |
| applicationSet.extraArgs | list | `[]` | List of extra cli args to add |
| applicationSet.extraContainers | list | `[]` | Additional containers to be added to the applicationset controller pod |
| applicationSet.extraEnv | list | `[]` | Environment variables to pass to the controller |
| applicationSet.extraEnvFrom | list | `[]` (See [values.yaml]) | envFrom to pass to the controller |
| applicationSet.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) |
| applicationSet.extraVolumes | list | `[]` | List of extra volumes to add |
| applicationSet.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the application set controller |
| applicationSet.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the application set controller |
| applicationSet.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the application set controller |
| applicationSet.imagePullSecrets | list | `[]` | If defined, uses a Secret to pull an image from a private Docker registry or repository. |
| applicationSet.logFormat | string | `""` (defaults to global.logging.format) | ApplicationSet controller log format. Either `text` or `json` |
| applicationSet.logLevel | string | `""` (defaults to global.logging.level) | ApplicationSet controller log level. One of: `debug`, `info`, `warn`, `error` |
| applicationSet.metrics.enabled | bool | `false` | Deploy metrics service |
| applicationSet.metrics.service.annotations | object | `{}` | Metrics service annotations |
| applicationSet.metrics.service.labels | object | `{}` | Metrics service labels |
| applicationSet.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| applicationSet.metrics.service.servicePort | int | `8085` | Metrics service port |
| applicationSet.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| applicationSet.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| applicationSet.metrics.serviceMonitor.interval | string | `"30s"` | Prometheus ServiceMonitor interval |
| applicationSet.metrics.serviceMonitor.metricRelabelings | list | `[]` | Prometheus [MetricRelabelConfigs] to apply to samples before ingestion |
| applicationSet.metrics.serviceMonitor.namespace | string | `""` | Prometheus ServiceMonitor namespace |
| applicationSet.metrics.serviceMonitor.relabelings | list | `[]` | Prometheus [RelabelConfigs] to apply to samples before scraping |
| applicationSet.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| applicationSet.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| applicationSet.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| applicationSet.name | string | `"applicationset-controller"` | Application Set controller name string |
| applicationSet.nodeSelector | object | `{}` | [Node selector] |
| applicationSet.podAnnotations | object | `{}` | Annotations for the controller pods |
| applicationSet.podLabels | object | `{}` | Labels for the controller pods |
| applicationSet.podSecurityContext | object | `{}` | Pod Security Context |
| applicationSet.priorityClassName | string | `""` | If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. |
| applicationSet.replicaCount | int | `1` | The number of controller pods to run |
| applicationSet.resources | object | `{}` | Resource limits and requests for the controller pods. |
| applicationSet.securityContext | object | `{}` | Security Context |
| applicationSet.service.annotations | object | `{}` | Application set service annotations |
| applicationSet.service.labels | object | `{}` | Application set service labels |
| applicationSet.service.port | int | `7000` | Application set service port |
| applicationSet.service.portName | string | `"webhook"` | Application set service port name |
| applicationSet.serviceAccount.annotations | object | `{}` | Annotations to add to the service account |
| applicationSet.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| applicationSet.serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template |
| applicationSet.tolerations | list | `[]` | [Tolerations] for use with node taints |
| applicationSet.webhook.ingress.annotations | object | `{}` | Additional ingress annotations |
| applicationSet.webhook.ingress.enabled | bool | `false` | Enable an ingress resource for Webhooks |
| applicationSet.webhook.ingress.extraPaths | list | `[]` | Additional ingress paths |
| applicationSet.webhook.ingress.hosts | list | `[]` | List of ingress hosts |
| applicationSet.webhook.ingress.ingressClassName | string | `""` | Defines which ingress controller will implement the resource |
| applicationSet.webhook.ingress.labels | object | `{}` | Additional ingress labels |
| applicationSet.webhook.ingress.pathType | string | `"Prefix"` | Ingress path type. One of `Exact`, `Prefix` or `ImplementationSpecific` |
| applicationSet.webhook.ingress.paths | list | `["/api/webhook"]` | List of ingress paths |
| applicationSet.webhook.ingress.tls | list | `[]` | Ingress TLS configuration |

## Notifications

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| notifications.affinity | object | `{}` | Assign custom [affinity] rules |
| notifications.argocdUrl | string | `nil` | Argo CD dashboard url; used in place of {{.context.argocdUrl}} in templates |
| notifications.bots.slack.affinity | object | `{}` | Assign custom [affinity] rules |
| notifications.bots.slack.containerSecurityContext | object | `{}` | Container Security Context |
| notifications.bots.slack.enabled | bool | `false` | Enable slack bot |
| notifications.bots.slack.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the Slack bot |
| notifications.bots.slack.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the Slack bot |
| notifications.bots.slack.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the Slack bot |
| notifications.bots.slack.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry |
| notifications.bots.slack.nodeSelector | object | `{}` | [Node selector] |
| notifications.bots.slack.resources | object | `{}` | Resource limits and requests for the Slack bot |
| notifications.bots.slack.securityContext | object | `{"runAsNonRoot":true}` | Pod Security Context |
| notifications.bots.slack.service.annotations | object | `{}` | Service annotations for Slack bot |
| notifications.bots.slack.service.port | int | `80` | Service port for Slack bot |
| notifications.bots.slack.service.type | string | `"LoadBalancer"` | Service type for Slack bot |
| notifications.bots.slack.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| notifications.bots.slack.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| notifications.bots.slack.serviceAccount.name | string | `"argocd-notifications-bot"` | The name of the service account to use. |
| notifications.bots.slack.tolerations | list | `[]` | [Tolerations] for use with node taints |
| notifications.bots.slack.updateStrategy | object | `{"type":"Recreate"}` | The deployment strategy to use to replace existing pods with new ones |
| notifications.cm.create | bool | `true` | Whether helm chart creates controller config map |
| notifications.containerSecurityContext | object | `{}` | Container Security Context |
| notifications.context | object | `{}` | Define user-defined context |
| notifications.enabled | bool | `true` | Enable Notifications controller |
| notifications.extraArgs | list | `[]` | Extra arguments to provide to the controller |
| notifications.extraEnv | list | `[]` | Additional container environment variables |
| notifications.extraVolumeMounts | list | `[]` | List of extra mounts to add (normally used with extraVolumes) |
| notifications.extraVolumes | list | `[]` | List of extra volumes to add |
| notifications.image.imagePullPolicy | string | `""` (defaults to global.image.imagePullPolicy) | Image pull policy for the notifications controller |
| notifications.image.repository | string | `""` (defaults to global.image.repository) | Repository to use for the notifications controller |
| notifications.image.tag | string | `""` (defaults to global.image.tag) | Tag to use for the notifications controller |
| notifications.imagePullSecrets | list | `[]` | Secrets with credentials to pull images from a private registry |
| notifications.logFormat | string | `""` (defaults to global.logging.format) | Application controller log format. Either `text` or `json` |
| notifications.logLevel | string | `""` (defaults to global.logging.level) | Application controller log level. One of: `debug`, `info`, `warn`, `error` |
| notifications.metrics.enabled | bool | `false` | Enables prometheus metrics server |
| notifications.metrics.port | int | `9001` | Metrics port |
| notifications.metrics.service.annotations | object | `{}` | Metrics service annotations |
| notifications.metrics.service.labels | object | `{}` | Metrics service labels |
| notifications.metrics.service.portName | string | `"http-metrics"` | Metrics service port name |
| notifications.metrics.serviceMonitor.additionalLabels | object | `{}` | Prometheus ServiceMonitor labels |
| notifications.metrics.serviceMonitor.enabled | bool | `false` | Enable a prometheus ServiceMonitor |
| notifications.metrics.serviceMonitor.scheme | string | `""` | Prometheus ServiceMonitor scheme |
| notifications.metrics.serviceMonitor.selector | object | `{}` | Prometheus ServiceMonitor selector |
| notifications.metrics.serviceMonitor.tlsConfig | object | `{}` | Prometheus ServiceMonitor tlsConfig |
| notifications.name | string | `"notifications-controller"` | Notifications controller name string |
| notifications.nodeSelector | object | `{}` | [Node selector] |
| notifications.notifiers | object | See [values.yaml] | Configures notification services such as slack, email or custom webhook |
| notifications.podAnnotations | object | `{}` | Annotations to be applied to the controller Pods |
| notifications.podLabels | object | `{}` | Labels to be applied to the controller Pods |
| notifications.priorityClassName | string | `""` | Priority class for the controller pods |
| notifications.resources | object | `{}` | Resource limits and requests for the controller |
| notifications.secret.annotations | object | `{}` | key:value pairs of annotations to be added to the secret |
| notifications.secret.create | bool | `true` | Whether helm chart creates controller secret |
| notifications.secret.items | object | `{}` | Generic key:value pairs to be inserted into the secret |
| notifications.securityContext | object | `{"runAsNonRoot":true}` | Pod Security Context |
| notifications.serviceAccount.annotations | object | `{}` | Annotations applied to created service account |
| notifications.serviceAccount.create | bool | `true` | Specifies whether a service account should be created |
| notifications.serviceAccount.name | string | `"argocd-notifications-controller"` | The name of the service account to use. |
| notifications.subscriptions | list | `[]` | Contains centrally managed global application subscriptions |
| notifications.templates | object | `{}` | The notification template is used to generate the notification content |
| notifications.tolerations | list | `[]` | [Tolerations] for use with node taints |
| notifications.triggers | object | `{}` | The trigger defines the condition when the notification should be sent |
| notifications.updateStrategy | object | `{"type":"Recreate"}` | The deployment strategy to use to replace existing pods with new ones |

### Using AWS ALB Ingress Controller With GRPC

If you are using an AWS ALB Ingress controller, you will need to set `server.ingressGrpc.isAWSALB` to `true`. This will create a second service with the annotation `alb.ingress.kubernetes.io/backend-protocol-version: HTTP2` and modify the server ingress to add a condition annotation to route GRPC traffic to the new service.
Example:

```yaml
server:
  ingress:
    enabled: true
    annotations:
      alb.ingress.kubernetes.io/backend-protocol: HTTPS
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/scheme: internal
      alb.ingress.kubernetes.io/target-type: ip
  ingressGrpc:
    enabled: true
    isAWSALB: true
    awsALB:
      serviceType: ClusterIP
```

----------------------------------------------

Autogenerated from chart metadata using [helm-docs](https://github.com/norwoodj/helm-docs)

[Argo CD RBAC policy]: https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/
[affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
[BackendConfigSpec]: https://cloud.google.com/kubernetes-engine/docs/concepts/backendconfig#backendconfigspec_v1beta1_cloudgooglecom
[CSS styles]: https://argo-cd.readthedocs.io/en/stable/operator-manual/custom-styles/
[external cluster credentials]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters
[FrontendConfigSpec]: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#configuring_ingress_features_through_frontendconfig_parameters
[Declarative setup]: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup
[gRPC-ingress]: https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/
[HPA]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[MetricRelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs
[Node selector]: https://kubernetes.io/docs/user-guide/node-selection/
[probe]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
[RelabelConfigs]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
[Tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[TopologySpreadConstraints]: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
[values.yaml]: values.yaml
[v2.2 to 2.3 upgrade instructions]: https://github.com/argoproj/argo-cd/blob/v2.3.0/docs/operator-manual/upgrading/2.2-2.3.md

```yaml
status: deployed
name: argocd
namespace: argocd
resources:
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-application-controller
    namespace: argocd
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-applicationset-controller
    namespace: argocd
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-notifications-controller
    namespace: argocd
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-repo-server
    namespace: argocd
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-server
    namespace: argocd
  - apiVersion: v1
    kind: ServiceAccount
    name: argocd-dex-server
    namespace: argocd
  - apiVersion: v1
    kind: Secret
    name: argocd-notifications-secret
    namespace: argocd
  - apiVersion: v1
    kind: Secret
    name: argocd-secret
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-cm
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-gpg-keys-cm
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-notifications-cm
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-rbac-cm
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-ssh-known-hosts-cm
    namespace: argocd
  - apiVersion: v1
    kind: ConfigMap
    name: argocd-tls-certs-cm
    namespace: argocd
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: applications.argoproj.io
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: applicationsets.argoproj.io
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: argocdextensions.argoproj.io
  - apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    name: appprojects.argoproj.io
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    name: argocd-application-controller
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    name: argocd-server
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    name: argocd-application-controller
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    name: argocd-server
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-application-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-applicationset-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-notifications-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-repo-server
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-server
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    name: argocd-dex-server
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-application-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-applicationset-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-notifications-controller
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-repo-server
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-server
    namespace: argocd
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    name: argocd-dex-server
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-application-controller
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-applicationset-controller
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-repo-server
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-server
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-dex-server
    namespace: argocd
  - apiVersion: v1
    kind: Service
    name: argocd-redis
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-applicationset-controller
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-notifications-controller
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-repo-server
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-server
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-dex-server
    namespace: argocd
  - apiVersion: apps/v1
    kind: Deployment
    name: argocd-redis
    namespace: argocd
  - apiVersion: apps/v1
    kind: StatefulSet
    name: argocd-application-controller
    namespace: argocd
values:
  global:
    cattle:
      clusterId: c-m-9sw4zt6m
      clusterName: miami
      rkePathPrefix: ""
      rkeWindowsPathPrefix: ""
      systemDefaultRegistry: ""
      systemProjectId: p-7z4lh
      url: https://rancher.labza
    systemDefaultRegistry: ""
version: 3
status:
  observedGeneration: 6
  summary:
    state: deployed
```
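A note for anyone debugging from this output: the chart's `configs.params` values are rendered into the `argocd-cmd-params-cm` ConfigMap, and a Kubernetes ConfigMap's `data` values must be strings. A correctly rendered ConfigMap for the values in this report would look roughly like the sketch below (check your own with `kubectl -n argocd get cm argocd-cmd-params-cm -o yaml`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # values must be quoted strings; bare booleans make the manifest invalid
  server.enable.gzip: "true"
  server.insecure: "true"
```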
AydinChavez commented 1 year ago

A workaround is described here, guys: https://github.com/argoproj/argo-helm/issues/1479#issuecomment-1254089715
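For quick reference, the workaround discussed in that argo-helm issue amounts to quoting the `configs.params` values as strings instead of passing bare booleans. Applied to the values.yaml from this report, it would look like the sketch below (verify the exact fix against the linked comment):

```yaml
argo-cd:
  configs:
    params:
      # quote booleans as strings so chart 5.5.0 renders them correctly
      "server.enable.gzip": "true"
      "server.insecure": "true"
```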

Syntax3rror404 commented 1 year ago

I think I'll basically wait for 5.5.1 :D and go with 4.4.8 instead of dirty hacks
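For completeness, rolling back only requires changing the dependency version in the Chart.yaml from the report; 4.4.8 is the release named in the comment above, but any known-good pre-5.5.0 chart version would do (sketch):

```yaml
apiVersion: v2
name: argo-cd
version: 1.0.0
dependencies:
  - name: argo-cd
    version: 4.4.8  # pinned back from 5.5.0; pick a known-good release
    repository: https://argoproj.github.io/argo-helm
```

Then refresh the lock file and redeploy with `helm dependency update` followed by `helm upgrade argocd . -n argocd`.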