Open zonnie opened 3 years ago
Might be because of this: https://helm.sh/blog/new-location-stable-incubator-charts/ - the address of the stable repo has changed. @alexmt maybe it would be good to have a 1.7.12 with Helm 2.17 included, which has the new address of stable in the binary; otherwise an upgrade to 1.8 is mandatory.
I believe another fix for this may be to update your stable repo in the argocd-cm ConfigMap as follows:
data:
  helm.repositories: |
    - url: https://charts.helm.sh/stable
      name: stable
After this, refresh your Argo applications.
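For anyone who wants the full manifest rather than just the data snippet, the override as a standalone argocd-cm ConfigMap would look roughly like this (a sketch; metadata assumes the default install in the argocd namespace):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  helm.repositories: |
    # Point the built-in "stable" alias at the new chart location
    - url: https://charts.helm.sh/stable
      name: stable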
@zonnie sorry, I now saw you are already on a 1.8.1 server, so it is not that
@JasP19 does it work for Helm 2? It has the stable repo embedded; the article above says you need to remove stable first.
@lcostea it worked for us with Helm 2 and ArgoCD v1.6.1. I believe this is similar to overriding the stable repo when using ArgoCD without internet access as in #1412.
Is there an example git location which can reproduce this easily? It's hard to fix this without being able to reproduce it.
I'm not sure when this reproduces. If you can explain what causes the symptom I might be able to try and reproduce. The error is completely internal and means nothing to me - maybe more details can help me find a way.
I believe another fix for this may be to update your stable repo in the argocd-cm ConfigMap as follows:
data:
  helm.repositories: |
    - url: https://charts.helm.sh/stable
      name: stable
After this, refresh your Argo applications.
@zonnie did you get a chance to test the solution I proposed above?
No, I didn't - can you please explain briefly how this is related?
I'll check this solution anyway and report back
Might be because of this: https://helm.sh/blog/new-location-stable-incubator-charts/ - the address of the stable repo has changed. @alexmt maybe it would be good to have a 1.7.12 with Helm 2.17 included, which has the new address of stable in the binary; otherwise an upgrade to 1.8 is mandatory.
@zonnie this is related to the fact that Helm recently changed the location of the stable repo, as mentioned by @lcostea above. I believe the Helm 2 binary is now failing to resolve/access the old stable repo. By updating the ConfigMap, you override the stable repo to point at the new URL. We were having the same issue in our cluster. We made this change and noted that it solved the issue when we refreshed our apps.
Thanks @JasP19, I will give it a shot when I can.
Is 1.7.11 also affected by this?
@gzur unfortunately I am not certain. If you are using Helm 2 and have suddenly started seeing apps in an Unknown state with a helm dependency build failure, then it may be worth attempting the solution I proposed. It worked for us on ArgoCD v1.6.1 and it's only a minor change.
I'm running v1.7.6 as is - I'm going to give updating to 1.7.11 the old college try and report back.
I was able to upgrade to 1.7.11 - but it had no effect.
I ended up applying the following configuration through a custom values.yaml file:
argocd:
  server:
    config:
      repositories: |
        [...]
        - type: helm
          name: stable
          url: https://charts.helm.sh/stable
        [...]
After this, I had to helm template [...] | kubectl apply [...] for it to kick in, since ArgoCD was non-functional.
Might be because of this: https://helm.sh/blog/new-location-stable-incubator-charts/ - the address for stable repo has changed.
I believe the error described in this issue is unrelated to the Helm stable repository migration. I experienced that first hand, and the error message produced looks like this:
`helm2 dependency build` failed exit status 1:
Error: open /tmp/helm769321715/repository/cache/stable-index.yaml: no such file or directory
Which I successfully addressed with a configuration change described above: https://github.com/argoproj/argo-cd/issues/5107#issuecomment-751783856
The error described in the issue, rename charts tmpcharts: file exists, is some Helm-related issue (that I actually remember seeing before, but I can't remember what caused it).
It doesn't matter which version - v1.8.1 and v1.8.2 both produce the same result, even with the option provided above (adding stable as a Helm repository). Several days have passed; sometimes I see 4 apps in the Unknown state, sometimes ~36.
Same issue here. Seems to be a Helm issue: https://github.com/helm/helm/issues/5567, with corresponding PR https://github.com/helm/helm/pull/8846.
The issue is intermittent and will fix itself after a hard refresh. How does ArgoCD handle refresh errors? Having to manually do a hard refresh on our deploys is not a good experience. I think refresh error handling could use some improvement regardless of the Helm fixes.
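As an aside, for anyone scripting around this workaround, the argocd CLI can trigger the same hard refresh as the UI button - a one-liner sketch, app name hypothetical:

# Force a hard refresh (bypasses the manifest cache) for one application
argocd app get my-app --hard-refresh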
We have been seeing this intermittently, with an App going into the Unknown state with the aforementioned error message.
I'm wondering whether this has anything to do with the application-controller being too hard on the reposerver?
The application-controller is invoking the GenerateManifest method over grpc, with the reposerver acting on that request by interfacing directly with the file system - doing helm dep update and such.
The ArgoCD Application that is causing us the most trouble has 20 Helm charts as dependencies, and it can take a while for them all to download. Is it possible that the application-controller is getting fed up with waiting and retrying, causing the reposerver to start running helm dep update with another dep update operation already running?
This could cause these ./tmpcharts shenanigans, because apparently Helm stores the charts in ./tmpcharts before moving them all over to ./charts once it has finished fetching all the dependencies.
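For what it's worth, that suspected race is easy to picture outside ArgoCD entirely - a minimal sketch (chart path hypothetical) of two dependency builds colliding on the same working copy:

# Two concurrent dependency builds against the same chart directory.
# Helm stages downloads in ./tmpcharts before renaming it over ./charts,
# so whichever run arrives second can fail with
# "rename charts tmpcharts: file exists".
cd ./my-chart
helm dep update &
helm dep update &
wait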
Would setting ARGOCD_SYNC_WAVE_DELAY to something higher than 2 seconds (the default) affect this at all?
Would setting ARGOCD_SYNC_WAVE_DELAY to something higher than 2 seconds (the default) affect this at all?
Nope.
I was able to upgrade to 1.7.11 - but it had no effect.
I ended up applying the following configuration through a custom values.yaml file:

argocd:
  server:
    config:
      repositories: |
        [...]
        - type: helm
          name: stable
          url: https://charts.helm.sh/stable
        [...]

After this, I had to helm template [...] | kubectl apply [...] for it to kick in, since ArgoCD was non-functional.
I am experiencing the same issue on AWS EKS. I have tried this suggestion, but it does not fix the issue.
We too are experiencing this quite often. Our workaround is to just run redis-cli flushall in the Redis pod.
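For reference, something like the following should do it without opening a shell in the pod (the Redis resource name assumes a default non-HA install; adjust for HA setups):

# Flush ArgoCD's Redis cache, forcing manifests to be regenerated
kubectl -n argocd exec deploy/argocd-redis -- redis-cli flushall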
We finally resolved this by increasing --repo-server-timeout-seconds on the application-controller and ARGOCD_EXEC_TIMEOUT on the repo-server:

argocd:
  controller:
    extraArgs:
      - --repo-server-timeout-seconds
      - "500"
  repoServer:
    env:
      - name: "ARGOCD_EXEC_TIMEOUT"
        value: "5m"

Doesn't make me happy, but until helm dep update is thread-safe (https://github.com/helm/helm/pull/8846#issuecomment-768479847), it'll have to do.
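If you're on the plain upstream manifests rather than the Helm chart, the same change can be made directly with kubectl - a sketch, assuming the default argocd namespace and stock resource names:

# Append the flag to the application-controller's command...
kubectl -n argocd patch statefulset argocd-application-controller --type json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "--repo-server-timeout-seconds"},
  {"op": "add", "path": "/spec/template/spec/containers/0/command/-", "value": "500"}]'
# ...and set the repo-server env var; both workloads roll their pods.
kubectl -n argocd set env deployment/argocd-repo-server ARGOCD_EXEC_TIMEOUT=5m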
Just popping in to say that I was seeing this, plus the context deadline exceeded error, on a Raspberry Pi cluster. The Argo instance is here if you're curious as to the config: http://argocd.apps.blah.cloud
What was visible in Grafana was that the repo-server pod was consuming all of the (admittedly small) host's CPU, so a combination of the following cleared all of that up for me:
controller:
  extraArgs:
    - --repo-server-timeout-seconds
    - "500"
    - --status-processors
    - "10"
    - --operation-processors
    - "5"
repoServer:
  replicas: 3
  env:
    - name: "ARGOCD_EXEC_TIMEOUT"
      value: "5m"
The number of replicas on repoServer helped to distribute the load across the K8s cluster nodes, so it wasn't hammering a single node, and the timeout extensions also helped in the case of long running tasks (due to slow CPUs).
@zonnie Can you try the solution suggested by @gzur https://github.com/argoproj/argo-cd/issues/5107#issuecomment-776300229 to see if it resolves the issue for you as well?
I didn't run into this lately so I can't confirm this 😕
I experienced this under a different situation:
I was changing the target revision of the app in the UI, from a revision where the chart did not exist to one where it did. When the error occurred, the UI reverted the change to the target revision.
Rather than using the UI, I edited the target revision of the app directly in the manifest. This time the change persisted, and I was able to see a different error returned by helm template when ArgoCD tried to render the chart (in my case it was caused by a Helm 2 parent chart trying to use a Helm 3 chart as a dependency).
I faced this issue too. In my case, I was trying to deploy a chart from a private Helm registry.
To determine the ideal resources (limits and requests), I had set the minimum in my argocd-ha installation (set up using the community-maintained Helm chart) and planned to keep increasing them gradually until I figured out the right configuration where everything just works.
I observed from the output of kubectl top pods that the repo server had almost reached the limits I had set, and hence was taking a lot of time to finish helm dep update.
After ~90s, helm dependency build timed out in the repo server and I could see the following log:
time="2021-02-26T10:54:14Z" level=error msg="`helm dependency build` failed timeout after 1m30s" execID=SVEuN
The Argo server log had this error:
2021/02/26 10:52:00 proto: tag has too few fields: "-"
time="2021-02-26T10:52:00Z" level=info msg="received unary call /application.ApplicationService/Create" grpc.method=Create grpc.request.claims="{\"iat\":1614336702,\"iss\":\"argocd\",\"nbf\":1614336702,\"sub\":\"admin\"}" grpc.request.content="%!v(PANIC=String method: reflect.Value.Interface: cannot return value obtained from unexported field or method)" grpc.service=application.ApplicationService grpc.start_time="2021-02-26T10:52:00Z" span.kind=server system=grpc
time="2021-02-26T10:53:00Z" level=info msg="finished unary call with code InvalidArgument" error="rpc error: code = InvalidArgument desc = application spec is invalid: InvalidSpecError: Unable to generate manifests in <my-app>: rpc error: code = Canceled desc = context canceled" grpc.code=InvalidArgument grpc.method=Create grpc.service=application.ApplicationService grpc.start_time="2021-02-26T10:52:00Z" grpc.time_ms=59999.926 span.kind=server system=grpc
I tried increasing the resources set on the ArgoCD repo server, and then the same Helm chart got rendered correctly.
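For reference, with the community chart that bump is just a values override along these lines (numbers are illustrative, not a recommendation):

repoServer:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: "1"
      memory: 1Gi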
I have the exact same issue - with argocd 1.8.4.
It happened after I had tried a few options on metrics-server, which crashed all the time.
"rpc error: code = Unknown desc = Manifest generation error (cached): helm dependency build
failed exit status 1: Error: unable to move current charts to tmp dir: link error: cannot rename charts to tmpcharts: rename charts tmpcharts: file exists"
I am pointing to the stable charts repo as metrics-server has not been migrated to kubernetes/metrics-server yet.
I get the same error from my velero chart, and that is NOT using the deprecated Helm stable chart repo, so it does not seem to be related to the repo source.
This is on an AWS cluster, and currently no resource limits are set. I am trying to set the repo-server replicas to 2 instead of 1.
It seems that was enough to get the repo-server pods to run on different nodes.
It happened again - and now with 2 repo servers, so scaling up was not enough :( I tried killing both repo-server pods so they got recreated, but refresh is NOT making it retry - the "rename charts tmpcharts: file exists" error is not being updated (still X minutes old and getting older). After also killing the 'server' pod, I could suddenly choose "hard refresh" (is that just hard to hit exactly, or?) and the issue was resolved (for now).
Same issue in v2.0.0. After I deleted a Helm chart and added it again, I got the following error message:
rpc error: code = Unknown desc = Manifest generation error (cached): `helm repo add --username ****** --password ****** airwallex-charts https://xxx` failed exit status 1: Error: repository name (xxx-charts) already exists, please specify a different name
Just hit this in v2.0.1 also, when doing a recovery test - syncing many (10+) applications (all Helm applications) reproduces it.
I too keep hitting this when syncing a large number of applications (29, though a few of those are app-of-apps) as part of cluster bootstrap.
However, the main problem we are seeing is that once an application enters the Error state with (for example):
ComparisonError: rpc error: code = Unknown desc = `helm repo add charts.helm.sh https://charts.helm.sh/stable` failed signal: segmentation fault (core dumped): qemu: uncaught target signal 11 (Segmentation fault) - core dumped
then it will stay with "Current Sync Status" as Unknown perpetually, and never retry.
It would appear there is some digging to be done in the logs to figure out why these apps never retry.
Seeing as this issue is temporarily "resolved" by killing the repo-server pods so they get recreated, it's clearly a caching problem. My guess is that the repo server runs Helm in parallel on the SAME pod - which Helm does not actually support - and hence once you hit 2 simultaneous Helm runs on the same pod, Helm breaks and never recovers from that (due to caching). A simple "lock" around Helm, ensuring it is not run simultaneously, should confirm this as the issue (and resolve it :)
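To make the "lock" idea concrete, the serialization could be as blunt as a file lock around each Helm invocation - a sketch using flock(1), with a hypothetical lock path and chart directory:

# The second invocation blocks until the first releases the lock,
# instead of racing on ./tmpcharts.
flock /tmp/argocd-helm.lock helm dependency build ./my-chart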
It seems there is some mitigation for the tmpcharts cleanup in Helm 3.7.0: https://github.com/helm/helm/pull/9889
Related to this comment: https://github.com/argoproj/argo-cd/issues/5107#issuecomment-776300229
It is not as good a solution as having tmp dirs suffixed with a random string, but it will help. Probably we should release a version with Helm 3.7.0.
There is already a PR for that:
Related issue:
We finally resolved this by increasing --repo-server-timeout-seconds on the application-controller and ARGOCD_EXEC_TIMEOUT on the repo-server:

argocd:
  controller:
    extraArgs:
      - --repo-server-timeout-seconds
      - "500"
  repoServer:
    env:
      - name: "ARGOCD_EXEC_TIMEOUT"
        value: "5m"

Doesn't make me happy, but until helm dep update is thread-safe (helm/helm#8846 (comment)), it'll have to do.
@gzur I'm not sure I understand where you pass these values - can you please elaborate?
I don't have access to these manifests anymore, but IIRC I added the command-line arg and env var through the Helm values.yaml.
We were using the ArgoCD chart as a subchart (hence the argocd: root key in my earlier example).
The actual configuration points are:
controller:
  extraArgs:
    - --repo-server-timeout-seconds
    - "500"
repoServer:
  env:
    - name: "ARGOCD_EXEC_TIMEOUT"
      value: "5m"
That worked
This is my repo-server:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "5"
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"repo-server","app.kubernetes.io/name":"argocd-repo-server","app.kubernetes.io/part-of":"argocd"},"name":"argocd-repo-server","namespace":"argocd"},"spec":{"replicas":2,"selector":{"matchLabels":{"app.kubernetes.io/name":"argocd-repo-server"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"argocd-repo-server"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/name":"argocd-repo-server"}},"topologyKey":"failure-domain.beta.kubernetes.io/zone"},"weight":100}],"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"app.kubernetes.io/name":"argocd-repo-server"}},"topologyKey":"kubernetes.io/hostname"}]}},"automountServiceAccountToken":false,"containers":[{"command":["entrypoint.sh","argocd-repo-server","--redis","argocd-redis-ha-haproxy:6379"],"env":[{"name":"ARGOCD_RECONCILIATION_TIMEOUT","valueFrom":{"configMapKeyRef":{"key":"timeout.reconciliation","name":"argocd-cm","optional":true}}},{"name":"ARGOCD_REPO_SERVER_LOGFORMAT","valueFrom":{"configMapKeyRef":{"key":"reposerver.log.format","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_REPO_SERVER_LOGLEVEL","valueFrom":{"configMapKeyRef":{"key":"reposerver.log.level","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_REPO_SERVER_PARALLELISM_LIMIT","valueFrom":{"configMapKeyRef":{"key":"reposerver.parallelism.limit","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_REPO_SERVER_DISABLE_TLS","valueFrom":{"configMapKeyRef":{"key":"reposerver.disable.tls","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_TLS_MIN_VERSION","valueFrom":{"configMapKeyRef":{"key":"reposerver.tls.minversion","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_TLS_MAX_VERSION","valueFrom":{"configMapKeyRef":{"key":"reposerver.tls.maxversion","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_TLS_CIPHERS","valueFrom":{"configMapKeyRef":{"key":"reposerver.tls.ciphers","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_REPO_CACHE_EXPIRATION","valueFrom":{"configMapKeyRef":{"key":"reposerver.repo.cache.expiration","name":"argocd-cmd-params-cm","optional":true}}},{"name":"REDIS_SERVER","valueFrom":{"configMapKeyRef":{"key":"redis.server","name":"argocd-cmd-params-cm","optional":true}}},{"name":"REDISDB","valueFrom":{"configMapKeyRef":{"key":"redis.db","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_DEFAULT_CACHE_EXPIRATION","valueFrom":{"configMapKeyRef":{"key":"reposerver.default.cache.expiration","name":"argocd-cmd-params-cm","optional":true}}},{"name":"HELM_CACHE_HOME","value":"/helm-working-dir"},{"name":"HELM_CONFIG_HOME","value":"/helm-working-dir"},{"name":"HELM_DATA_HOME","value":"/helm-working-dir"}],"image":"quay.io/argoproj/argocd:v2.1.6","imagePullPolicy":"Always","livenessProbe":{"failureThreshold":3,"httpGet":{"path":"/healthz?full=true","port":8084},"initialDelaySeconds":30,"periodSeconds":5},"name":"argocd-repo-server","ports":[{"containerPort":8081},{"containerPort":8084}],"readinessProbe":{"httpGet":{"path":"/healthz","port":8084},"initialDelaySeconds":5,"periodSeconds":10},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true},"volumeMounts":[{"mountPath":"/app/config/ssh","name":"ssh-known-hosts"},{"mou
ntPath":"/app/config/tls","name":"tls-certs"},{"mountPath":"/app/config/gpg/source","name":"gpg-keys"},{"mountPath":"/app/config/gpg/keys","name":"gpg-keyring"},{"mountPath":"/app/config/reposerver/tls","name":"argocd-repo-server-tls"},{"mountPath":"/tmp","name":"tmp"},{"mountPath":"/helm-working-dir","name":"helm-working-dir"}]}],"volumes":[{"configMap":{"name":"argocd-ssh-known-hosts-cm"},"name":"ssh-known-hosts"},{"configMap":{"name":"argocd-tls-certs-cm"},"name":"tls-certs"},{"configMap":{"name":"argocd-gpg-keys-cm"},"name":"gpg-keys"},{"emptyDir":{},"name":"gpg-keyring"},{"emptyDir":{},"name":"tmp"},{"emptyDir":{},"name":"helm-working-dir"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}}]}}}}
creationTimestamp: "2021-07-01T13:58:33Z"
generation: 5
labels:
app.kubernetes.io/component: repo-server
app.kubernetes.io/name: argocd-repo-server
app.kubernetes.io/part-of: argocd
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations: {}
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/name: {}
f:app.kubernetes.io/part-of: {}
f:spec:
f:progressDeadlineSeconds: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
f:rollingUpdate:
.: {}
f:maxSurge: {}
f:maxUnavailable: {}
f:type: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/name: {}
f:spec:
f:affinity:
.: {}
f:podAntiAffinity:
.: {}
f:preferredDuringSchedulingIgnoredDuringExecution: {}
f:requiredDuringSchedulingIgnoredDuringExecution: {}
f:automountServiceAccountToken: {}
f:containers:
k:{"name":"argocd-repo-server"}:
.: {}
f:command: {}
f:env:
.: {}
k:{"name":"ARGOCD_DEFAULT_CACHE_EXPIRATION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_RECONCILIATION_TIMEOUT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_REPO_CACHE_EXPIRATION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_REPO_SERVER_DISABLE_TLS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_REPO_SERVER_LOGFORMAT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_REPO_SERVER_LOGLEVEL"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_REPO_SERVER_PARALLELISM_LIMIT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_TLS_CIPHERS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_TLS_MAX_VERSION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_TLS_MIN_VERSION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"REDIS_SERVER"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"REDISDB"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
f:imagePullPolicy: {}
f:livenessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":8081,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
k:{"containerPort":8084,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
f:readinessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:resources: {}
f:securityContext:
.: {}
f:allowPrivilegeEscalation: {}
f:capabilities:
.: {}
f:drop: {}
f:readOnlyRootFilesystem: {}
f:runAsNonRoot: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/app/config/gpg/keys"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/app/config/gpg/source"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/app/config/reposerver/tls"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/app/config/ssh"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/app/config/tls"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/tmp"}:
.: {}
f:mountPath: {}
f:name: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
f:volumes:
.: {}
k:{"name":"argocd-repo-server-tls"}:
.: {}
f:name: {}
f:secret:
.: {}
f:defaultMode: {}
f:items: {}
f:optional: {}
f:secretName: {}
k:{"name":"gpg-keyring"}:
.: {}
f:emptyDir: {}
f:name: {}
k:{"name":"gpg-keys"}:
.: {}
f:configMap:
.: {}
f:defaultMode: {}
f:name: {}
f:name: {}
k:{"name":"ssh-known-hosts"}:
.: {}
f:configMap:
.: {}
f:defaultMode: {}
f:name: {}
f:name: {}
k:{"name":"tls-certs"}:
.: {}
f:configMap:
.: {}
f:defaultMode: {}
f:name: {}
f:name: {}
k:{"name":"tmp"}:
.: {}
f:emptyDir: {}
f:name: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-09-15T14:32:31Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:template:
f:spec:
f:containers:
k:{"name":"argocd-repo-server"}:
f:env:
k:{"name":"HELM_CACHE_HOME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"HELM_CONFIG_HOME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"HELM_DATA_HOME"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:volumeMounts:
k:{"mountPath":"/helm-working-dir"}:
.: {}
f:mountPath: {}
f:name: {}
f:volumes:
k:{"name":"helm-working-dir"}:
.: {}
f:emptyDir: {}
f:name: {}
manager: kubectl
operation: Update
time: "2021-11-05T14:36:06Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:deployment.kubernetes.io/revision: {}
f:status:
f:availableReplicas: {}
f:conditions:
.: {}
k:{"type":"Available"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
k:{"type":"Progressing"}:
.: {}
f:lastTransitionTime: {}
f:lastUpdateTime: {}
f:message: {}
f:reason: {}
f:status: {}
f:type: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
time: "2021-11-10T09:06:50Z"
name: argocd-repo-server
namespace: argocd
resourceVersion: "189490582"
uid: 50004fba-ced4-4081-8e2d-11f0de6c0813
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: argocd-repo-server
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
topologyKey: failure-domain.beta.kubernetes.io/zone
weight: 100
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/name: argocd-repo-server
topologyKey: kubernetes.io/hostname
automountServiceAccountToken: false
containers:
- command:
- entrypoint.sh
- argocd-repo-server
- --redis
- argocd-redis-ha-haproxy:6379
env:
- name: ARGOCD_EXEC_TIMEOUT
value: 5m
- name: ARGOCD_RECONCILIATION_TIMEOUT
valueFrom:
configMapKeyRef:
key: timeout.reconciliation
name: argocd-cm
optional: true
- name: ARGOCD_REPO_SERVER_LOGFORMAT
valueFrom:
configMapKeyRef:
key: reposerver.log.format
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_REPO_SERVER_LOGLEVEL
valueFrom:
configMapKeyRef:
key: reposerver.log.level
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_REPO_SERVER_PARALLELISM_LIMIT
valueFrom:
configMapKeyRef:
key: reposerver.parallelism.limit
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_REPO_SERVER_DISABLE_TLS
valueFrom:
configMapKeyRef:
key: reposerver.disable.tls
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_MIN_VERSION
valueFrom:
configMapKeyRef:
key: reposerver.tls.minversion
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_MAX_VERSION
valueFrom:
configMapKeyRef:
key: reposerver.tls.maxversion
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_CIPHERS
valueFrom:
configMapKeyRef:
key: reposerver.tls.ciphers
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_REPO_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: reposerver.repo.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: REDIS_SERVER
valueFrom:
configMapKeyRef:
key: redis.server
name: argocd-cmd-params-cm
optional: true
- name: REDISDB
valueFrom:
configMapKeyRef:
key: redis.db
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_DEFAULT_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: reposerver.default.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: HELM_CACHE_HOME
value: /helm-working-dir
- name: HELM_CONFIG_HOME
value: /helm-working-dir
- name: HELM_DATA_HOME
value: /helm-working-dir
image: quay.io/argoproj/argocd:v2.1.6
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz?full=true
port: 8084
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
name: argocd-repo-server
ports:
- containerPort: 8081
protocol: TCP
- containerPort: 8084
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8084
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
runAsNonRoot: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts
- mountPath: /app/config/tls
name: tls-certs
- mountPath: /app/config/gpg/source
name: gpg-keys
- mountPath: /app/config/gpg/keys
name: gpg-keyring
- mountPath: /app/config/reposerver/tls
name: argocd-repo-server-tls
- mountPath: /tmp
name: tmp
- mountPath: /helm-working-dir
name: helm-working-dir
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: argocd-ssh-known-hosts-cm
name: ssh-known-hosts
- configMap:
defaultMode: 420
name: argocd-tls-certs-cm
name: tls-certs
- configMap:
defaultMode: 420
name: argocd-gpg-keys-cm
name: gpg-keys
- emptyDir: {}
name: gpg-keyring
- emptyDir: {}
name: tmp
- emptyDir: {}
name: helm-working-dir
- name: argocd-repo-server-tls
secret:
defaultMode: 420
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
optional: true
secretName: argocd-repo-server-tls
status:
availableReplicas: 2
conditions:
- lastTransitionTime: "2021-07-01T13:58:33Z"
lastUpdateTime: "2021-11-05T14:37:17Z"
message: ReplicaSet "argocd-repo-server-64986946d" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
- lastTransitionTime: "2021-11-10T09:06:50Z"
lastUpdateTime: "2021-11-10T09:06:50Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
observedGeneration: 5
readyReplicas: 2
replicas: 2
updatedReplicas: 2
And my controller:
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"application-controller","app.kubernetes.io/name":"argocd-application-controller","app.kubernetes.io/part-of":"argocd"},"name":"argocd-application-controller","namespace":"argocd"},"spec":{"replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/name":"argocd-application-controller"}},"serviceName":"argocd-application-controller","template":{"metadata":{"labels":{"app.kubernetes.io/name":"argocd-application-controller"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/name":"argocd-application-controller"}},"topologyKey":"kubernetes.io/hostname"},"weight":100},{"podAffinityTerm":{"labelSelector":{"matchLabels":{"app.kubernetes.io/part-of":"argocd"}},"topologyKey":"kubernetes.io/hostname"},"weight":5}]}},"containers":[{"command":["argocd-application-controller","--status-processors","20","--operation-processors","10","--redis","argocd-redis-ha-haproxy:6379"],"env":[{"name":"ARGOCD_RECONCILIATION_TIMEOUT","valueFrom":{"configMapKeyRef":{"key":"timeout.reconciliation","name":"argocd-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER","valueFrom":{"configMapKeyRef":{"key":"repo.server","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS","valueFrom":{"configMapKeyRef":{"key":"controller.repo.server.timeout.seconds","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS","valueFrom":{"configMapKeyRef":{"key":"controller.status.processors","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS","valueFrom":{"configMapKeyRef":{"key":"controller.operation.processors","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT","valueFrom":{"configMapKeyRef":{"key":"controller.log.format","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_LOGLEVEL","valueFrom":{"configMapKeyRef":{"key":"controller.log.level","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION","valueFrom":{"configMapKeyRef":{"key":"controller.metrics.cache.expiration","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS","valueFrom":{"configMapKeyRef":{"key":"controller.self.heal.timeout.seconds","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT","valueFrom":{"configMapKeyRef":{"key":"controller.repo.server.plaintext","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS","valueFrom":{"configMapKeyRef":{"key":"controller.repo.server.strict.tls","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_APP_STATE_CACHE_EXPIRATION","valueFrom":{"configMapKeyRef":{"key":"controller.app.state.cache.expiration","name":"argocd-cmd-params-cm","optional":true}}},{"name":"REDIS_SERVER","valueFrom":{"configMapKeyRef":{"key":"redis.server","name":"argocd-cmd-params-cm","optional":true}}},{"name":"REDISDB","valueFrom":{"configMapKeyRef":{"key":"redis.db","name":"argocd-cmd-params-cm","optional":true}}},{"name":"ARGOCD_DEFAULT_CACHE_EXPIRATION","valueFrom":{"configMapKeyRef":{"key":"controller.default.cache.expiration"
,"name":"argocd-cmd-params-cm","optional":true}}}],"image":"quay.io/argoproj/argocd:v2.1.6","imagePullPolicy":"Always","livenessProbe":{"httpGet":{"path":"/healthz","port":8082},"initialDelaySeconds":5,"periodSeconds":10},"name":"argocd-application-controller","ports":[{"containerPort":8082}],"readinessProbe":{"httpGet":{"path":"/healthz","port":8082},"initialDelaySeconds":5,"periodSeconds":10},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"readOnlyRootFilesystem":true,"runAsNonRoot":true},"volumeMounts":[{"mountPath":"/app/config/controller/tls","name":"argocd-repo-server-tls"},{"mountPath":"/home/argocd","name":"argocd-home"}],"workingDir":"/home/argocd"}],"serviceAccountName":"argocd-application-controller","volumes":[{"emptyDir":{},"name":"argocd-home"},{"name":"argocd-repo-server-tls","secret":{"items":[{"key":"tls.crt","path":"tls.crt"},{"key":"tls.key","path":"tls.key"},{"key":"ca.crt","path":"ca.crt"}],"optional":true,"secretName":"argocd-repo-server-tls"}}]}}}}
creationTimestamp: "2021-08-17T10:40:00Z"
generation: 3
labels:
app.kubernetes.io/component: application-controller
app.kubernetes.io/name: argocd-application-controller
app.kubernetes.io/part-of: argocd
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations: {}
f:labels:
.: {}
f:app.kubernetes.io/component: {}
f:app.kubernetes.io/name: {}
f:app.kubernetes.io/part-of: {}
f:spec:
f:podManagementPolicy: {}
f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:serviceName: {}
f:template:
f:metadata:
f:labels:
.: {}
f:app.kubernetes.io/name: {}
f:spec:
f:affinity:
.: {}
f:podAntiAffinity:
.: {}
f:preferredDuringSchedulingIgnoredDuringExecution: {}
f:containers:
k:{"name":"argocd-application-controller"}:
.: {}
f:command: {}
f:env:
.: {}
k:{"name":"ARGOCD_APP_STATE_CACHE_EXPIRATION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_LOGLEVEL"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_DEFAULT_CACHE_EXPIRATION"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"ARGOCD_RECONCILIATION_TIMEOUT"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"REDIS_SERVER"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
k:{"name":"REDISDB"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:configMapKeyRef:
.: {}
f:key: {}
f:name: {}
f:optional: {}
f:imagePullPolicy: {}
f:livenessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":8082,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
f:readinessProbe:
.: {}
f:failureThreshold: {}
f:httpGet:
.: {}
f:path: {}
f:port: {}
f:scheme: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:resources: {}
f:securityContext:
.: {}
f:allowPrivilegeEscalation: {}
f:capabilities:
.: {}
f:drop: {}
f:readOnlyRootFilesystem: {}
f:runAsNonRoot: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/app/config/controller/tls"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/home/argocd"}:
.: {}
f:mountPath: {}
f:name: {}
f:workingDir: {}
f:dnsPolicy: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:serviceAccount: {}
f:serviceAccountName: {}
f:terminationGracePeriodSeconds: {}
f:volumes:
.: {}
k:{"name":"argocd-home"}:
.: {}
f:emptyDir: {}
f:name: {}
k:{"name":"argocd-repo-server-tls"}:
.: {}
f:name: {}
f:secret:
.: {}
f:defaultMode: {}
f:items: {}
f:optional: {}
f:secretName: {}
f:updateStrategy:
f:rollingUpdate:
.: {}
f:partition: {}
f:type: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-09-15T14:32:31Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:template:
f:spec:
f:containers:
k:{"name":"argocd-application-controller"}:
f:image: {}
manager: kubectl
operation: Update
time: "2021-11-05T14:36:06Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:collisionCount: {}
f:currentReplicas: {}
f:currentRevision: {}
f:observedGeneration: {}
f:readyReplicas: {}
f:replicas: {}
f:updateRevision: {}
f:updatedReplicas: {}
manager: kube-controller-manager
operation: Update
time: "2021-11-09T18:39:01Z"
name: argocd-application-controller
namespace: argocd
resourceVersion: "188101727"
uid: 79bcb7b6-0fc2-41f2-81bf-946236fdf095
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
serviceName: argocd-application-controller
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: argocd-application-controller
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: argocd-application-controller
topologyKey: kubernetes.io/hostname
weight: 100
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/part-of: argocd
topologyKey: kubernetes.io/hostname
weight: 5
containers:
- command:
- argocd-application-controller
- --status-processors
- "20"
- --operation-processors
- "10"
- --app-resync
- "180"
- --self-heal-timeout-seconds
- "5"
- --repo-server
- argocd-repo-server:8081
- --repo-server-timeout-seconds
- "500"
- --redis
- argocd-redis-ha-haproxy:6379
- --repo-server-timeout-seconds
- "500"
env:
- name: ARGOCD_RECONCILIATION_TIMEOUT
valueFrom:
configMapKeyRef:
key: timeout.reconciliation
name: argocd-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER
valueFrom:
configMapKeyRef:
key: repo.server
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_TIMEOUT_SECONDS
valueFrom:
configMapKeyRef:
key: controller.repo.server.timeout.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_STATUS_PROCESSORS
valueFrom:
configMapKeyRef:
key: controller.status.processors
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_OPERATION_PROCESSORS
valueFrom:
configMapKeyRef:
key: controller.operation.processors
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_LOGFORMAT
valueFrom:
configMapKeyRef:
key: controller.log.format
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_LOGLEVEL
valueFrom:
configMapKeyRef:
key: controller.log.level
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_METRICS_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: controller.metrics.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_SELF_HEAL_TIMEOUT_SECONDS
valueFrom:
configMapKeyRef:
key: controller.self.heal.timeout.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_PLAINTEXT
valueFrom:
configMapKeyRef:
key: controller.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_CONTROLLER_REPO_SERVER_STRICT_TLS
valueFrom:
configMapKeyRef:
key: controller.repo.server.strict.tls
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APP_STATE_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: controller.app.state.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: REDIS_SERVER
valueFrom:
configMapKeyRef:
key: redis.server
name: argocd-cmd-params-cm
optional: true
- name: REDISDB
valueFrom:
configMapKeyRef:
key: redis.db
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_DEFAULT_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: controller.default.cache.expiration
name: argocd-cmd-params-cm
optional: true
image: quay.io/argoproj/argocd:v2.1.6
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8082
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: argocd-application-controller
ports:
- containerPort: 8082
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8082
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
runAsNonRoot: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/config/controller/tls
name: argocd-repo-server-tls
- mountPath: /home/argocd
name: argocd-home
workingDir: /home/argocd
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: argocd-application-controller
serviceAccountName: argocd-application-controller
terminationGracePeriodSeconds: 30
volumes:
- emptyDir: {}
name: argocd-home
- name: argocd-repo-server-tls
secret:
defaultMode: 420
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
optional: true
secretName: argocd-repo-server-tls
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
status:
collisionCount: 0
currentReplicas: 1
currentRevision: argocd-application-controller-b8fdd68dc
observedGeneration: 3
readyReplicas: 1
replicas: 1
updateRevision: argocd-application-controller-b8fdd68dc
updatedReplicas: 1
Still getting:
rpc error: code = Unknown desc = Manifest generation error (cached): `helm dependency build` failed exit status 1: Error: unable to move current charts to tmp dir: link error: cannot rename charts to tmpcharts: rename charts tmpcharts: file exists
Sad 😢
@kshamajain99 it doesn't seem to help (see ⬆️). ArgoCD is pretty central to my system... it's pretty much crushing my platform... is there any workaround and/or modification I can do to avoid this? Is this a scale thing? ArgoCD doesn't seem to be working very hard at all (CPU consumption seems really low).
Actually, scratch that @kshamajain99 - @gzur's solution seems to do the job... ArgoCD did become a lot slower in "converging" the state... but it's working now... nothing becomes "Unknown" due to the helm dep thing... Any idea when this will actually be solved? (i.e. when is helm 3.7 going to be merged into ArgoCD?)
Helm 3.7.1 is merged in argocd master and should be part of the next release (2.1.7): https://github.com/argoproj/argo-cd/commit/2770c690a5597fcbab344cd2ad494c918472bdd1
@zonnie the only fix I've found - until 2.1.7 hopefully solves it with Helm 3.7.1 - is to simply KILL all argocd repo pods, and that will flush the cache which causes the problem. The same way I have to kill all application-controller pods when a sync hangs forever (happens with the kube-prometheus chart, f.ex.).
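In kubectl terms that workaround is a one-liner; the label below matches the repo-server manifests posted earlier in this thread:

# Delete (and thereby recreate) every repo-server pod, discarding its state
kubectl -n argocd delete pod -l app.kubernetes.io/name=argocd-repo-server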
...Any idea when will this be actually solved? (i.e. when is helm 3.7 going to be merged to ArgoCD)

The fix for the underlying Helm issue was released in 3.7.x back in September and has already been merged into Argo CD:
https://github.com/argoproj/argo-cd/blob/2770c690a5597fcbab344cd2ad494c918472bdd1/hack/tool-versions.sh#L12
Are you by any chance using Helm 2 charts?
The below example, AFAIK, is helm v3, correct? apiVersion: v2 is for helm 3 while apiVersion: v1 is for helm 2 - correct?
apiVersion: v2
name: secrets-app
description: App that contains secrets
type: application
version: 0.1.0
@gzur ⬆️
Checklist:
argocd version.

Describe the bug
Some Applications based on helm fail to deploy due to some kind of internal filesystem issue.
For example, one of the apps that are in Unknown states:
This doesn't eventually resolve itself; it stays this way...

To Reproduce
I'm not sure how to reproduce - this happens from time to time and causes complete deadlock.
My Chart.yaml
My app-of-apps
My template

Expected behavior
The Application should be deployed successfully.

Screenshots

Version

Logs
Logs from the argocd-application-controller
Logs from argocd-repo-server