argoproj / argo-cd

Declarative Continuous Deployment for Kubernetes
https://argo-cd.readthedocs.io
Apache License 2.0

Resource tree slow refresh #8172

Open klamkma opened 2 years ago

klamkma commented 2 years ago

Hello,

Describe the bug

We have a big Kubernetes cluster with almost 3000 Argo CD applications. Currently we are running Argo CD 2.2.2. Since upgrading to version 2, we noticed that the refresh of the resource tree for applications is much slower. For example: when I click "Restart" for a Deployment, the ReplicaSet appears immediately, but the new pod sometimes appears only after 40 seconds. I've tried increasing the --status-processors, --operation-processors, and --kubectl-parallelism-limit values for the controller, but it does not help. Any idea what we could do? Which component is responsible for this refresh, is it argocd-server?

To Reproduce

I click on "Restart" for a deployment ReplicaSet appears immediately New pod appears sometimes after 40 seconds

Expected behavior

Pods should appear faster.

Version

argocd: v2.2.2+03b17e0
  BuildDate: 2022-01-01T06:27:52Z
  GitCommit: 03b17e0233e64787ffb5fcf65c740cc2a20822ba
  GitTreeState: clean
  GoVersion: go1.16.11
  Compiler: gc
  Platform: linux/amd64

Thank you.

yydzhou commented 2 years ago

Same here. Previously we had 4800+ applications and Argo CD handled them pretty well, although with some slowness on application listing. After some re-org, we have 3000+ applications now. However, since upgrading to v2, refresh and sync have become very, very slow. A refresh, which is supposed to finish in a few seconds, can run up to 2 minutes. The wait for sync is even longer. Compared to the previous experience, I believe there is a lot of room for performance tuning/improvement.

alexmt commented 2 years ago

It is really difficult to troubleshoot this remotely. The controller might be CPU throttled, the repo server might need to be scaled up, or the control plane's Kubernetes API server might be slow.

@klamkma, @yydzhou, if possible, can we have an interactive session (e.g. a Zoom call) and debug it together? Later we could document the changes we've made to help anyone else who faces this issue.

yeya24 commented 2 years ago

Thank you @alexmt. Would be great to have a debug session together with @yydzhou.

klamkma commented 2 years ago

Hello, I'm available for a session too. Thank you @alexmt.

klamkma commented 2 years ago

Hi again,

I enabled ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM. Could you give me some tips on how to use it to investigate performance issues?

Thank you

leotomas837 commented 1 year ago

Any update on this? We are experiencing the same issue. It may be a duplicate of this issue.

There is enough RAM, CPU, and disk space, and we tried multiplying the number of replicas of the controller and server pods by 4 just to see if it helps, but it did not help at all.

jujubetsz commented 1 year ago

I have the same problem: 2.5k apps, Helm, Argo v2.6.6, one very big cluster (HML). I can't see any problem like throttling, OOMs, or resource starvation. I did all the recommended tuning for high performance. Argo CD has a pool of big nodes just for itself. Tomorrow I will try to debug the Kubernetes cluster to see if the control plane is OK.

AnubhavSabarwal commented 1 year ago

Is there any solution for this? We have around 6000 applications and the Argo CD version is 2.7.2.

  1. Sync and refresh are very slow.
  2. Restarts and deletions of ReplicaSets or Deployments don't show up in the Argo CD UI.
  3. Whenever you delete a Deployment, Pod, or ReplicaSet, the Argo CD UI always says it doesn't exist.

klamkma commented 1 year ago

Hi, we saw a huge improvement in the UI by enabling --enable-gzip, but pod refresh is still very slow.

evs-ops commented 10 months ago

Any news? We have the same problem with an even smaller setup of about 1000 apps across 6 clusters. I think it might be related to the fact that we have about 5 or 6 plugins, but that's not a huge cluster. Any thoughts?

jujubetsz commented 10 months ago

@evs-ops, Hi.

I've tried every possible tuning and version of Argo CD and got no improvement. Since my cluster is running on OpenStack/Rancher inside my company's cloud, I'm now improving the cluster itself: upgrading the Kubernetes version, etcd performance, etc. I'm doing this because I'm seeing lots of timeouts to Kubernetes in the application-controller, and also because none of the tuning worked. Logs:

time="2024-01-08T15:31:23Z" level=info msg="Failed to watch Deployment.apps on https://x.x.x.x:443: Resyncing Deployment.apps on https://x.x.x.x:443 due to timeout, retrying in 1s" server="https://kubernetes.default.svc"
time="2024-01-08T15:34:15Z" level=info msg="Failed to watch Secret on https://x.x.x.x:443: Resyncing Secret on https://x.x.x.x:443 due to timeout, retrying in 1s" server="https://kubernetes.default.svc"
time="2024-01-08T15:35:20Z" level=info msg="Failed to watch ReplicationController on https://x.x.x.x:443: Resyncing ReplicationController on https://x.x.x.x:443 due to timeout, retrying in 1s" server="https://kubernetes.default.svc"

The symptoms I'm experiencing are:

Navigation in the Argo CD web UI is fast as expected, but if I delete a pod, for example, nothing happens. The box with that pod persists in the Argo CD frontend, but if I watch the namespace using kubectl, the pod is being killed and a new pod is being scheduled. After several minutes (10-15m) the new pod shows up in the Argo CD frontend. This happens with every object owned by Argo CD.

evs-ops commented 10 months ago

Hi, this is very similar to my problem. If I delete something, it probably takes about 5 to 10 minutes to show up. A refresh takes no less than 2 minutes and up to 5 minutes. It's new to me, since in my previous roles I used Argo and it was lightning fast :(

jujubetsz commented 10 months ago

Hi,

Having more clusters to manage is not a bad thing from my point of view. You can have one replica of the application-controller for each cluster. Here are some docs and posts that may help you:

https://www.infracloud.io/blogs/sharding-clusters-across-argo-cd-application-controller-replicas/
https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/#argocd-application-controller
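For anyone curious what sharding actually does, here is a minimal, illustrative Go sketch of hash-based shard assignment (not the exact Argo CD implementation, which supports other sharding algorithms as well): each managed cluster is hashed to a shard number, and each application-controller replica only reconciles the clusters mapped to its own shard.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardForCluster assigns a cluster to one of N application-controller
// replicas by hashing its ID. Each replica then only reconciles the
// clusters that map to its own shard number.
func shardForCluster(clusterID string, replicas int) int {
	h := fnv.New32a()
	_, _ = h.Write([]byte(clusterID))
	return int(h.Sum32()) % replicas
}

func main() {
	// Cluster names below are placeholders, not real cluster IDs.
	for _, c := range []string{"in-cluster", "prod-us", "prod-eu"} {
		fmt.Printf("%s -> shard %d\n", c, shardForCluster(c, 3))
	}
}
```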

Did you try that?

Another question: are your clusters managed (GKE, EKS, etc.), or are they like mine: self-deployed and self-managed?

jujubetsz commented 10 months ago

@evs-ops,

I bumped my version to v2.10.0-rc4 in order to test the jitter implementation for reconciliation. You can check the proposal and description in issues/14241.

The results so far are incredible: no delay at all in the Argo CD UI. If I delete a pod, the new pod appears instantly, so I recommend you try it if possible. I bumped the version this morning and have had a stable environment so far. I will update this thread if something new happens.
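As I understand the proposal, the point of the jitter is to spread app refreshes out instead of letting them all fire at exactly the same interval. A rough sketch of the idea (illustrative only, not the actual controller code), using the 600s timeout and 180s jitter values from my setup below:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextReconcile returns when an application should next be refreshed.
// Without jitter every app refreshes exactly every `timeout`, so refreshes
// pile up in the same time window; adding a random jitter spreads them out.
func nextReconcile(timeout, jitter time.Duration) time.Duration {
	if jitter <= 0 {
		return timeout
	}
	return timeout + time.Duration(rand.Int63n(int64(jitter)))
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(nextReconcile(600*time.Second, 180*time.Second))
	}
}
```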

Some general info about my environment:

2.7k apps
Lots of monorepos; each team/tribe has one, ranging from 10 to 200 apps
Only one cluster
Kubernetes v1.25 running on OpenStack/Rancher in a private cloud
Argo CD components have tons of resources available
Reconciliation timeout: 600s
Reconciliation jitter: 180s

[Attached charts: git-requests, cpu-usage-total, network-total, reconciliation-activity, cluster-events, reconciliation-performance]

machine3 commented 10 months ago

Have you found the reason?

ritheshgm commented 9 months ago

+1

When running 3,000 applications and engaging in activities such as syncing 200 applications, clicking "Restart" for a deployment immediately displays the ReplicaSet, but new pods may take up to two minutes to appear.

machine3 commented 9 months ago

Does anyone have any ideas for solving the problem, or a temporary solution?

CryptoTr4der commented 8 months ago

Same problem here. Refresh is very slow (~3-5 min) per application, even with version 2.10.2. One Git repo (monorepo) with ~50 applications; the CMP argocd-vault-plugin is deployed as a sidecar.

I've tried many things, but nothing helps at the moment.

gazidizdaroglu commented 5 months ago

When running 3,000 applications and engaging in activities such as syncing 200 applications, clicking "Restart" for a deployment immediately displays the ReplicaSet, but new pods may take up to two minutes to appear.

+1

daftping commented 5 months ago

We are encountering a similar issue. In large clusters where Argo CD monitors numerous resources, it is significantly slow in processing watches—taking approximately 7 minutes in our case. Consequently, the Argo CD UI displays outdated information and adversely affects several functionalities that depend on sync waves, such as PruneLast. Eventually, the volume of events from the cluster overwhelmed the system, causing Argo CD to stall completely.

To mitigate this, we disabled tracking of Pods and ReplicaSets, which unfortunately diminishes one of the primary advantages of the Argo CD UI. We also disregarded all irrelevant events and attempted to optimize various settings in the application controller. However, scaling the application controller vertically showed no effect, and horizontal scaling is not feasible for a single cluster due to sharding constraints.

CryptoTr4der commented 5 months ago

We have removed all argocd config plugins (switched from argocd-vault-plugin to vault-secrets-webhook) and now everything seems to work smoothly

gazidizdaroglu commented 4 months ago

Hey, this thread can help you as well!

https://cloud-native.slack.com/archives/C01TSERG0KZ/p1721141931660909

machine3 commented 4 months ago

Hey, this thread can help you as well!

https://cloud-native.slack.com/archives/C01TSERG0KZ/p1721141931660909

I'm sorry, I can't access the link you provided. Could you please share some details with me?

mpelekh commented 3 months ago

We are encountering a similar issue. In large clusters where Argo CD monitors numerous resources, it is significantly slow in processing watches—taking approximately 7 minutes in our case. Consequently, the Argo CD UI displays outdated information and adversely affects several functionalities that depend on sync waves, such as PruneLast. Eventually, the volume of events from the cluster overwhelmed the system, causing Argo CD to stall completely.

To mitigate this, we disabled tracking of Pods and ReplicaSets, which unfortunately diminishes one of the primary advantages of the Argo CD UI. We also disregarded all irrelevant events and attempted to optimize various settings in the application controller. However, scaling the application controller vertically showed no effect, and horizontal scaling is not feasible for a single cluster due to sharding constraints.

We are observing precisely the same issue you described. ArgoCD v2.10.9. @daftping, did you find a way to resolve the issue without disabling tracking pods and replica sets?

andrii-korotkov-verkada commented 3 months ago

The fix is on master and will be part of v2.13. It optimizes the resource tree DFS from O(<tree_size> * <namespace_resource_count>) to O(<namespace_resource_count>).
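Roughly, the idea is to index children by owner UID once and then walk that index, instead of re-scanning every resource in the namespace for each node of the tree. A simplified, illustrative sketch (not the actual gitops-engine code, which also handles cycles and cross-group references):

```go
package main

import "fmt"

type resource struct {
	UID       string
	Name      string
	OwnerUIDs []string
}

// buildChildIndex indexes namespace resources by owner UID once, so the
// tree walk below is O(resource count) instead of re-scanning the whole
// namespace for every node in the tree.
func buildChildIndex(resources []resource) map[string][]resource {
	index := make(map[string][]resource, len(resources))
	for _, r := range resources {
		for _, owner := range r.OwnerUIDs {
			index[owner] = append(index[owner], r)
		}
	}
	return index
}

// walk does a DFS over the pre-built index starting from rootUID.
func walk(rootUID string, index map[string][]resource, visit func(resource)) {
	for _, child := range index[rootUID] {
		visit(child)
		walk(child.UID, index, visit)
	}
}

func main() {
	// Hypothetical Deployment -> ReplicaSet -> Pod chain.
	resources := []resource{
		{UID: "rs-1", Name: "app-7d9f", OwnerUIDs: []string{"deploy-1"}},
		{UID: "pod-1", Name: "app-7d9f-abc", OwnerUIDs: []string{"rs-1"}},
	}
	index := buildChildIndex(resources)
	walk("deploy-1", index, func(r resource) { fmt.Println(r.Name) })
}
```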

mpelekh commented 3 months ago

Hi @andrii-korotkov-verkada. Thanks for replying. Do you mean the following fixes?

Thanks for your contribution. IterateHierarchyV2 looks promising.

I actually patched v2.10.9 with the above commits. It helped, but did not fully resolve the issue.

Even though the patches significantly improve performance, Argo CD still cannot handle the load from large clusters.

In the screenshot, you can see one of our largest clusters. Here, the v2.10.9 build patched with the above commits is running.

As can be seen, once Pods and ReplicaSets are enabled for tracking, the cluster event count falls close to zero, and reconciliation time increases drastically.

Screenshot 2024-08-09 at 20 40 44

Screenshot 2024-08-09 at 20 51 00

Number of Pods in the cluster: ~76k
Number of ReplicaSets in the cluster: ~52k

@andrii-korotkov-verkada Do you have any ideas on what can be improved?

crenshaw-dev commented 3 months ago

Are you hitting CPU throttling?

mpelekh commented 3 months ago

@crenshaw-dev No, we don't set CPU limits at all and still have plenty of resources on the node.

We found that the potential reason is lock contention.

I added a few more metrics and found that when the number of events is significant, it sometimes takes ~5 minutes to acquire a lock, which delays reconciliation: https://github.com/mpelekh/gitops-engine/commit/560ef00bcce9201083200f906f15bf1716fbfcc0#diff-9c9e197d543705f08c9b1bc2dc404a55506cfc2935a988e6007d248257aadb1aR1372
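The added metric is essentially a stopwatch around the lock acquisition. A simplified sketch of the pattern (the linked commit is the authoritative version; names and thresholds here are illustrative):

```go
package main

import (
	"log"
	"sync"
	"time"
)

// lockWithTiming acquires mu and logs how long the caller had to wait.
// Long waits here are a direct signal of lock contention in the event
// processing path.
func lockWithTiming(mu *sync.Mutex, name string) {
	start := time.Now()
	mu.Lock()
	if wait := time.Since(start); wait > time.Second {
		log.Printf("acquiring %s lock took %s", name, wait)
	}
}

func main() {
	var mu sync.Mutex
	lockWithTiming(&mu, "cluster cache")
	defer mu.Unlock()
	// ... process the cluster event while holding the lock ...
}
```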

Screenshot 2024-08-09 at 21 11 33

NOTE: The following metrics were collected on 2.10.9 patched with the following commits:

andrii-korotkov-verkada commented 3 months ago

I had this attempt, https://github.com/argoproj/gitops-engine/issues/602, but the benchmark showed neutral-to-regression results in terms of throughput. Maybe average latency can still get better, I don't know.

crenshaw-dev commented 3 months ago

I'm curious how much of a performance win you saw from just IterateHierarchy, @mpelekh. Those changes are mostly useful for situations where you have a ton of resources in a single namespace.

I'm also super curious whether Andrii's locking improvements help with this. If so, that's a strong case for merging those changes.

mpelekh commented 3 months ago

I'm curious how much of a performance win you saw from just IterateHierarchy

@crenshaw-dev The comparison is as follows:

Large cluster

Number of Pods in the cluster: ~76k
Number of ReplicaSets in the cluster: ~52k

v2.10.9 without improvements, only additional metrics are added (deployed at 18:00 according to Grafana charts)

Pods and ReplicaSets were enabled for watching at 18:25.

Screenshot 2024-08-09 at 22 42 54 Screenshot 2024-08-09 at 22 42 16 Screenshot 2024-08-09 at 22 41 50

v2.10.9 with improvements (https://github.com/argoproj/gitops-engine/commit/6b2984ebc47085852a7b63a0fd0b73c52e986217 and https://github.com/argoproj/argo-cd/commit/267f243a899483fea0a4e6a613c18f62bd342c7e) and additional metrics (deployed at 21:30 according to Grafana charts)

Screenshot 2024-08-09 at 22 57 32 Screenshot 2024-08-09 at 22 57 12

Even in this enormous cluster, a very tidy performance improvement can be observed with v2.10.9 plus IterateHierarchyV2 (the cluster event count is not completely zero; it's ~3-5k).

Smaller cluster

Here are the results from a smaller cluster (compared to the previous one). Pods and ReplicaSets are watched.
Pods: ~18k
ReplicaSets: ~17k

Screenshot 2024-08-09 at 22 14 33 Screenshot 2024-08-09 at 22 14 44 Screenshot 2024-08-09 at 22 15 00 Screenshot 2024-08-09 at 22 15 27

Before 21:45, the v2.10.9 version from upstream was running. After 21:45, the patched v2.10.9 version with additional metrics and with the following commits was running.

This is the case when IterateHierarchyV2 improves performance significantly.

crenshaw-dev commented 3 months ago

Gotcha. So IterateHierarchy gets us ~90% of the way there, but on a huge cluster we'll still have significant lock contention.

mpelekh commented 3 months ago

Am also super curious if Andrii's locking improvements help with this. If so, that's a strong case for merging those changes.

@crenshaw-dev I am going to create a patched v2.10.9 image with additional metrics and the following fixes:

I will share the results once I test it in a large cluster. FYI @andrii-korotkov-verkada

andrii-korotkov-verkada commented 3 months ago

If the pods and replica sets are excluded from tracking, would they not even show up in Argo UI, or would it just make them potentially stale?

mpelekh commented 3 months ago

If the pods and replica sets are excluded from tracking, would they not even show up in Argo UI, or would it just make them potentially stale?

@andrii-korotkov-verkada If the Pods and ReplicaSets are excluded from tracking, they are not visible in the Argo CD UI; only the Deployment is visible, with nothing underneath it.

mpelekh commented 3 months ago

@crenshaw-dev I am going to create a patched v2.10.9 image with additional metrics and the following fixes:

I will share the results once I test it in a large cluster. FYI @andrii-korotkov-verkada

As we agreed, I tested the patched v2.10.9 build with the following fixes:

tl;dr

The results are almost the same as with only the IterateHierarchyV2 improvement. Once Pods and ReplicaSets are enabled for tracking, the cluster event count falls close to zero. The logs demonstrate that even though we added the changes that optimize lock usage, we still have significant lock contention.

Details

The patched image was deployed to one of the largest clusters, where Pods and ReplicaSets are excluded from tracking.

Screenshot 2024-08-13 at 15 38 23

Please take a look at where one of the additional log statements was added: https://github.com/mpelekh/gitops-engine/blob/e773bed14ca188333ce5f3aa9ca08ab582eff360/pkg/cache/cluster.go#L1429. This log shows how much time it takes to acquire the lock.

The results are as follows.

Pods and ReplicaSets are disabled from tracking

time to gather logs - from 2024-08-12T15:13:10Z to 2024-08-12T15:24:21Z
total number of processed events during that time - 76482
from 0ms to 1000ms - 75678
from 1000ms to 10000ms - 795
from 10000ms to 20000ms - 9
from 20000ms to 30000ms - 0
from 30000ms to 40000ms - 0
from 40000ms to 50000ms - 0
from 50000ms to 60000ms - 0
from 60000ms and higher - 0

Enable ReplicaSets to be tracked. The pods are still excluded.

The API count increased to 85 and the resource count became ~130k.

time to gather logs - from 2024-08-12T15:25:41Z to 2024-08-12T15:52:27Z
total number of processed events during that time - 123501
from 0ms to 1000ms - 120403
from 1000ms to 10000ms - 3085
from 10000ms to 20000ms - 4
from 20000ms to 30000ms - 3
from 30000ms to 40000ms - 6
from 40000ms to 50000ms - 0
from 50000ms to 60000ms - 0
from 60000ms and higher - 0

Include ReplicaSets and Pods for watching

Screenshot 2024-08-12 at 19 24 29

time to gather logs - from 2024-08-12T16:18:58Z to 2024-08-12T16:38:10Z
total number of processed events during that time - 14856
from 0ms to 1000ms - 11006
from 1000ms to 10000ms - 3812
from 10000ms to 20000ms - 8
from 20000ms to 30000ms - 15
from 30000ms to 40000ms - 1
from 40000ms to 50000ms - 5
from 50000ms to 60000ms - 4
from 60000ms and higher - 5

We see that the number of processed events decreased significantly when the ReplicaSets and Pods were included for watching. The logs demonstrate that even though we added the changes that optimized the lock usage, we still have significant lock contention.

Do you have any thoughts on how we can better optimize lock usage so we can handle such a huge number of resources (~210k when ReplicaSets and Pods are included)?

FYI @crenshaw-dev @andrii-korotkov-verkada

mpelekh commented 3 months ago

Has anyone tried replacing the global lock approach with fine-grained locking to avoid lock contention?
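To make the question concrete: by fine-grained locking I mean something like sharding the cache and giving each shard its own mutex, so that events touching unrelated resources don't serialize on one global lock. A purely illustrative sketch, not a proposal for the actual gitops-engine data structures:

```go
package main

import (
	"hash/fnv"
	"sync"
)

const shards = 32

// shardedCache splits the resource cache into independently locked shards,
// so two events touching unrelated resources do not contend on one mutex.
type shardedCache struct {
	locks [shards]sync.Mutex
	data  [shards]map[string]string // keyed by resource UID; value is a stand-in
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.data {
		c.data[i] = make(map[string]string)
	}
	return c
}

func (c *shardedCache) shard(key string) int {
	h := fnv.New32a()
	_, _ = h.Write([]byte(key))
	return int(h.Sum32() % shards)
}

// Set locks only the shard that owns the key.
func (c *shardedCache) Set(key, value string) {
	i := c.shard(key)
	c.locks[i].Lock()
	defer c.locks[i].Unlock()
	c.data[i][key] = value
}

func main() {
	c := newShardedCache()
	c.Set("pod-uid-123", "app-7d9f-abc")
}
```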

andrii-korotkov-verkada commented 3 months ago

Unfortunately, I don't have ideas on how to optimize lock usage. The approach I'm looking into now is a cell architecture with multiple clusters: https://github.com/argoproj/argo-cd/discussions/19607.

Ga13Ou commented 2 months ago

We are experiencing the slowness issue on our bigger clusters as well. It would be really helpful for debugging if those temporary lock metrics were added to the metrics exported by Argo CD.

mpelekh commented 1 month ago

@crenshaw-dev This PR https://github.com/argoproj/gitops-engine/pull/629 resolves the problem described above. FYI @andrii-korotkov-verkada

andrii-korotkov-verkada commented 2 weeks ago

Version 2.13.0 has the optimizations mentioned above. @klamkma, can you upgrade and let us know if it solves the issue, please?

klamkma commented 2 weeks ago

@andrii-korotkov-verkada, your PRs are still open: https://github.com/argoproj/argo-cd/pull/20329 https://github.com/argoproj/gitops-engine/pull/629

Are you sure there is any fix in version 2.13.0?

Thanks!

andrii-korotkov-verkada commented 2 weeks ago

The lock contention fix idea didn't work, but another idea, a more optimal tree construction, did.

andrii-korotkov-verkada commented 2 weeks ago

The linked PRs are not mine though.

klamkma commented 2 weeks ago

Lock contention fix idea didn't work, but another idea with a more optimal tree construction did.

Thanks for the reply, I will let you know once we have tested it.

mpelekh commented 2 weeks ago

@klamkma, the following fixes from @andrii-korotkov-verkada resolve the issue in most cases, and they are part of the v2.13 release:

But on huge clusters we still have significant lock contention, which leads to performance issues. The following PRs aim to resolve the performance issues seen on large clusters but not observed on smaller ones. More details here: https://github.com/argoproj/gitops-engine/pull/629#issuecomment-2393085365

These PRs are still open, and I am focused on making them ready to be merged.

vladst3f commented 1 week ago

Hi @mpelekh, any ETA on when they'd get merged? Cheers

mpelekh commented 4 days ago

Hi @mpelekh, any ETA on when they'd get merged? Cheers

@vladst3f Thanks for reaching out! I'm actively working on it and am interested in getting it merged. I'll keep you updated and aim to have it ready soon. I appreciate your patience and enthusiasm!