kubernetes-sigs / cluster-api

Home for Cluster API, a subproject of sig-cluster-lifecycle
https://cluster-api.sigs.k8s.io
Apache License 2.0

Improve ClusterCacheTracker TryLock behavior #10819

Open sbueringer opened 3 months ago

sbueringer commented 3 months ago

ClusterCacheTracker today allows only one controller worker at a time to retrieve a client. If a second controller worker tries at the same time, it gets an ErrClusterLocked error. This usually leads to a log line like "Requeuing because another worker has the lock on the ClusterCacheTracker" (log level 5) and a requeue.
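For reference, this is roughly the pattern callers end up with today (a minimal sketch; `Reconciler` and `reconcileWorkloadCluster` are illustrative, while `GetClient` and `ErrClusterLocked` are the actual `controllers/remote` package API):

```go
package sketch

import (
	"context"
	"errors"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"sigs.k8s.io/cluster-api/controllers/remote"
)

// Reconciler is illustrative; real CAPI controllers hold a
// *remote.ClusterCacheTracker in a similar way.
type Reconciler struct {
	Tracker *remote.ClusterCacheTracker
}

func (r *Reconciler) reconcileWorkloadCluster(ctx context.Context, clusterKey client.ObjectKey) (ctrl.Result, error) {
	// GetClient calls TryLock internally; if another worker already holds
	// the per-cluster lock, it fails immediately with ErrClusterLocked.
	remoteClient, err := r.Tracker.GetClient(ctx, clusterKey)
	if err != nil {
		if errors.Is(err, remote.ErrClusterLocked) {
			// The "Requeuing because another worker has the lock on the
			// ClusterCacheTracker" case described above.
			return ctrl.Result{RequeueAfter: time.Minute}, nil
		}
		return ctrl.Result{}, err
	}
	_ = remoteClient // ... use the client against the workload cluster ...
	return ctrl.Result{}, nil
}
```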

This was introduced for the case where a workload cluster is not reachable. In that case, when we try to create a client with the CCT, client creation times out after 10 seconds. In this scenario we wanted to block at most one worker rather than deadlock entire controllers.

I think the current behavior is not ideal insofar as TryLock just fails and returns immediately. Ideally we would try to get the lock for a small period of time, so we end up with the following results (see the sketch after the two paths below):

Happy path (cluster is reachable, we can create a client):

Unhappy path (cluster is not reachable, client creation times out after 10s):
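A minimal sketch of the proposed change: retry the lock for a bounded period instead of failing immediately. All names here are illustrative, not the real ClusterCacheTracker internals:

```go
package sketch

import (
	"sync"
	"time"
)

// keyedLock is a minimal stand-in for the per-cluster lock inside
// ClusterCacheTracker.
type keyedLock struct {
	mu   sync.Mutex
	held map[string]struct{}
}

// TryLock mirrors today's behavior: acquire the per-key lock if it is free,
// otherwise fail immediately.
func (k *keyedLock) TryLock(key string) bool {
	k.mu.Lock()
	defer k.mu.Unlock()
	if k.held == nil {
		k.held = map[string]struct{}{}
	}
	if _, ok := k.held[key]; ok {
		return false
	}
	k.held[key] = struct{}{}
	return true
}

// Unlock releases the per-key lock.
func (k *keyedLock) Unlock(key string) {
	k.mu.Lock()
	defer k.mu.Unlock()
	delete(k.held, key)
}

// TryLockWithTimeout is the proposed variant: keep retrying for a short,
// bounded period before giving up. On the happy path a second worker rarely
// sees ErrClusterLocked; on the unhappy path workers still can't block forever.
func (k *keyedLock) TryLockWithTimeout(key string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for !k.TryLock(key) {
		if time.Now().After(deadline) {
			return false
		}
		time.Sleep(10 * time.Millisecond) // simple polling; a real version might use channels
	}
	return true
}
```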

Or maybe we should re-think the whole mechanism and come up with something different :)

fabriziopandini commented 3 months ago

/assign

Added to my pipeline; please reach out if someone wants to give it a try before I get to it.

fabriziopandini commented 2 months ago

Another idea: we can take the opportunity to think about remote connection management in a more holistic way.

e.g. we can assign to the CCT the responsibility for creating remote clients, and let the other controllers use them if available. This could probably remove locking considerations from the other controllers (the client is either there or not, so there is no need to lock); this approach most probably requires some mechanism to trigger reconciliations when a remote client becomes available.
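A rough sketch of that shape (all names here are hypothetical, not the existing CCT API): a background routine owns client creation, and controllers only ever do a cheap, non-blocking lookup.

```go
package sketch

import (
	"sync"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Tracker is a hypothetical CCT that owns remote client creation: a
// background routine (not shown) creates and repairs clients, and
// controllers never block on creation.
type Tracker struct {
	mu      sync.RWMutex
	clients map[string]client.Client
}

// GetClientIfAvailable returns the remote client if one has already been
// created for the cluster; it never triggers or waits for client creation.
func (t *Tracker) GetClientIfAvailable(clusterKey string) (client.Client, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	c, ok := t.clients[clusterKey]
	return c, ok
}

// setClient is what the background creation routine would call once a client
// is ready; pairing this with an event source (e.g. a source.Channel) could
// be the mechanism that triggers reconciliation when a client becomes available.
func (t *Tracker) setClient(clusterKey string, c client.Client) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.clients == nil {
		t.clients = map[string]client.Client{}
	}
	t.clients[clusterKey] = c
}
```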

sbueringer commented 2 months ago

Absolutely in favor!

the client is there or not, no need to lock

I think we still have to lock, but we can split between read and write locks, which should help a lot (the big improvement is that no regular controller worker gets blocked for up to 10s while another worker tries to create a client and times out)
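A minimal sketch of that split, assuming a map of existing clients behind an RWMutex plus a per-cluster try-lock for the creation path (names are illustrative):

```go
package sketch

import (
	"errors"
	"sync"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// errClusterLocked mirrors the sentinel error discussed above (illustrative).
var errClusterLocked = errors.New("cluster is locked already")

// rwTracker sketches the read/write split: reads over existing clients take
// only a brief read lock, while the expensive creation path is guarded by a
// per-cluster try-lock so at most one worker pays the (up to 10s) cost.
type rwTracker struct {
	mu       sync.RWMutex
	clients  map[string]client.Client
	creating sync.Map // clusterKey -> struct{}{}, a crude per-key try-lock
}

func (t *rwTracker) GetClient(key string) (client.Client, error) {
	// Fast read path: never blocked behind client creation.
	t.mu.RLock()
	c, ok := t.clients[key]
	t.mu.RUnlock()
	if ok {
		return c, nil
	}

	// Slow path: only one worker per cluster may attempt creation;
	// everyone else fails fast and requeues instead of blocking.
	if _, loaded := t.creating.LoadOrStore(key, struct{}{}); loaded {
		return nil, errClusterLocked
	}
	defer t.creating.Delete(key)

	c, err := t.createClient(key) // may block up to 10s for unreachable clusters
	if err != nil {
		return nil, err
	}

	// Brief write lock only to publish the finished client.
	t.mu.Lock()
	if t.clients == nil {
		t.clients = map[string]client.Client{}
	}
	t.clients[key] = c
	t.mu.Unlock()
	return c, nil
}

// createClient stands in for the real client creation (illustrative).
func (t *rwTracker) createClient(key string) (client.Client, error) {
	_ = key
	return nil, errors.New("not implemented in this sketch")
}
```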

most probably requires some mechanism to trigger reconciliations

Today we just requeue after 1m if we hit ErrClusterLocked; we could do something similar if the client is not available.
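On the caller side that could look like the following, continuing the hypothetical GetClientIfAvailable API sketched above (none of this is existing API):

```go
package sketch

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// clientGetter is the hypothetical non-blocking lookup from the earlier sketch.
type clientGetter interface {
	GetClientIfAvailable(clusterKey string) (client.Client, bool)
}

type Reconciler struct {
	Tracker clientGetter
}

func (r *Reconciler) reconcile(ctx context.Context, clusterKey string) (ctrl.Result, error) {
	remoteClient, ok := r.Tracker.GetClientIfAvailable(clusterKey)
	if !ok {
		// Same shape as today's ErrClusterLocked handling: back off and retry.
		return ctrl.Result{RequeueAfter: time.Minute}, nil
	}
	_ = remoteClient // ... continue reconciling against the workload cluster ...
	return ctrl.Result{}, nil
}
```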

sbueringer commented 3 weeks ago

/assign