stevenzzzz opened 4 years ago
/cc @antoniovicente
An explicit goal would be to be able to share connections across clusters that share a ConnectionPool. Different LB algorithms may not be required. In fact, keeping a single copy of the LB data structures would reduce memory usage in cases where you have large numbers of clusters that share a ConnectionPool.
See also https://github.com/envoyproxy/envoy/issues/8702. Depending on how this is implemented it would be nice to also have a configuration/implementation in which we handle upstream connections in a true thread pool that is also shared between workers. This would also allow users to trade-off some additional CPU/contention for lower connection counts and memory usage in certain deployments. Let's chat if someone is going to work on this.
/cc @chaoqin-li1123
I think that sharing LB structures between clusters is an explicit goal of this effort. #8702 would provide additional reductions in resource usage, but is no replacement for the changes requested in this issue.
Ideally, different clusters would use different connection attributes, e.g. socket options or TLS context. That means that even though the endpoint sets are the same, the connection pools should not be shared.
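To make that concern concrete, here is a minimal sketch (cluster names, addresses, and SNI values are illustrative assumptions, not from this thread) of two Envoy clusters that resolve to the same endpoints but carry different upstream TLS attributes, which is why their connections are not naively interchangeable:

```yaml
clusters:
- name: service_a                    # illustrative name
  type: STRICT_DNS
  load_assignment:
    cluster_name: service_a
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: shared-backend.internal, port_value: 443 }
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      sni: service-a.internal        # per-cluster TLS attribute
- name: service_b
  type: STRICT_DNS
  load_assignment:
    cluster_name: service_b
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: shared-backend.internal, port_value: 443 }
  transport_socket:
    name: envoy.transport_sockets.tls
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
      sni: service-b.internal        # differs from service_a, so a pooled connection
                                     # for one cluster cannot serve the other
```

Same endpoint set, but a TLS handshake performed with one SNI cannot be reused for the other, so any pool-sharing scheme would have to key pools on these connection attributes, not just on the endpoints.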
I do agree there are fields that vary among the clusters, but the connection pools can still be shared. What is the major pain point? LB?
Is this still active? I stumbled upon this issue, as we would love to see this feature. We have many clusters (200-500) that resolve to the same set of endpoints, so we open separate connections and connection pools for each cluster. To reduce all kinds of resource usage, we would like to reuse the connections to the endpoints across multiple clusters. We only need different load balancing per cluster, since we need the separate clusters only for different circuit-breaking behaviour.
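For that circuit-breaking-only use case, a hedged sketch of what each of those 200-500 near-identical clusters looks like today (names and thresholds are illustrative, not from this thread): the whole cluster, its endpoints, and its pools are duplicated just to carry a per-tenant `circuit_breakers` block:

```yaml
clusters:
- name: tenant_a                     # one of many near-identical clusters
  type: STRICT_DNS
  load_assignment:
    cluster_name: tenant_a
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: shared-backend.internal, port_value: 8080 }
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 50            # the only per-cluster difference
      max_pending_requests: 100
      max_requests: 200
```

If ConnectionPool were decoupled from Cluster, the `circuit_breakers` thresholds could stay per-cluster while the underlying connections to `shared-backend.internal` were pooled once.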
ahh, I think it's still "alive". I saw you asked in a related issue (https://github.com/envoyproxy/envoy/issues/8702#issuecomment-2252536576) as well. :P
This feature would be really nice, but the blockers are always system complexity and folks' cycles.
OTOH, when there is no such feature and you have a real issue in prod, you could possibly dance around the Envoy config protos to make your Envoy cluster an "MT" cluster: route all the traffic to the same backend group, but differentiate the traffic for the previously "different" clusters using some header, path, authority, etc.
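A hedged sketch of that workaround (the header name, match values, and cluster names are illustrative assumptions, not from this thread): collapse the many per-service clusters into one shared cluster, and recover the old per-cluster distinction at the route level. Note that per-cluster settings such as circuit breakers collapse too, which is the trade-off of this dance:

```yaml
route_config:
  virtual_hosts:
  - name: backend
    domains: ["*"]
    routes:
    - match:
        prefix: "/"
        headers:
        - name: x-tenant                      # illustrative header carrying the
          string_match: { exact: service-a }  # old per-cluster identity
      route:
        cluster: shared_backend               # single cluster, single set of pools
    - match:
        prefix: "/"
      route:
        cluster: shared_backend               # default route for all other traffic
```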
Hi @stevenzzzz, thanks for your reply. What do you mean by "MT" cluster? Do you have a link or documentation?
nah, just some wild thoughts. there are two dimensions in this story, right?
Sometimes we deploy multiple services on a set of endpoints and define a set of clusters for those machines; each cluster has a pool connecting to the same set of endpoints.
Separating ConnectionPool from Cluster has many advantages: