Closed piomin closed 1 year ago
Hi Piotr,
I assume that you've measured this usage with very low volume traffic. Skupper's load balancing is based on backlog-per-outgoing-path at each router and its goal is to reduce and equalize the latency experienced by each client. If you were to significantly increase the traffic load, I believe you would see a more even distribution across the three servers.
As a counterpoint to your observation: Assume you have two servers, one in C2 and one in C3. Further assume that the server in C2 has fewer available resources and processes service traffic more slowly than the other. Round robin balancing would overload C2 and underutilize C3. Skupper, in this case, will send more traffic to C3 under load.
To directly answer your question, Skupper does not offer strict cross-network round-robin. We believe that what we offer is better in real-world situations.
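The backlog idea can be sketched with a toy simulation. This is an editor's illustration, not Skupper's actual implementation: a simplified dispatcher that either round-robins requests or sends each request to the server with the fewest outstanding requests, with one server three times slower than the other.

```python
# Toy discrete-event sketch of least-backlog vs. round-robin dispatch.
# NOT Skupper's actual algorithm -- just an illustration of why
# backlog-aware balancing shifts load toward the faster server.
import heapq

def simulate(num_requests, service_times, policy, arrival_gap=0.1):
    outstanding = [0] * len(service_times)   # requests in flight per server
    counts = [0] * len(service_times)        # requests dispatched per server
    completions = []                         # min-heap of (finish_time, server)
    t = 0.0
    rr = 0
    for _ in range(num_requests):
        # retire requests that have finished by now
        while completions and completions[0][0] <= t:
            _, s = heapq.heappop(completions)
            outstanding[s] -= 1
        if policy == "round_robin":
            s = rr % len(service_times)
            rr += 1
        else:  # "least_backlog": pick the server with the shortest queue
            s = min(range(len(service_times)), key=lambda i: outstanding[i])
        outstanding[s] += 1
        counts[s] += 1
        heapq.heappush(completions, (t + service_times[s], s))
        t += arrival_gap
    return counts

# Server 0 is 3x faster than server 1 (think C3 vs. an overloaded C2).
print(simulate(1000, [0.5, 1.5], "round_robin"))    # [500, 500]
print(simulate(1000, [0.5, 1.5], "least_backlog"))  # noticeably more to server 0
```

Under load, round-robin splits 50/50 regardless of server speed, while the least-backlog policy routes more requests to the faster server, which is the behavior described above.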
-Ted
On Wed, Aug 2, 2023 at 4:29 PM Piotr Mińkowski wrote:
Let's say I have three clusters c1, c2, and c3. c2 is linked to c1, and c3 is also linked to c1. I'm running my app on both the c2 and c3 clusters: one pod running in the c2 cluster, and two pods running in the c3 cluster. Currently, the traffic is split equally between the linked clusters, so a single pod in the c3 cluster receives only 1/2 of the traffic received by the single pod in the c2 cluster. Ok, I know I can change the cost of the link, and set e.g. 2 for the link between c2 and c1. But what if I enable autoscaling on the c3 cluster? I would like to have the option (maybe not the default one) to split the traffic equally between all running pods.
Hi Ted,
Thanks for your response. No, I'm testing it with siege, as shown below (10,000 requests in total):
siege -r 100 -c 100 http://localhost:8080/persons/1
To simplify, I'm testing on a single cluster in two different namespaces, with the same app deployed in both: a single pod in the first namespace (interconnect-1), and three pods in the second namespace (interconnect-2). Both apps have the same ~50ms delay and the same resource limits and requests.
Here's the network status:
$ skupper network status
Sites:
╰─ [local] 5f9ef97 - interconnect-1
URL: skupper-inter-router-interconnect-1.apps.cluster-8wp6f.8wp6f.sandbox2524.opentlc.com
mode: interior
name: interconnect-1
namespace: interconnect-1
version: 1.4.2-rh-1
╰─ Services:
╰─ name: spring-kotlin-http
address: spring-kotlin-http:8080
protocol: http
╰─ Targets:
├─ name: sample-spring-kotlin-microservice.interconnect-1
╰─ name: sample-spring-kotlin-microservice.interconnect-2
Results:
interconnect-1.pod: ~5100 reqs
interconnect-2.pod: ~1400 reqs
interconnect-2.pod: ~1600 reqs
interconnect-2.pod: ~2000 reqs
So it is still 50%-50% between the two services. My point is that it should be 25% to 75%, because there are three pods running in the interconnect-2 namespace while only a single pod is running in interconnect-1.
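A back-of-the-envelope check (editor's sketch, using the numbers reported in this thread) of what a per-pod-even split of the siege run would look like, versus what was observed:

```python
# What a per-pod-even (25%/75%) split of the siege run would look like.
# Numbers come from the thread above; the calculation itself is illustrative.
total = 100 * 100                      # siege -r 100 -c 100 -> 10,000 requests
pods = {"interconnect-1": 1, "interconnect-2": 3}

expected = {ns: total * n // sum(pods.values()) for ns, n in pods.items()}
print(expected)                        # {'interconnect-1': 2500, 'interconnect-2': 7500}

observed = {"interconnect-1": 5100, "interconnect-2": 1400 + 1600 + 2000}
print(observed)                        # roughly 50/50 per namespace, not 25/75
```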
I had to expose services in this way:
$ skupper service create spring-kotlin-http 8080 --protocol=http
$ skupper service bind spring-kotlin-http service sample-spring-kotlin-microservice
$ skupper service bind spring-kotlin-http service sample-spring-kotlin-microservice.interconnect-2.svc.cluster.local
That's because I'm using Knative services. It is related to this issue: https://github.com/skupperproject/skupper/issues/1192
Hi Piotr,
The issue is in the way you have the services configured. You can see right in the 'skupper network status' output that there are two targets. Since you are exposing "services" and not "pods," Skupper is effectively outsourcing its load balancing to the services. Skupper only sees two destinations.
If you were exposing deployments, Skupper would monitor the pod selectors and maintain a one-to-one relationship between pods and targets.
I understand that you are trying to use Knative in the mix as well. Unfortunately, for Knative serving to work, you need to squeeze your traffic through a service. This defeats the load balancing capabilities of Skupper by hiding the number of actual targets in the mix.
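The two-level split described above can be put in numbers (an editor's illustration with the thread's figures, not Skupper internals): the router balances evenly across its two Service targets, and each Kubernetes Service then balances evenly across its own pods.

```python
# Two-level balancing: the Skupper router sees only two targets (one per
# Kubernetes Service); each Service then spreads its share over its pods.
total = 10_000
pods_behind_target = {"interconnect-1": 1, "interconnect-2": 3}

per_target = total // len(pods_behind_target)   # router: even across 2 targets
per_pod = {ns: per_target // n for ns, n in pods_behind_target.items()}
print(per_target)   # 5000 requests to each Service
print(per_pod)      # {'interconnect-1': 5000, 'interconnect-2': 1666}
```

This matches the observed pattern: ~5100 requests to the single interconnect-1 pod versus ~1400-2000 to each interconnect-2 pod.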
All of that being said, I still assume that your siege load is not stressing your servers in this case. If the traffic load were such that the interconnect-1 server started backing up, I believe the interconnect-2 servers would assume more of the load.
-Ted
Ok, I understand. I'm also sure that if the interconnect-1 server slowed down in response time, more traffic would be forwarded to interconnect-2. But that is not my point. Knative autoscales based on traffic volume. That's why my original issue (which I was referring to here) was about adding built-in support for Knative Services. But I guess that is not possible, or would be hard to achieve, in Skupper currently.
I believe that Knative is not well suited to managing/scaling resources across multiple clusters. Instrumentation from Skupper could be used to inform the scaling of deployments, and also to decide which clusters should be scaled in a multi-cluster setup, but such an arrangement probably wouldn't use Knative.
Let's say I have three clusters: c1, c2, and c3. c2 is linked to c1, and c3 is also linked to c1. I'm running my app on both the c2 and c3 clusters: one pod running in the c2 cluster, and two pods running in the c3 cluster. I'm exposing that service with Skupper and calling it from another app running on the c1 cluster. Currently, the traffic is split equally between the linked clusters, so a single pod in the c3 cluster receives only 1/2 of the traffic received by the single pod in the c2 cluster. Ok, I know I can change the cost of the link, and set e.g. 2 for the link between c2 and c1. But what if I enable autoscaling on the c3 cluster? I would like to have the option (maybe not the default one) that allows splitting the traffic equally between all running pods.