envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io
Apache License 2.0

feature request: RoundRobin policy that is global to all workers #2593

Open · coder-ad opened this issue 6 years ago

coder-ad commented 6 years ago

Title: RoundRobin policy is not strictly followed

Description: One requirement of our application is to maintain equal load on all back-end servers, to avoid delays in response time, since ours is a real-time application. We use Envoy to load-balance requests and have configured it to use round-robin load balancing. We observed that Envoy does not follow the round-robin policy strictly and allocates the same backend server to consecutive requests several times, causing 15% of incoming requests to be delayed or to time out.

We are using gRPC, and maxConcurrentStream is set to 1 in the backend server code.

The tcpdump log below shows the backend server allocation. Note that the same port is allocated for the first two and the last two requests:

01:21:54.671624 IP 172.17.0.2.46967 > 10.21.22.54.8084: Flags [S], seq 946982500, win 29200, options [mss 1460,sackOK,TS val 1862106908 ecr 0,nop,wscale 7], length 0
01:21:54.708240 IP 172.17.0.2.46968 > 10.21.22.54.8084: Flags [S], seq 1295445122, win 29200, options [mss 1460,sackOK,TS val 1862106944 ecr 0,nop,wscale 7], length 0
01:21:55.964614 IP 172.17.0.2.43617 > 10.21.22.54.8083: Flags [S], seq 2631563209, win 29200, options [mss 1460,sackOK,TS val 1862108201 ecr 0,nop,wscale 7], length 0
01:21:57.291685 IP 172.17.0.2.39114 > 10.21.22.54.8081: Flags [S], seq 3719683624, win 29200, options [mss 1460,sackOK,TS val 1862109528 ecr 0,nop,wscale 7], length 0
01:21:58.467050 IP 172.17.0.2.58096 > 10.21.22.54.8082: Flags [S], seq 3453355119, win 29200, options [mss 1460,sackOK,TS val 1862110703 ecr 0,nop,wscale 7], length 0
01:21:59.036410 IP 172.17.0.2.46539 > 10.21.22.54.8085: Flags [S], seq 2718307517, win 29200, options [mss 1460,sackOK,TS val 1862111273 ecr 0,nop,wscale 7], length 0
01:21:59.961480 IP 172.17.0.2.39132 > 10.21.22.54.8081: Flags [S], seq 134575929, win 29200, options [mss 1460,sackOK,TS val 1862112198 ecr 0,nop,wscale 7], length 0
01:22:00.883728 IP 172.17.0.2.43651 > 10.21.22.54.8083: Flags [S], seq 2605516159, win 29200, options [mss 1460,sackOK,TS val 1862113120 ecr 0,nop,wscale 7], length 0
01:22:01.503618 IP 172.17.0.2.58115 > 10.21.22.54.8082: Flags [S], seq 4238162324, win 29200, options [mss 1460,sackOK,TS val 1862113740 ecr 0,nop,wscale 7], length 0
01:22:01.855618 IP 172.17.0.2.49230 > 10.21.22.54.8086: Flags [S], seq 1693824585, win 29200, options [mss 1460,sackOK,TS val 1862114092 ecr 0,nop,wscale 7], length 0
01:22:03.363875 IP 172.17.0.2.49239 > 10.21.22.54.8086: Flags [S], seq 2656329961, win 29200, options [mss 1460,sackOK,TS val 1862115600 ecr 0,nop,wscale 7], length 0
01:22:03.395605 IP 172.17.0.2.46566 > 10.21.22.54.8085: Flags [S], seq 1861063800, win 29200, options [mss 1460,sackOK,TS val 1862115632 ecr 0,nop,wscale 7], length 0
01:22:05.131695 IP 172.17.0.2.47039 > 10.21.22.54.8084: Flags [S], seq 4052379179, win 29200, options [mss 1460,sackOK,TS val 1862117368 ecr 0,nop,wscale 7], length 0
01:22:05.170569 IP 172.17.0.2.47040 > 10.21.22.54.8084: Flags [S], seq 694894180, win 29200, options [mss 1460,sackOK,TS val 1862117407 ecr 0,nop,wscale 7], length 0

There is no special setup needed to reproduce this behavior/bug. Could you please advise how to enforce a stricter round-robin policy? Alternatively, we need a setting which ensures a second connection is not made to an occupied backend host while other hosts are available.
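The reporter's fallback ask, never connect to an occupied host while an idle one exists, is essentially least-request selection (Envoy's LEAST_REQUEST policy is the closest built-in match, though it is also applied per worker). A toy Python sketch of the idea, not Envoy's implementation:

```python
# Hypothetical illustration: pick the host with the fewest in-flight
# requests, so an idle host always wins over a busy one.
active = {"10001": 0, "10002": 0, "10003": 0}  # in-flight requests per host

def pick_least_loaded():
    host = min(active, key=active.get)  # idle hosts beat busy ones
    active[host] += 1                   # mark the request as in flight
    return host

first = pick_least_loaded()
second = pick_least_loaded()
print(first, second)  # two different hosts: "10001" is busy after the first pick
```

Under this policy a host would only receive a second concurrent request once every host already has one in flight, which is exactly the property the reporter wants.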

alyssawilk commented 6 years ago

Could you perhaps run this with Envoy logs? Keep in mind that Envoy assigns incoming requests to connections in round-robin order, but if Envoy is configured to use only one stream per connection, each selection will likely have to prefetch a new TCP connection. Requests are then proxied to backends based on connection establishment time, and slow backends may end up seeing bursts of batched requests. I wouldn't expect a tcpdump to show strict round robin, as it would be prone to race conditions. I'd strongly suggest allowing more than one concurrent stream per connection to see if it helps.

Also note that round-robin order is enforced per worker thread, so if you have many workers, it is expected that you won't see strict round-robin ordering across the entire Envoy instance. I think the Envoy logs would make that clear to you as well.
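The per-worker point can be sketched in a few lines of Python (a toy model of the scheduling behavior, not Envoy source code): each worker keeps its own round-robin cursor over the same host list, so the global pick sequence depends on which worker happened to handle each request.

```python
import itertools

hosts = ["10001", "10002", "10003"]

def make_worker():
    # each worker thread gets its own independent round-robin cursor
    return itertools.cycle(hosts)

worker_a, worker_b = make_worker(), make_worker()

# Requests land on workers in some arbitrary interleaving; each worker
# is perfectly round-robin on its own, yet the combined sequence is not.
picks = [next(worker_a), next(worker_b),  # both workers start at "10001"
         next(worker_a), next(worker_a)]
print(picks)  # ['10001', '10001', '10002', '10003']
```

With N workers, up to N consecutive requests can land on the same host even though every worker is strictly round-robin locally.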

coder-ad commented 6 years ago

In this specific service, we achieve concurrency by running multiple processes (say, 6) per backend host, where each process serves only one request at a time. We prefer not to allow more than one concurrent stream on the same TCP connection, hence maxConcurrentStream=1. The reason: without maxConcurrentStream=1, there is a higher chance that multiple streams are assigned to the same backend process. While the first request is being processed by Process 1, a subsequent request to the same process would be buffered and therefore delayed or timed out.

We would like all 6 processes to be occupied before any such buffering occurs. So we expect the round-robin load-balancing policy to ensure that all 6 processes are each serving one request before any process receives a second request.

But the issue with Envoy's round-robin policy is that Process 1 receives a second request even when Processes 4, 5, and 6 are free, delaying that second request even though a free process is available to serve it.
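The cost of doubling up can be shown with a toy queueing model (illustrative numbers, not taken from the attached logs; a 1-second service time is assumed): a request sent to a busy single-request process waits a full service time, while under strict round robin it would start immediately.

```python
SERVICE_TIME = 1.0  # assumed per-request processing time

def completion_times(assignments):
    """assignments: process id per request, all arriving at t=0."""
    busy_until = {}
    finish = []
    for proc in assignments:
        start = busy_until.get(proc, 0.0)       # wait for the process to free up
        busy_until[proc] = start + SERVICE_TIME
        finish.append(busy_until[proc])
    return finish

strict_rr = completion_times([1, 2, 3, 4, 5, 6])   # one request per process
doubled_up = completion_times([1, 1, 3, 4, 5, 6])  # second request hits busy Process 1
print(max(strict_rr), max(doubled_up))  # 1.0 2.0
```

Doubling up doubles the worst-case latency here even though two processes sit idle, which matches the delayed/timed-out requests the reporter observes.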

I am attaching two log files:

  1. debug-log-envoy.txt - containing the envoy proxy debug log
  2. tcp-syn.txt - containing the order of connection establishment from envoy proxy to 6 backend servers running on ports 10001-10006.

debug-log-envoy.txt tcp-syn.txt

Both logs are from the same run, so the timing information can be matched. The LB policy is set to round robin. In tcp-syn.txt, it can be seen that Envoy selects the same backend host for consecutive streams multiple times, e.g.:

03:44:00.730150 IP 172.17.0.2.54441 > 10.21.19.74.10003: Flags [S], seq 1546618277, win 29200, options [mss 1460,sackOK,TS val 1957067163 ecr 0,nop,wscale 7], length 0
03:44:01.440205 IP 172.17.0.2.54450 > 10.21.19.74.10003: Flags [S], seq 1519261357, win 29200, options [mss 1460,sackOK,TS val 1957067873 ecr 0,nop,wscale 7], length 0

03:44:13.709466 IP 172.17.0.2.60497 > 10.21.19.74.10005: Flags [S], seq 2141763892, win 29200, options [mss 1460,sackOK,TS val 1957080143 ecr 0,nop,wscale 7], length 0
03:44:15.277573 IP 172.17.0.2.60501 > 10.21.19.74.10005: Flags [S], seq 2752526685, win 29200, options [mss 1460,sackOK,TS val 1957081711 ecr 0,nop,wscale 7], length 0
03:44:16.081523 IP 172.17.0.2.35273 > 10.21.19.74.10004: Flags [S], seq 68912636, win 29200, options [mss 1460,sackOK,TS val 1957082515 ecr 0,nop,wscale 7], length 0
03:44:16.433086 IP 172.17.0.2.35275 > 10.21.19.74.10004: Flags [S], seq 1417231608, win 29200, options [mss 1460,sackOK,TS val 1957082866 ecr 0,nop,wscale 7], length 0

03:44:32.338933 IP 172.17.0.2.60571 > 10.21.19.74.10005: Flags [S], seq 4056853784, win 29200, options [mss 1460,sackOK,TS val 1957098772 ecr 0,nop,wscale 7], length 0
03:44:32.569419 IP 172.17.0.2.60573 > 10.21.19.74.10005: Flags [S], seq 2862227294, win 29200, options [mss 1460,sackOK,TS val 1957099003 ecr 0,nop,wscale 7], length 0

03:44:33.996674 IP 172.17.0.2.35344 > 10.21.19.74.10004: Flags [S], seq 1780740184, win 29200, options [mss 1460,sackOK,TS val 1957100430 ecr 0,nop,wscale 7], length 0
03:44:36.148775 IP 172.17.0.2.35350 > 10.21.19.74.10004: Flags [S], seq 1113628942, win 29200, options [mss 1460,sackOK,TS val 1957102582 ecr 0,nop,wscale 7], length 0
...and so on.

alyssawilk commented 6 years ago

And one more thing to confirm: you're running Envoy with --concurrency=1 to ensure there's only one worker thread? As said before, with multiple worker threads you absolutely cannot expect anything like the strict round robin you're hoping for.

coder-ad commented 6 years ago

We did not set --concurrency, so the default value is used. We will try with --concurrency=1, check again, and update here.

But a single worker might give lower performance. So, is there any way to use a single worker only for the backend selection of an HTTP/2 stream, while all subsequent processing of that stream happens across multiple workers? This way we could achieve a true round-robin policy as well as parallel stream processing for better performance.

ggreenway commented 6 years ago

No, there is not currently any way to accomplish that with concurrency > 1.

coder-ad commented 6 years ago

We verified true round-robin allocation of backend servers after setting --concurrency=1 (single worker). Thanks for the suggestion; it worked. If we could achieve such true round-robin allocation with multiple workers in a future version of Envoy, that would be even better!

mattklein123 commented 6 years ago

I went ahead and changed the title of the issue and marked it as a feature request. I think it's conceivably possible that, with creative use of atomics and TLS (thread-local storage), we could have a load balancer that does this without sacrificing performance, but it will require some thinking.
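A minimal sketch of what such a feature would provide, in plain Python with a lock standing in for the atomic counter Matt alludes to (illustrative only, not Envoy code): all worker threads draw from one shared round-robin cursor, so the global pick sequence is strict regardless of which worker handles a request.

```python
import itertools
import threading

class GlobalRoundRobin:
    """One round-robin cursor shared by all worker threads."""

    def __init__(self, hosts):
        self._cycle = itertools.cycle(hosts)
        self._lock = threading.Lock()

    def pick(self):
        # In C++ this could be a single atomic fetch_add modulo len(hosts),
        # avoiding lock contention on the hot path.
        with self._lock:
            return next(self._cycle)

lb = GlobalRoundRobin(["10001", "10002", "10003"])
picks = [lb.pick() for _ in range(6)]
print(picks)  # ['10001', '10002', '10003', '10001', '10002', '10003']
```

The trade-off Matt raises is that this shared state is exactly what Envoy's per-worker design avoids; the feature request is to get the global ordering without paying for cross-thread synchronization on every request.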

stale[bot] commented 6 years ago

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

coder-ad commented 6 years ago

Help wanted. Still looking for a resolution to this issue.

chadr123 commented 4 years ago

Is this still not possible? I have faced the same issue. I am aware that setting --concurrency=1 may work, but as you know, it sacrifices performance. :(