envoyproxy / envoy

Cloud-native high-performance edge/middle/service proxy
https://www.envoyproxy.io
Apache License 2.0
25.02k stars 4.82k forks

Why "Envoy Front proxy" takes 1000ms vs 5ms #28819

Closed eeliu closed 1 year ago

eeliu commented 1 year ago

Description: When I read https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html, I found a strange performance-related problem.

Below is my testing process:

  1. clone envoy from https://github.com/envoyproxy/envoy.git
  2. cd envoy/examples/front-proxy
  3. run docker compose up --build -d (as shown at https://www.envoyproxy.io/docs/envoy/latest/start/sandboxes/front_proxy.html)
  4. attach into front-proxy-front-envoy container
  5. run ab -n100 -c 10 http://localhost:8080/service/1
     (ab -> envoy -> service1)
  6. then run ab -n100 -c 10 http://service1:8080/service/1
     (ab -> service1)

Step-5 Result

root@e596ebac2783:/# ab -n100 -c 10 http://localhost:8080/service/1
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient).....done

Server Software:        envoy
Server Hostname:        localhost
Server Port:            8080

Document Path:          /service/2
Document Length:        16 bytes

Concurrency Level:      10
Time taken for tests:   10.015 seconds
Complete requests:      100
Failed requests:        0
Non-2xx responses:      100
Total transferred:      16600 bytes
HTML transferred:       1600 bytes
Requests per second:    9.99 [#/sec] (mean)
Time per request:       1001.462 [ms] (mean)
Time per request:       100.146 [ms] (mean, across all concurrent requests)
Transfer rate:          1.62 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:   997 1001   1.5   1001    1003
Waiting:        0    0   0.3      0       1
Total:        998 1001   1.5   1002    1004

Percentage of the requests served within a certain time (ms)
  50%   1002
  66%   1002
  75%   1002
  80%   1002
  90%   1003
  95%   1004
  98%   1004
  99%   1004
 100%   1004 (longest request)

Step-6 Result

root@13cc44a7008a:/# ab -n100 -c 10 http://service1:8080/service/1
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking service1 (be patient).....done

Server Software:        Python/3.11
Server Hostname:        service1
Server Port:            8080

Document Path:          /service/1
Document Length:        79 bytes

Concurrency Level:      10
Time taken for tests:   0.058 seconds
Complete requests:      100
Failed requests:        0
Total transferred:      23100 bytes
HTML transferred:       7900 bytes
Requests per second:    1729.57 [#/sec] (mean)
Time per request:       5.782 [ms] (mean)
Time per request:       0.578 [ms] (mean, across all concurrent requests)
Transfer rate:          390.17 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       1
Processing:     2    5   0.8      5       7
Waiting:        1    4   1.0      4       6
Total:          2    5   0.8      6       7
WARNING: The median and mean for the total time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      6
  66%      6
  75%      6
  80%      6
  90%      7
  95%      7
  98%      7
  99%      7
 100%      7 (longest request)

Questions

  1. I expect some overhead when going through a proxy server, but why is it this slow (1000 ms vs 5 ms)?

  2. Did I miss something?

Best regards !

eeliu commented 1 year ago

envoy.yaml from https://github.com/envoyproxy/envoy/blob/main/examples/front-proxy/envoy.yaml


static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service/1"
                route:
                  cluster: service1
              - match:
                  prefix: "/service/2"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
ningyougang commented 1 year ago

It's a meaningful test. Are there any tuning points for the above configuration?

eeliu commented 1 year ago

fixed by https://github.com/envoyproxy/envoy/issues/18608#issuecomment-944476023

       "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
       http_protocol_options:
           accept_http_10: true
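For context, my reading of the linked issue (not confirmed by the maintainers): ab speaks HTTP/1.0, which Envoy rejects by default. The step-5 output is consistent with this: 100 non-2xx responses with a 16-byte body ("Upgrade Required" is 16 bytes), and the ~1001 ms processing time lines up with Envoy's delayed_close_timeout, whose documented default is 1000 ms. An HTTP/1.0 client without keep-alive typically reads until the server closes the connection, so it absorbs the entire delayed close. A self-contained simulation of that effect with plain sockets (no Envoy involved; the 1.0 s sleep stands in for the delayed close):

```python
import socket
import threading
import time

def serve_once(listener: socket.socket) -> None:
    """Answer one request, then hold the connection open ~1 s before closing."""
    conn, _ = listener.accept()
    conn.recv(4096)  # consume the request
    # HTTP/1.0-style response without Content-Length: only EOF ends the body,
    # so the client cannot finish until the server actually closes the socket.
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    time.sleep(1.0)  # stand-in for a ~1000 ms delayed connection close
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

start = time.monotonic()
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /service/1 HTTP/1.0\r\n\r\n")
raw = b""
while chunk := client.recv(4096):  # must read to EOF: there is no Content-Length
    raw += chunk
elapsed = time.monotonic() - start
payload = raw.split(b"\r\n\r\n", 1)[1].decode()
print(payload, round(elapsed, 1))  # body arrives immediately, yet ~1 s total
```

The body itself arrives in microseconds; the measured request time is dominated by waiting for the close, which is the same shape as the ab numbers above (Waiting: ~0 ms, Processing: ~1001 ms).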