dragonorloong opened this issue 5 years ago
In the tests I ran while working on https://github.com/envoyproxy/nighthawk, the difference between Envoy and nginx was nowhere near as pronounced as the results above. One thing I notice is that one test uses -t20 while the other uses -t30. Is there a reason for that difference?
It may also help to verify that connection-reuse is similar between the two tests.
Having said that, sometimes there's also good reason to sanity check reported numbers. For an example of that involving wrk2, Envoy, and HAProxy, see https://github.com/envoyproxy/envoy/issues/5536#issuecomment-484069712
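As a side note, one way to make the comparison apples-to-apples is to run both targets with identical load-generator settings. A minimal sketch, assuming the hosts, ports, and counts below (they are placeholders, not the original test parameters):

    # same thread count, connection count, and duration against both targets;
    # wrk defaults to HTTP/1.1 keep-alive, so connection reuse should be comparable
    wrk -t20 -c200 -d60s http://<envoy-host>:8000/
    wrk -t20 -c200 -d60s http://<nginx-host>:80/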
I ran comparison tests between YAStack-based Envoy and standalone Envoy with the direct response set up. YAStack-based Envoy runs three threads underneath, and I found the eal-intr-thread and the ev-source-exe thread vying for CPU time. After pinning these two threads to separate cores, standalone Envoy and YAStack Envoy performance was exactly the same.
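One way to do that kind of separation (not necessarily exactly what I did; the core IDs and thread-ID placeholders below are illustrative assumptions) is with taskset:

    # list the threads of the running (YAStack) Envoy process and note the
    # thread IDs of eal-intr-thread and the ev-source-exe thread
    ps -T -p $(pgrep -f envoy) -o spid,comm
    # pin each of the two contending threads to its own core (cores 2 and 3 here)
    taskset -cp 2 <tid-of-eal-intr-thread>
    taskset -cp 3 <tid-of-ev-source-exe>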
I have been using the https://github.com/rakyll/hey tool for my tests.
My fstack config file looks similar to what @dragonorloong has provided.
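For completeness, a typical hey invocation looks like the following; the concurrency, request count, and host are placeholders rather than my exact parameters:

    # -c is the number of concurrent workers, -n the total request count;
    # hey keeps connections alive by default
    hey -c 100 -n 200000 http://<envoy-host>:8000/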
Envoy Config file
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8000, provider: FP }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              # - match:
              #     prefix: "/service/1"
              #   route:
              #     cluster: service1
              - match:
                  #prefix: "/service/2"
                  prefix: "/"
                direct_response:
                  status: 200
                  body:
                    inline_string: <4 KB String>
          http_filters:
          - name: envoy.router
            config: {}
  # clusters:
  # - name: service1
  #   connect_timeout: 0.25s
  #   type: strict_dns
  #   lb_policy: round_robin
  #   http2_protocol_options: {}
  #   hosts:
  #   - socket_address:
  #       address: service1
  #       #address: 172.31.9.84
  #       port_value: 8000
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
      provider: HOST
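To sanity check that the direct response is actually served and to get a rough idea of connection reuse (per the earlier comment), the listener and the admin port from the config above can be queried; the grep pattern is only a guess at which counters are interesting:

    # fetch the 4 KB direct response through the listener
    curl -v http://127.0.0.1:8000/
    # downstream connection vs. request counters give a rough idea of connection reuse
    curl -s http://127.0.0.1:8001/stats | grep -E 'downstream_cx_total|downstream_rq_total'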
cc - @dragonorloong @oschaaf @ratnadeepb
My initial tests only compared vanilla Envoy vs. YAStack, and those numbers were encouraging.
I used wrk for my tests against a single-threaded version of YAStack; I was interested in per-core throughput, RPS, SSL RPS, SSL throughput, etc.
One thing I did notice is that nginx's event collection does not have indirections like libevent's. The indirections in libevent carry a small cost, but the benefit is that any other network-processing code can integrate with the DPDK-infused libevent.
One more test I ran was libevent-on-dpdk (without Envoy), and those numbers also looked good.
I am a little too held up with something else right now, but I plan to revisit this at some point.
I don't know if there is a problem with my configuration, but the performance of YAStack is much worse than nginx's.
Traffic paths:
wrk -> envoy (f-stack) -> nginx
wrk -> nginx (linux kernel) -> nginx
I modified the code so that the f-stack socket is always used.
envoy config file:
f-stack config file:
nginx (using the kernel network stack) config file:
test result:
envoy