Geal opened this issue 7 years ago
In gitlab by @divarvel on Mar 9, 2016, 18:05
The proxy server should be near Clever Cloud, to be closer to actual use cases
We could add other websites running on Clever Cloud (e.g. Play websites we can duplicate)
Gatling could be a better tool for stress testing
In gitlab by @Geal on Nov 5, 2016, 15:22
draft for a testing protocol:
The tests should be performed in 3 setups:
Trying with a direct connection to the backend is useful to know how much latency is introduced by the proxies.
The clients could use wrk2 for performance testing, since its percentile calculation looks correct. httperf and autobench can be useful to get information on the number of errors.
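wrk2 inherits wrk's Lua scripting, so a single run can already report both the latency percentiles and the error counts in a machine-readable way. A minimal sketch of such a reporting hook, assuming wrk2 keeps wrk's documented done()/latency:percentile()/summary.errors API:

-- report.lua: print one machine-readable line per run, so a sweep over
-- connections and request rates can be collected and compared later.
-- Latency values and summary.duration are expressed in microseconds.
done = function(summary, latency, requests)
  local errors = summary.errors.connect + summary.errors.read
               + summary.errors.write + summary.errors.timeout
  io.write(string.format(
    "achieved_rps=%.0f p50=%.2f p90=%.2f p99=%.2f p999=%.2f max=%.2f errors=%d non2xx=%d\n",
    summary.requests / (summary.duration / 1e6),  -- achieved requests per second
    latency:percentile(50) / 1000,                -- milliseconds
    latency:percentile(90) / 1000,
    latency:percentile(99) / 1000,
    latency:percentile(99.9) / 1000,
    latency.max / 1000,
    errors,
    summary.errors.status))
end

Loaded with the -s flag, each run then yields one line that can be appended to a CSV and compared across runs and across proxies.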
Parameters to modify during the tests:
For a specific number of connections, run a series of tests with increasing requests per second. Do not stop as soon as performance degrades, connections are refused, or errors appear; keep pushing further (a possible driver script is sketched below).
How should such tests be represented graphically? For a given number of concurrent connections, draw curves of the latency percentiles as a function of the number of requests per second? On a graph of RPS versus the number of concurrent connections, show the quantity of errors or lost connections with a color gradient?
Which kernel settings must be tuned to get good performance?
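Coming back to the rate sweep: a small driver can shell out to wrk2 with increasing rates and the reporting script above. The sketch below is in Lua simply because LuaJIT is already part of the toolbox; the binary name and flags follow wrk2's README, and the target address is a placeholder to adapt to the actual setup:

-- sweep.lua: for a fixed number of connections, run wrk2 at increasing
-- request rates, and keep going past the first signs of degradation.
local target      = "http://proxy-host:8080/"   -- placeholder proxy address
local connections = 100
local rate        = 1000

while rate <= 64000 do
  local cmd = string.format(
    "wrk -t4 -c%d -d60s -R%d --latency -s report.lua %s",
    connections, rate, target)
  print("== " .. cmd)
  os.execute(cmd)
  rate = rate * 2                               -- double the target rate each step
end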
I started to build the benchmark infrastructure based on Docker, with docker-compose.
The purpose is to make the benchmarks easy for anyone to reproduce.
Link to the repository: proxy-benchmarks
                                    +------------------+
                               +--->|  nginx backend   |
                               |    +------------------+
+---------------+   +-------+  |    +------------------+
|  wrk2 client  +-->| Proxy +--+--->|  nginx backend 2 |
+---------------+   +-------+  |    +------------------+
                               |    +------------------+
                               +--->|    nginx <n>     |
                                    +------------------+
They serve static JSON content. They expose different endpoints where the response is a JSON document of varying size (between 100kB and 5MB).
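If the proxy-benchmarks repository does not already ship those files, a throwaway script along these lines can generate JSON fixtures of roughly the right sizes (file names and exact sizes are placeholders):

-- gen_payloads.lua: write static JSON files of roughly the requested sizes,
-- to be served by the nginx backends. Adjust names and sizes to the real endpoints.
local sizes = {
  ["100kB.json"] = 100 * 1024,
  ["1MB.json"]   = 1024 * 1024,
  ["5MB.json"]   = 5 * 1024 * 1024,
}

for name, bytes in pairs(sizes) do
  local f = assert(io.open(name, "w"))
  f:write('{"payload":"')
  f:write(string.rep("x", bytes - 14))   -- 14 bytes of JSON wrapper around the padding
  f:write('"}')
  f:close()
end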
Optimisations made for sending files:
The proxies supported for these benchmarks will be:
Sozu
HAProxy
Envoy
Traefik
NOTE: We can add more in the future.
The client will send a large number of requests to the different nginx endpoints (the response size will vary).
We'll use LuaJIT to change the requested endpoint dynamically.
Examples:
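A minimal sketch of that rotation, assuming wrk2 exposes wrk's request()/wrk.format() scripting API (the paths are placeholders for the actual backend routes):

-- endpoints.lua: rotate over the backend endpoints so the response size
-- changes from one request to the next.
local paths   = { "/100kB.json", "/1MB.json", "/5MB.json" }   -- placeholder routes
local counter = 0

request = function()
  counter = counter + 1
  return wrk.format("GET", paths[(counter % #paths) + 1])
end

This hook can live in the same script as the reporting hook sketched earlier, since wrk2 loads a single script per run.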
NOTE: We should take a look at siege (an HTTP load testing and benchmarking utility).
TODO
TODO
TODO
Here's the kind of measurements I'd like to see:
The benchmarks should be easy to reproduce (like, "use this docker-compose file to reproduce"), and allow comparisons between different runs (to see if sozu improves, or to compare with other proxies).
There's a really good benchmark of nginx vs caddy here:
https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/
Plus the setup is all made public. Maybe piggybacking on this would yield some useful results?
In gitlab by @Geal on Mar 9, 2016, 18:02
Current tests are done on a 512MB droplet from Digital Ocean, with HTTP requests generated by ab or wrk, from a Dedibox. The proxy is forwarding requests to the Clever Cloud frontend, with the hostname "http://wp-perf-test.cleverapps.io" or "http://rust.cleverapps.io".
Issues with this:
Ideally, a benchmark would compare with HAProxy, with load balancing over multiple backends, and it would gradually increase the number of concurrent requests until the proxy starts failing. That benchmark would also record perf info, with sysdig or something else.