Closed T-Dnzt closed 5 years ago
Still uneasy about these numbers. I'm going to baseline against an out-of-the-box Phoenix server to get another perspective.
A clean Phoenix project gives similar results to the networked tests, so I'm discarding the cross-machine results as invalid.
A same-machine load test to /api/admin/token.all gives the following:
Script | Path | Concurrency | Reqs/concurrency | Reqs/second | Mean response time (ms)
---|---|---|---|---|---
token_all same machine | /api/admin/token.all | 1 | 100 | 23.23 | 43.04
 | | 10 | 10 | 121.17 | 82.53
 | | 10 | 50 | 114.98 | 86.97
 | | 100 | 50 | 896.74 | 111.52
 | | 200 | 50 | 1401.75 | 142.68
 | | 300 | 50 | 975.25 | 307.61
 | | 400 | 50 | 497.15 | 804.59
 | | 400 | 10 | 1948.77 | 205.26
 | | 500 | 10 | 2169.16 | 230.50
 | | 600 | 10 | 2324.55 | 258.11
 | | 700 | 10 | 2442.27 | 286.62
 | | 800 | 10 | 2431.54 | 329.01
 | | 900 | 10 | 2444.97 | 368.10
 | | 1000 | 10 | 2449.01 | 408.00
Moving back to todo for a proper environment setup
Latest results using the implementation in #499:
Setup:
Results:
Path | TPS | Max Response Time | Mean | Min
---|---|---|---|---
/api/admin/transaction.create | Ping | 0.228 | 0.162 | 0.120
 | 1 | 52 | 39 | 29
 | 10 | 40 | 31 | 28
 | 20 | 40 | 30 | 26
 | 30 | 61 | 31 | 28
 | 40 | 2455 | 1532 | 54
 | 50 | Timeout | Timeout | Timeout
 | 100 | Timeout | Timeout | Timeout
 | 200 | Crashed | Crashed | Crashed
Quick takeaway: Supports up to 30 TPS with standard configurations (although it can be easily scaled up since it's a typical application server).
Next steps (v1.2):
A basic load test runner is available with #499. Profiling with AppSignal is available with #586.
The optimization will continue in #361.
This is a crude but quick load test done within a few hours with Apache Bench. It's nowhere near accurate, but given the setup, I believe it can serve as a minimum baseline:
Setup:
- `ab` benchmark (= pure brute-force)
- `dev` environment config
- `/api/admin` and `/api/admin/token.all`
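For reference, a sweep over concurrency levels like the ones reported here could be scripted around `ab`. This is a hypothetical sketch: the host, payload file, and request counts are assumptions, not taken from this issue, and the script prints the commands instead of running them so the sweep can be reviewed first.

```shell
# Hypothetical concurrency sweep with Apache Bench (ab).
# HOST and token_all.json are assumptions; adjust to the actual server.
HOST="http://localhost:4000"

for c in 1 10 100 200 300 400; do
  n=$((c * 50))  # 50 requests per concurrent client, as in the tables above
  # Printed rather than executed; drop the `echo` to actually fire requests.
  echo "ab -n $n -c $c -p token_all.json -T application/json $HOST/api/admin/token.all"
done
```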
Cross-machine result:
- Ping latency 65ms
- Was able to do 200 requests concurrently with 170ms response time (including network latency)
- Throughput 1,000 req/s
- After 200 concurrency, response time & throughput dropped significantly
- ☝️ Test infra limitation. A clean Phoenix project gets similar results.
Same-machine result:
Other observations:
Summarized data:
(The choice of concurrency and reqs/concurrency values is ugly, but again, this is a quick & dirty load test to get an initial feeling.)
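Turning raw `ab` reports into rows like the summarized data above only takes a small amount of scripting. A minimal sketch (the report text below is fabricated for illustration, though the field labels match real `ab` output):

```shell
# Minimal sketch: pull the summary-table fields out of an `ab` report.
# The report text here is fabricated; real ab output uses the same labels.
report='Concurrency Level:      100
Complete requests:      5000
Requests per second:    896.74 [#/sec] (mean)
Time per request:       111.52 [ms] (mean)'

concurrency=$(printf '%s\n' "$report" | awk '/Concurrency Level:/ {print $3}')
reqs_per_sec=$(printf '%s\n' "$report" | awk '/Requests per second:/ {print $4}')
mean_ms=$(printf '%s\n' "$report" | awk '/Time per request:/ {print $4; exit}')

# Emit one CSV row per run: concurrency, reqs/second, mean response time (ms)
echo "$concurrency,$reqs_per_sec,$mean_ms"  # → 100,896.74,111.52
```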