omid opened this issue 4 years ago
Well, it's really hard to say which one is better :)
But for sure Vegeta can produce way more than the rate I've already configured via `-rate`.
The point is that I'm increasing the rate gradually instead of running a stress test against them.
Let's see... locally I couldn't get any good results.
I've decided to open this issue again.
I still insist that Vegeta is not suitable! For example, the developer here says "100000 req per second seems a bit too much ..."
I've tested the application I've developed. It can easily handle more than a million requests in 10 seconds.
```
tail wrk.out
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:41290 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:40282 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:41482 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:41002 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:41690 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:40794 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:40922 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
[2020-05-01T16:17:49Z INFO actix_web::middleware::logger] 127.0.0.1:40474 "POST / HTTP/1.1" 200 0 "-" "-" 0.000003
wc -l wrk.out
1783129 wrk.out
```
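That line count alone gives a quick sanity check on the throughput claim; assuming the run above really was a 10-second window, a back-of-envelope division lands well above 100K req/s:

```shell
# wc -l reported 1,783,129 log lines; over a ~10 s window
# (my assumption from the run above) that works out to
# roughly 178K requests per second.
total=1783129
secs=10
echo "$((total / secs)) req/s"   # → 178312 req/s
```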
If I ask vegeta to send just 100,000 requests, it generates a lot of errors, like:
```
"errors": [
"Post \"http://127.0.0.1:8080\": dial tcp 0.0.0.0:0->127.0.0.1:8080: bind: address already in use",
"Post \"http://127.0.0.1:8080\": http: server closed idle connection",
"Post \"http://127.0.0.1:8080\": EOF",
"408 Request Timeout",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:52115->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:35168->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:55506->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:54178->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:48347->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:46557->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:58955->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:41787->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:38945->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:41225->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:48207->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:37819->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:42139->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:32999->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:33443->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:39699->127.0.0.1:8080: read: connection reset by peer",
"Post \"http://127.0.0.1:8080\": read tcp 127.0.0.1:52581->127.0.0.1:8080: read: connection reset by peer"
```
But there are no errors or crashes in the application, and hardly any CPU usage!
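For what it's worth, the "bind: address already in use" errors above look like client-side ephemeral port exhaustion rather than a server problem. A rough count, assuming the default Linux ephemeral port range and no connection reuse:

```shell
# Assumption: default Linux ephemeral range 32768-60999 and a client
# opening a fresh TCP connection per request. Closed sockets linger in
# TIME_WAIT (60 s by default), so the local port pool runs dry well
# before 100,000 requests complete -- surfacing as
# "bind: address already in use" on the load generator's side.
low=32768
high=60999
echo "usable ephemeral ports: $((high - low + 1))"   # → 28232
echo "requests in the burst:  100000"
```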
I think instead of increasing the number of reqs every other day, you can simply use the wrk command to benchmark applications and find their maximum throughput (the limit) plus the 99th-percentile latency.
And on the other hand, if one day you get to 100K reqs/s, you cannot generate that with Vegeta as far as I can see.
When I use Vegeta locally, it doesn't use much CPU (I know about the `-cpus=1` option), and my webserver doesn't use any CPU either.
On the other hand, when I try wrk, for example with 1 thread, both wrk and my webserver use a fair amount of CPU!
I think replacing it with a better tool can lead to different results.
The wrk command can be something like:

```
wrk -t1 -c500 -d45s --latency -s script.lua http://localhost:8080
```

And the content of script.lua will be like:

```lua
wrk.method = "POST"
wrk.body = "8"
```
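For a like-for-like comparison, a "go as fast as you can" run could also be expressed in Vegeta itself. This is only a sketch, assuming a recent Vegeta (v12+) where `-rate 0` disables pacing and `-max-workers` caps concurrency; the body file name is mine:

```shell
# Hypothetical Vegeta counterpart of the wrk invocation above.
# -rate 0 removes the fixed pacing (attack as fast as workers allow),
# and -max-workers 500 roughly mirrors wrk's -c500 connection cap.
printf '8' > body.txt

if command -v vegeta >/dev/null 2>&1; then
  echo "POST http://localhost:8080" |
    vegeta attack -body body.txt -duration 45s -rate 0 -max-workers 500 |
    vegeta report
else
  echo "vegeta not installed; skipping the attack"
fi
```

Whether that closes the gap with wrk on a single machine is exactly the open question of this issue.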