waghanza closed this issue 4 years ago.
I agree.
@ioquatix Some result using roda
| | Average | 50th percentile | 90th percentile | 99th percentile | 99.9th percentile | Standard deviation |
|---|---|---|---|---|---|---|
| falcon (0.30.0) | 68.20 ms | 36.83 ms | 58.73 ms | 1200.04 ms | 2599.13 ms | 206621.00 |
| puma (3.12.1) | 1.56 ms | 0.13 ms | 1.03 ms | 31.12 ms | 129.73 ms | 5955.33 |
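For reference, percentile columns like the ones above can be derived from raw per-request latencies. A minimal, hypothetical Ruby sketch (the sample latencies and the nearest-rank method are illustrative, not the benchmarker's actual implementation):

```ruby
# Hypothetical sketch: deriving average / percentile columns like the
# table above from raw per-request latencies (in ms).
def percentile(sorted, pct)
  # Nearest-rank method on an already-sorted array.
  rank = (pct / 100.0 * sorted.length).ceil - 1
  sorted[[rank, 0].max]
end

latencies = [1.2, 0.8, 3.4, 0.5, 120.0, 2.2, 0.9, 1.1].sort
average = latencies.sum / latencies.length

puts format("avg=%.2f p50=%.2f p99=%.2f",
            average, percentile(latencies, 50), percentile(latencies, 99))
# → avg=16.26 p50=1.10 p99=120.00
```

Note how a single 120 ms outlier dominates both the average and the tail percentiles while the median stays low, which is exactly the shape of the falcon row above.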
BTW, I think we SHOULD NOT replace `puma`. `puma` is used as a de-facto standard.

BUT what we SHOULD do is display 2 variants for each ruby framework:

- `puma` (sync)
- `falcon` (async)

This is quite similar in the php world. The de-facto standard is apache/php-fpm or nginx/php-fpm, but swoole and php-pm are big challengers (async).

What do you think?
Something looks very wrong with those results, can you tell me how to reproduce it locally?
Testing multiple combinations is logical.

```
shards build
bin/neph roda
bin/benchmarker roda
```
However, I think the gap is related to docker.
Some raw results (using `wrk --latency http://0.0.0.0:3000` on an i5, 4 CPU cores / 8 GB RAM):

```
$ wrk --latency http://0.0.0.0:3000
Running 10s test @ http://0.0.0.0:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   491.12us    1.33ms   36.76ms   96.11%
    Req/Sec    17.83k     2.21k    22.25k    65.50%
  Latency Distribution
     50%  234.00us
     75%  401.00us
     90%  745.00us
     99%    5.64ms
  355002 requests in 10.01s, 21.33MB read
Requests/sec:  35475.56
Transfer/sec:      2.13MB
```

```
$ wrk --latency http://0.0.0.0:3000
Running 10s test @ http://0.0.0.0:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   436.26us    0.91ms   23.55ms   95.58%
    Req/Sec    16.46k     2.64k    33.11k    72.14%
  Latency Distribution
     50%  288.00us
     75%  360.00us
     90%  508.00us
     99%    4.03ms
  329077 requests in 10.10s, 18.83MB read
Requests/sec:  32592.25
Transfer/sec:      1.86MB
```
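The latency figures wrk reports above boil down to timing each request with a monotonic clock. A minimal Ruby sketch of that idea (`time_ms` and the `sleep` stand-in workload are illustrative, not wrk's implementation):

```ruby
# Illustrative sketch: the kind of per-request timing wrk aggregates.
# A monotonic clock is used because it is immune to wall-clock
# adjustments (NTP, DST) during the run.
def time_ms
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield
  (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
end

# Stand-in workload: sleep ~1 ms instead of issuing a real HTTP request.
samples = Array.new(5) { time_ms { sleep 0.001 } }

puts samples.length                       # → 5
puts samples.all? { |ms| ms >= 1.0 }      # → true (sleep never undershoots)
```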
I'm convinced that `falcon` is very useful, but I'd be quite unsatisfied if it were presented as a replacement.

I'll have to figure out how to implement the variants :stuck_out_tongue:
In the data you give above, `falcon` has lower overall latency, yet in the results from `roda` the latency is atrocious - what's going on?
I don't know, the code is the same. BUT those results are from a workstation machine, not an isolated server, so there's probably a really big gap :stuck_out_tongue:
If `falcon` performance is worse than `puma` by more than a few percent, either there is a serious bug or your benchmark is wrong. Please tell me if you find such a situation. Please let me know when you collect data running on the same system 👍
@ioquatix I have just seen that I get a broken pipe. Perhaps you could help :stuck_out_tongue:

- `cd` into `ruby/roda`
- replace `puma` with `falcon` in the Gemfile
- `bundle install && bundle exec falcon serve --bind http://0.0.0.0:3000 --reuse-port --count $(nproc)`
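For anyone reproducing this: falcon (like puma) boots the app from a rackup file. A minimal, hypothetical `config.ru` — a bare Rack app standing in for the benchmark's actual roda app, which isn't shown in this thread:

```ruby
# config.ru -- hypothetical minimal Rack app (stand-in for the real roda app).
# A Rack app is any object responding to #call(env) and returning
# [status, headers, body]; no gems are needed to define one.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["Hello, world!"]]
end

# `run` is injected by the rackup DSL (puma and falcon both use it);
# guard it so this file can also be executed directly as a plain script.
run app if respond_to?(:run)

# Direct invocation shows the Rack triple the server would serialize:
status, _headers, body = app.call({})
puts status                        # → 200
body.each { |chunk| puts chunk }   # → Hello, world!
```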
Don't use `--reuse-port`. I will try it out some time this week.
Same `Errno::EPIPE: Broken pipe`. How can I help?
Run with `falcon --verbose` and give me some log output.
Is there any log file I can upload to https://gist.github.com/ or somewhere else?
It logs everything to the console, but if there is no obvious issue, please try prepending `CONSOLE_LOG_LEVEL=debug` to `falcon ...`. It will generate a lot of data.
I'm on ruby `2.6.2p47`, and I have:

```
11.44s: <Async::Task:0x2b0d8c1fe3c4 GET / from #<Addrinfo: 127.0.0.1:50588 TCP> failed>
 | Errno::EPIPE: Broken pipe
 | → /usr/share/ruby/socket.rb:456 in `__write_nonblock'
 |   /usr/share/ruby/socket.rb:456 in `write_nonblock'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/generic.rb:208 in `async_send'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/generic.rb:54 in `block in wrap_blocking_method'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/generic.rb:148 in `write'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/stream.rb:155 in `flush'
 |   /usr/share/gems/gems/protocol-http1-0.5.0/lib/protocol/http1/connection.rb:230 in `write_empty_body'
 |   /usr/share/gems/gems/protocol-http1-0.5.0/lib/protocol/http1/connection.rb:308 in `write_body'
 |   /usr/share/gems/gems/protocol-http1-0.5.0/lib/protocol/http1/connection.rb:135 in `write_response'
 |   /usr/share/gems/gems/async-http-0.40.3/lib/async/http/protocol/http1/server.rb:30 in `fail_request'
 |   /usr/share/gems/gems/async-http-0.40.3/lib/async/http/protocol/http1/server.rb:49 in `rescue in next_request'
 |   /usr/share/gems/gems/async-http-0.40.3/lib/async/http/protocol/http1/server.rb:33 in `next_request'
 |   /usr/share/gems/gems/async-http-0.40.3/lib/async/http/protocol/http1/server.rb:55 in `each'
 |   /usr/share/gems/gems/async-http-0.40.3/lib/async/http/server.rb:50 in `accept'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/socket.rb:99 in `block in accept_each'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/socket.rb:137 in `block in accept'
 |   /usr/share/gems/gems/async-1.17.1/lib/async/task.rb:204 in `block in make_fiber'
 | Caused by Errno::ECONNRESET: Connection reset by peer
 | → /usr/share/ruby/socket.rb:452 in `__read_nonblock'
 |   /usr/share/ruby/socket.rb:452 in `read_nonblock'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/generic.rb:208 in `async_send'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/generic.rb:61 in `block in wrap_blocking_method'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/stream.rb:224 in `fill_read_buffer'
 |   /usr/share/gems/gems/async-io-1.23.1/lib/async/io/stream.rb:110 in `read_until'
```
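For context, `Errno::EPIPE` here means the client hung up before the server finished writing the response, which is common when a load generator like wrk tears down its connections at the end of a run. A self-contained stdlib sketch reproducing the same class of error (the loopback sockets and payloads are illustrative, not what falcon does internally):

```ruby
require "socket"

# Reproduces the class of error in the trace above: the peer disconnects,
# and subsequent writes to the dead connection raise EPIPE / ECONNRESET.
server = TCPServer.new("127.0.0.1", 0)          # port 0 = any free port
client = TCPSocket.new("127.0.0.1", server.addr[1])
conn   = server.accept

client.close                                    # peer leaves before we respond

error = nil
begin
  conn.write("x")  # usually still succeeds: the bytes sit in kernel buffers
  sleep 0.1        # give the peer's RST segment time to arrive
  conn.write("x")  # now the broken connection is noticed
rescue Errno::EPIPE, Errno::ECONNRESET => e
  error = e
end

conn.close
server.close

puts error ? "write failed: #{error.class}" : "no error"
```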
Everything seems to be ok, until a certain point.
I feel like this is a metaphor for life.
Closing in favor of https://github.com/the-benchmarker/web-frameworks/issues/1481
Hi,

`puma` is well-known in the `ruby` world. However, `puma` is synchronous. It could be interesting to also compare `async` servers (with EventMachine).

It would be relevant to check performance on:

The more performant implementation SHOULD be used.

Regards,