the-benchmarker / web-frameworks

Which is the fastest web framework?

Make Ruby use all CPU cores (via Puma workers) #193

Closed: AlexWayfer closed this 6 years ago

AlexWayfer commented 6 years ago

Related to #69

It starts multiple workers for each framework I tested.

/cc @OvermindDL1

OvermindDL1 commented 6 years ago

Whooo this is SO much nicer! Merge it I say! :-D

╰─➤ tools/stats.exs -w 1 -d 3 _
Total Cores: 16 Concurrent Connections: 1000 Threads: 10 Warmup: 1 seconds Duration: 3 seconds

Processing servers:

Processing: bin/server_cpp_evhtp
Processing: bin/server_crystal_router_cr
Processing: bin/server_go_fasthttprouter
Processing: bin/server_nim_mofuw
Processing: bin/server_python_flask.py
Processing: bin/server_python_japronto
Processing: bin/server_python_sanic
Processing: bin/server_python_tornado
Processing: bin/server_ruby_rack-routing
Processing: bin/server_ruby_rails
Processing: bin/server_ruby_roda
Processing: bin/server_ruby_sinatra
Processing: bin/server_rust_iron
Processing: bin/server_rust_nickel

Path  URL  Errors  Total Requests  Requests/s  Total Throughput  Throughput/s  Req/s (Avg / Stdev / Max / +/-)  Latency (Avg / Stdev / Max / +/-)  Latency percentiles (50% / 75% / 90% / 99%)
bin/server_cpp_evhtp http://127.0.0.1:3000/ 0 1915204 619416.03 116.89MB 37.81MB 63.87k 17.23k 109.11k 76.25% 2.48ms 7.04ms 217.21ms 95.52% 680.00us 2.77ms 6.45ms 15.48ms
bin/server_cpp_evhtp http://127.0.0.1:3000/user 0 1791024 586290.8 109.32MB 35.78MB 59.60k 15.77k 110.54k 76.33% 2.11ms 2.70ms 32.31ms 87.17% 742.00us 2.83ms 5.61ms 12.58ms
bin/server_cpp_evhtp http://127.0.0.1:3000/user/0 0 1682113 551716.5 150.79MB 49.46MB 56.08k 12.34k 95.52k 76.00% 2.48ms 7.42ms 213.67ms 97.18% 792.00us 2.94ms 5.78ms 13.47ms
bin/server_crystal_router_cr http://127.0.0.1:3000/ 0 185364 60984.17 10.96MB 3.61MB 6.21k 586.13 7.34k 77.00% 16.05ms 4.33ms 220.70ms 88.98% 13.47ms 18.79ms 20.83ms 27.40ms
bin/server_crystal_router_cr http://127.0.0.1:3000/user 0 168583 55498.11 9.97MB 3.28MB 5.65k 393.75 6.37k 79.33% 17.63ms 4.24ms 227.27ms 86.35% 17.01ms 20.77ms 21.51ms 28.45ms
bin/server_crystal_router_cr http://127.0.0.1:3000/user/0 0 179364 59056.97 15.74MB 5.18MB 6.01k 725.26 7.01k 85.67% 15.98ms 4.64ms 219.83ms 82.83% 14.06ms 18.13ms 21.11ms 28.18ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/ 0 1403437 459125.07 124.47MB 40.72MB 46.68k 8.09k 76.29k 73.33% 2.07ms 3.83ms 206.54ms 95.54% 1.42ms 2.37ms 4.13ms 8.98ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/user 0 1341162 437678.77 118.95MB 38.82MB 44.69k 7.88k 84.72k 75.33% 2.16ms 4.39ms 205.75ms 96.92% 1.54ms 2.42ms 4.17ms 9.06ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/user/0 0 1279873 413075.99 200.18MB 64.61MB 42.59k 7.64k 70.56k 74.00% 2.14ms 1.96ms 33.10ms 87.72% 1.55ms 2.58ms 4.46ms 9.82ms
bin/server_nim_mofuw http://127.0.0.1:3000/ 0 1502354 485667.61 196.29MB 63.45MB 49.98k 15.97k 93.76k 71.33% 2.58ms 3.15ms 33.82ms 85.73% 0.93ms 3.74ms 6.96ms 13.88ms
bin/server_nim_mofuw http://127.0.0.1:3000/user 0 1475906 477344.89 192.83MB 62.37MB 49.01k 14.09k 88.18k 74.67% 2.61ms 3.17ms 29.60ms 85.70% 0.96ms 3.81ms 7.00ms 13.98ms
bin/server_nim_mofuw http://127.0.0.1:3000/user/0 0 1360850 439085.71 216.73MB 69.93MB 45.40k 12.42k 89.37k 77.52% 2.71ms 3.22ms 34.54ms 85.80% 1.08ms 3.94ms 7.10ms 14.25ms
bin/server_python_flask.py http://127.0.0.1:3000/ 0 4871 1576.45 727.80KB 235.54KB 242.30 209.90 0.87k 64.43% 82.20ms 151.24ms 1.73s 94.96% 46.62ms 56.23ms 70.87ms 660.80ms
bin/server_python_flask.py http://127.0.0.1:3000/user 0 5757 1877.18 860.18KB 280.48KB 240.38 262.58 0.89k 79.22% 80.89ms 166.32ms 1.74s 95.80% 50.03ms 52.24ms 66.29ms 900.42ms
bin/server_python_flask.py http://127.0.0.1:3000/user/0 0 0 0.0 0.00B 0.00B 0.00 0.00 0.00 -nan% 0.00us 0.00us 0.00us -nan% 0.00us 0.00us 0.00us 0.00us
bin/server_python_japronto http://127.0.0.1:3000/ 0 301607 99476.34 22.72MB 7.49MB 10.10k 548.99 12.30k 81.00% 9.51ms 3.56ms 234.96ms 99.65% 9.68ms 10.17ms 10.30ms 10.44ms
bin/server_python_japronto http://127.0.0.1:3000/user 0 296722 98061.23 22.36MB 7.39MB 9.94k 471.11 10.93k 61.00% 9.94ms 1.98ms 227.87ms 99.68% 9.83ms 10.47ms 10.58ms 10.96ms
bin/server_python_japronto http://127.0.0.1:3000/user/0 0 272252 89132.9 28.30MB 9.27MB 9.12k 1.07k 12.69k 74.00% 11.64ms 20.19ms 444.26ms 98.90% 10.70ms 11.35ms 11.56ms 56.40ms
bin/server_python_sanic http://127.0.0.1:3000/ 0 584984 189886.28 66.39MB 21.55MB 19.50k 5.09k 33.83k 70.23% 6.10ms 6.44ms 83.72ms 89.29% 3.93ms 7.62ms 13.02ms 32.96ms
bin/server_python_sanic http://127.0.0.1:3000/user 0 536750 173398.43 60.91MB 19.68MB 17.95k 5.13k 32.37k 71.24% 6.49ms 6.06ms 65.50ms 85.41% 4.64ms 8.60ms 14.33ms 29.16ms
bin/server_python_sanic http://127.0.0.1:3000/user/0 0 549114 177289.28 78.03MB 25.19MB 18.39k 5.55k 33.38k 65.22% 6.31ms 6.27ms 94.57ms 88.74% 4.83ms 8.46ms 13.34ms 30.69ms
bin/server_python_tornado http://127.0.0.1:3000/ 0 6604 2178.92 1.22MB 412.80KB 226.46 144.56 818.00 77.39% 268.63ms 96.20ms 693.13ms 74.33% 294.89ms 312.66ms 375.56ms 457.21ms
bin/server_python_tornado http://127.0.0.1:3000/user 0 6393 2062.26 0.88MB 290.01KB 346.89 210.48 1.04k 71.27% 138.55ms 57.58ms 1.02s 68.23% 170.47ms 171.06ms 183.63ms 279.86ms
bin/server_python_tornado http://127.0.0.1:3000/user/0 0 5759 1903.42 1.23MB 416.37KB 243.26 163.92 0.86k 73.13% 134.72ms 49.81ms 433.38ms 74.11% 155.70ms 156.40ms 165.30ms 248.39ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/ 0 229570 74320.88 8.32MB 2.69MB 9.57k 9.74k 42.34k 84.58% 2.73ms 2.94ms 86.28ms 89.80% 1.94ms 3.53ms 5.72ms 13.63ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/user 0 193156 62376.7 7.00MB 2.26MB 8.15k 7.71k 33.42k 75.95% 3.06ms 2.84ms 45.90ms 83.67% 2.30ms 4.13ms 6.53ms 13.22ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/user/0 0 186350 60160.99 12.08MB 3.90MB 6.90k 6.43k 32.16k 85.19% 3.33ms 3.16ms 55.87ms 86.99% 2.46ms 4.38ms 7.05ms 15.29ms
bin/server_ruby_rails http://127.0.0.1:3000/ 0 238386 77113.43 8.64MB 2.79MB 8.84k 11.97k 46.53k 78.52% 2.57ms 2.72ms 84.11ms 89.55% 1.84ms 3.33ms 5.39ms 12.54ms
bin/server_ruby_rails http://127.0.0.1:3000/user 0 202808 65467.04 7.35MB 2.37MB 9.89k 9.54k 45.58k 80.49% 2.92ms 2.80ms 45.40ms 86.90% 2.14ms 3.96ms 6.36ms 12.94ms
bin/server_ruby_rails http://127.0.0.1:3000/user/0 0 183582 59314.25 11.91MB 3.85MB 6.82k 4.94k 27.51k 70.63% 3.19ms 2.88ms 50.54ms 83.82% 2.42ms 4.23ms 6.63ms 13.48ms
bin/server_ruby_roda http://127.0.0.1:3000/ 0 240758 78043.62 8.72MB 2.83MB 10.05k 10.84k 43.80k 80.83% 2.31ms 2.10ms 35.99ms 83.01% 1.76ms 3.13ms 4.84ms 9.82ms
bin/server_ruby_roda http://127.0.0.1:3000/user 0 200845 65161.79 7.28MB 2.36MB 7.44k 6.06k 25.84k 66.30% 2.96ms 2.84ms 66.46ms 87.98% 2.18ms 3.94ms 6.22ms 13.45ms
bin/server_ruby_roda http://127.0.0.1:3000/user/0 0 188592 60946.93 12.23MB 3.95MB 9.01k 7.94k 38.79k 69.86% 3.10ms 2.87ms 55.54ms 85.19% 2.32ms 4.07ms 6.50ms 13.66ms
bin/server_ruby_sinatra http://127.0.0.1:3000/ 0 226440 73303.85 8.21MB 2.66MB 15.62k 10.04k 45.15k 67.59% 2.47ms 2.38ms 40.28ms 88.23% 1.77ms 3.22ms 5.21ms 11.42ms
bin/server_ruby_sinatra http://127.0.0.1:3000/user 0 207229 66864.33 7.51MB 2.42MB 11.57k 12.93k 60.34k 83.24% 2.95ms 2.94ms 50.76ms 87.95% 2.07ms 3.89ms 6.43ms 14.21ms
bin/server_ruby_sinatra http://127.0.0.1:3000/user/0 0 184330 59519.45 11.95MB 3.86MB 10.58k 7.72k 33.76k 67.82% 3.11ms 2.87ms 57.76ms 85.51% 2.29ms 4.09ms 6.60ms 13.84ms
bin/server_rust_iron http://127.0.0.1:3000/ 0 222956 73721.3 8.08MB 2.67MB 12.82k 8.99k 45.89k 65.52% 2.69ms 2.64ms 57.00ms 88.30% 1.92ms 3.53ms 5.72ms 12.66ms
bin/server_rust_iron http://127.0.0.1:3000/user 0 204999 66175.99 7.43MB 2.40MB 11.54k 12.02k 47.60k 76.97% 3.03ms 2.90ms 49.78ms 87.03% 2.21ms 4.05ms 6.53ms 13.68ms
bin/server_rust_iron http://127.0.0.1:3000/user/0 0 190490 61721.53 12.35MB 4.00MB 8.17k 10.28k 49.77k 83.69% 3.14ms 3.01ms 50.35ms 87.70% 2.28ms 4.17ms 6.67ms 14.24ms
bin/server_rust_nickel http://127.0.0.1:3000/ 0 225850 74729.07 8.18MB 2.71MB 11.14k 10.78k 45.28k 80.79% 2.63ms 2.68ms 50.30ms 88.44% 1.84ms 3.46ms 5.70ms 12.97ms
bin/server_rust_nickel http://127.0.0.1:3000/user 0 195741 64791.45 7.09MB 2.35MB 13.52k 10.28k 40.66k 61.38% 3.03ms 2.84ms 53.71ms 86.54% 2.25ms 4.01ms 6.35ms 13.84ms
bin/server_rust_nickel http://127.0.0.1:3000/user/0 0 193269 62452.38 12.53MB 4.05MB 7.20k 8.98k 38.32k 82.16% 3.14ms 3.10ms 74.28ms 88.13% 2.25ms 4.16ms 6.72ms 14.78ms

Rankings

Ranking by Average Requests per second:

  1. 585807 req/sec : bin/server_cpp_evhtp
  2. 467366 req/sec : bin/server_nim_mofuw
  3. 436626 req/sec : bin/server_go_fasthttprouter
  4. 180191 req/sec : bin/server_python_sanic
  5. 95556 req/sec : bin/server_python_japronto
  6. 68050 req/sec : bin/server_ruby_roda
  7. 67324 req/sec : bin/server_rust_nickel
  8. 67298 req/sec : bin/server_ruby_rails
  9. 67206 req/sec : bin/server_rust_iron
  10. 66562 req/sec : bin/server_ruby_sinatra
  11. 65619 req/sec : bin/server_ruby_rack-routing
  12. 58513 req/sec : bin/server_crystal_router_cr
  13. 2048 req/sec : bin/server_python_tornado
  14. 1151 req/sec : bin/server_python_flask.py
waghanza commented 6 years ago

Hi @AlexWayfer,

Thanks for your PR.

I have a question, however: why use the number of cores to determine the worker count?

I have always used about 2 workers per core.

`echo "$(nproc) * 2 + 1" | bc -l`

Regards,

AlexWayfer commented 6 years ago

> I have a question, however: why use the number of cores to determine the worker count?

Here is the source: https://github.com/tbrand/which_is_the_fastest/issues/190#issuecomment-383621808

> I have always used about 2 workers per core.

What's the rationale for that?

OvermindDL1 commented 6 years ago

At least in the C++/Nim/Go/Rust tests I did, going higher than the core count slowed them down. If the Ruby implementations are often waiting on sockets, then more workers might help; if they are mostly not waiting, it won't.

Let me test this, hold on...

OvermindDL1 commented 6 years ago

Puma isn't dying properly! Ack! The records above might be inaccurate... Why can't Ruby tooling follow proper unix process conventions... >.<

Brutally killing all puma on wrk stop now, blehg...

Either way, on a 16-core system (workers : req/sec):

1.  5626 req/sec : bin/server_ruby_roda
2.  11740 req/sec : bin/server_ruby_roda
4.  34670 req/sec : bin/server_ruby_roda
8.  58687 req/sec : bin/server_ruby_roda
16. 84807 req/sec : bin/server_ruby_roda
24. 83468 req/sec : bin/server_ruby_roda
32. 79125 req/sec : bin/server_ruby_roda

So yeah, going higher than the core count gets slower, as expected, for anything that waits on code rather than on events.

I really wish all of the servers took an argument to set their thread/worker count so they could be properly tested from the benchmarker too... >.>
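For Puma specifically, a config file can read the counts from the environment, which would let the benchmarker control them per run. A minimal sketch (the WORKERS/MIN_THREADS/MAX_THREADS variable names are hypothetical, not something this repo defines):

# puma.rb -- sketch: worker/thread counts driven by environment variables
# (the variable names here are hypothetical, not part of this repo)
require 'etc'

workers Integer(ENV.fetch('WORKERS', Etc.nprocessors))  # default: one per core
threads Integer(ENV.fetch('MIN_THREADS', 0)),
        Integer(ENV.fetch('MAX_THREADS', 16))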

OvermindDL1 commented 6 years ago

And here are the results with Puma being brutally killed (apparently when started as a 'cluster', its processes are named differently...):

╰─➤ tools/stats.exs -w 1 -d 3 _
Total Cores: 16 Concurrent Connections: 1000 Threads: 10 Warmup: 1 seconds Duration: 3 seconds

Processing servers:

Processing: bin/server_cpp_evhtp
Processing: bin/server_crystal_router_cr
Processing: bin/server_go_fasthttprouter
Processing: bin/server_nim_mofuw
Processing: bin/server_python_flask.py
Processing: bin/server_python_japronto
Processing: bin/server_python_sanic
Processing: bin/server_python_tornado
Processing: bin/server_ruby_rack-routing
Processing: bin/server_ruby_rails
Processing: bin/server_ruby_roda
Processing: bin/server_ruby_sinatra
Processing: bin/server_rust_iron
Processing: bin/server_rust_nickel

Path  URL  Errors  Total Requests  Requests/s  Total Throughput  Throughput/s  Req/s (Avg / Stdev / Max / +/-)  Latency (Avg / Stdev / Max / +/-)  Latency percentiles (50% / 75% / 90% / 99%)
bin/server_cpp_evhtp http://127.0.0.1:3000/ 0 1909312 616236.29 116.54MB 37.61MB 63.58k 19.04k 111.46k 70.00% 2.39ms 6.81ms 205.60ms 95.51% 634.00us 2.66ms 6.21ms 15.55ms
bin/server_cpp_evhtp http://127.0.0.1:3000/user 0 1814721 592491.54 110.76MB 36.16MB 60.52k 17.51k 115.46k 76.33% 2.21ms 5.67ms 216.78ms 95.00% 667.00us 2.71ms 5.60ms 12.98ms
bin/server_cpp_evhtp http://127.0.0.1:3000/user/0 0 1693548 547838.87 151.82MB 49.11MB 56.27k 13.12k 110.19k 76.00% 2.19ms 2.73ms 30.90ms 86.31% 830.00us 3.05ms 5.84ms 12.52ms
bin/server_crystal_router_cr http://127.0.0.1:3000/ 0 195334 64148.25 11.55MB 3.79MB 6.54k 412.47 7.08k 67.67% 15.32ms 3.41ms 26.98ms 68.72% 13.18ms 18.80ms 19.53ms 24.88ms
bin/server_crystal_router_cr http://127.0.0.1:3000/user 0 178486 58737.39 10.55MB 3.47MB 5.98k 672.29 7.40k 76.33% 16.24ms 4.06ms 226.20ms 70.40% 14.11ms 20.40ms 21.42ms 24.58ms
bin/server_crystal_router_cr http://127.0.0.1:3000/user/0 0 183415 60617.77 16.09MB 5.32MB 6.14k 560.91 7.27k 73.00% 16.01ms 3.43ms 34.69ms 75.19% 14.38ms 17.68ms 21.49ms 25.15ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/ 0 1389000 449029.2 123.19MB 39.83MB 46.28k 8.21k 79.80k 74.67% 2.58ms 9.25ms 209.95ms 99.19% 1.47ms 2.38ms 4.08ms 10.47ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/user 0 1336019 434944.63 118.49MB 38.58MB 44.31k 6.86k 71.04k 71.67% 2.13ms 2.10ms 201.75ms 88.90% 1.52ms 2.56ms 4.45ms 9.51ms
bin/server_go_fasthttprouter http://127.0.0.1:3000/user/0 0 1270714 411307.51 198.74MB 64.33MB 42.20k 7.95k 73.96k 74.00% 2.22ms 2.09ms 41.40ms 87.92% 1.57ms 2.68ms 4.68ms 10.12ms
bin/server_nim_mofuw http://127.0.0.1:3000/ 0 1494246 484508.18 195.23MB 63.30MB 49.67k 15.24k 88.75k 73.33% 2.65ms 3.25ms 36.11ms 86.00% 1.00ms 3.78ms 7.07ms 14.48ms
bin/server_nim_mofuw http://127.0.0.1:3000/user 0 1471192 478190.87 192.22MB 62.48MB 48.93k 14.69k 87.18k 72.33% 2.62ms 3.15ms 32.48ms 85.96% 1.02ms 3.78ms 6.91ms 13.99ms
bin/server_nim_mofuw http://127.0.0.1:3000/user/0 0 1357806 439666.57 216.25MB 70.02MB 45.06k 11.95k 86.49k 71.00% 2.82ms 3.35ms 33.56ms 85.76% 1.16ms 4.07ms 7.45ms 14.76ms
bin/server_python_flask.py http://127.0.0.1:3000/ 0 5728 1848.29 855.84KB 276.16KB 282.81 213.17 617.00 46.43% 55.79ms 78.75ms 596.62ms 95.76% 43.67ms 45.96ms 48.66ms 589.06ms
bin/server_python_flask.py http://127.0.0.1:3000/user 0 0 0.0 0.00B 0.00B 0.00 0.00 0.00 -nan% 0.00us 0.00us 0.00us -nan% 0.00us 0.00us 0.00us 0.00us
bin/server_python_flask.py http://127.0.0.1:3000/user/0 0 1479 478.71 262.87KB 85.08KB 223.83 124.20 442.00 66.15% 55.20ms 31.28ms 258.64ms 91.41% 48.56ms 50.55ms 61.94ms 252.17ms
bin/server_python_japronto http://127.0.0.1:3000/ 0 315750 103108.51 23.79MB 7.77MB 10.58k 713.46 12.53k 79.67% 9.18ms 4.21ms 231.02ms 99.82% 9.48ms 9.56ms 9.64ms 9.87ms
bin/server_python_japronto http://127.0.0.1:3000/user 0 291262 96321.96 21.94MB 7.26MB 9.76k 521.94 11.51k 76.33% 9.98ms 1.72ms 225.20ms 99.66% 10.05ms 10.63ms 10.77ms 10.97ms
bin/server_python_japronto http://127.0.0.1:3000/user/0 0 274982 90100.75 28.32MB 9.28MB 9.22k 454.19 10.73k 81.00% 10.65ms 7.75ms 232.14ms 99.59% 10.85ms 11.00ms 11.08ms 11.83ms
bin/server_python_sanic http://127.0.0.1:3000/ 0 562644 182263.39 63.85MB 20.68MB 18.80k 5.67k 44.48k 67.33% 6.15ms 5.71ms 74.42ms 87.32% 4.50ms 7.40ms 13.19ms 26.74ms
bin/server_python_sanic http://127.0.0.1:3000/user 0 507506 164665.77 57.60MB 18.69MB 16.98k 2.72k 26.16k 66.00% 6.36ms 4.75ms 80.95ms 87.09% 5.58ms 7.88ms 11.59ms 24.27ms
bin/server_python_sanic http://127.0.0.1:3000/user/0 0 527322 171198.38 74.93MB 24.33MB 17.63k 2.70k 31.38k 87.00% 5.91ms 3.76ms 57.19ms 79.57% 4.80ms 7.96ms 9.97ms 19.71ms
bin/server_python_tornado http://127.0.0.1:3000/ 0 6791 2227.02 1.26MB 421.92KB 228.21 137.80 630.00 72.39% 251.25ms 120.69ms 1.38s 81.71% 268.89ms 284.12ms 327.88ms 448.85ms
bin/server_python_tornado http://127.0.0.1:3000/user 0 6191 2058.02 0.85MB 289.41KB 282.37 182.41 767.00 69.30% 174.12ms 91.74ms 607.86ms 68.58% 215.51ms 215.99ms 228.37ms 573.13ms
bin/server_python_tornado http://127.0.0.1:3000/user/0 0 6044 1980.14 1.29MB 433.16KB 229.01 140.65 550.00 59.84% 179.26ms 87.57ms 556.22ms 69.57% 219.17ms 221.44ms 238.12ms 520.11ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/ 0 228522 74005.61 8.28MB 2.68MB 8.47k 9.92k 47.88k 83.33% 2.74ms 2.93ms 66.03ms 88.86% 1.92ms 3.58ms 5.95ms 13.97ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/user 0 194492 63295.58 7.05MB 2.29MB 7.19k 4.43k 20.70k 57.78% 3.02ms 2.74ms 72.19ms 84.92% 2.34ms 4.00ms 6.15ms 12.66ms
bin/server_ruby_rack-routing http://127.0.0.1:3000/user/0 0 180601 59652.39 11.71MB 3.87MB 6.22k 6.23k 30.52k 84.83% 3.14ms 2.94ms 74.77ms 86.95% 2.36ms 4.19ms 6.46ms 13.76ms
bin/server_ruby_rails http://127.0.0.1:3000/ 0 16793 5421.21 2.80MB 0.90MB 561.43 644.22 3.23k 83.67% 45.41ms 19.35ms 158.96ms 71.24% 43.78ms 56.33ms 70.32ms 99.93ms
bin/server_ruby_rails http://127.0.0.1:3000/user 0 17104 5546.09 2.85MB 0.93MB 571.76 423.77 1.46k 52.67% 44.52ms 19.42ms 164.77ms 71.23% 42.78ms 55.52ms 69.56ms 98.63ms
bin/server_ruby_rails http://127.0.0.1:3000/user/0 0 14626 4778.13 4.14MB 1.35MB 489.18 332.02 1.19k 58.33% 51.97ms 21.89ms 182.43ms 70.09% 50.48ms 64.78ms 80.26ms 112.97ms
bin/server_ruby_roda http://127.0.0.1:3000/ 0 280004 92620.47 16.82MB 5.56MB 10.74k 12.71k 53.51k 79.69% 2.07ms 2.20ms 43.33ms 88.73% 1.41ms 2.71ms 4.53ms 10.73ms
bin/server_ruby_roda http://127.0.0.1:3000/user 0 261214 84694.63 15.69MB 5.09MB 9.70k 9.32k 43.39k 80.00% 2.27ms 2.27ms 55.66ms 88.47% 1.63ms 2.98ms 4.83ms 11.02ms
bin/server_ruby_roda http://127.0.0.1:3000/user/0 0 234469 76114.76 20.80MB 6.75MB 10.07k 10.81k 61.28k 84.55% 2.38ms 2.33ms 92.51ms 89.22% 1.78ms 3.10ms 4.86ms 10.44ms
bin/server_ruby_sinatra http://127.0.0.1:3000/ 0 108746 35498.91 17.84MB 5.82MB 9.33k 6.24k 20.83k 58.62% 5.85ms 5.55ms 104.04ms 87.32% 4.29ms 7.67ms 12.38ms 26.85ms
bin/server_ruby_sinatra http://127.0.0.1:3000/user 0 100964 32847.72 16.56MB 5.39MB 3.72k 4.01k 16.37k 82.53% 6.45ms 6.41ms 132.35ms 88.41% 4.72ms 8.64ms 13.72ms 30.67ms
bin/server_ruby_sinatra http://127.0.0.1:3000/user/0 0 96008 31034.43 18.50MB 5.98MB 3.30k 4.34k 25.80k 81.44% 7.24ms 8.29ms 125.47ms 88.92% 4.79ms 9.58ms 16.29ms 40.33ms
bin/server_rust_iron http://127.0.0.1:3000/ 0 1075365 350804.65 76.92MB 25.09MB 45.05k 29.91k 94.92k 52.92% 257.44us 322.72us 18.68ms 94.95% 196.00us 297.00us 432.00us 1.22ms
bin/server_rust_iron http://127.0.0.1:3000/user 0 1029042 338417.72 73.60MB 24.21MB 57.48k 31.33k 97.11k 51.11% 258.97us 240.75us 17.73ms 92.98% 234.00us 304.00us 439.00us 0.89ms
bin/server_rust_iron http://127.0.0.1:3000/user/0 0 929470 303094.04 116.12MB 37.87MB 38.92k 26.74k 93.78k 55.83% 309.60us 464.11us 26.11ms 95.25% 216.00us 344.00us 543.00us 1.73ms
bin/server_rust_nickel http://127.0.0.1:3000/ 0 733747 241931.81 90.97MB 29.99MB 123.13k 9.99k 144.21k 63.33% 55.04us 31.32us 1.37ms 78.64% 47.00us 65.00us 113.00us 130.00us
bin/server_rust_nickel http://127.0.0.1:3000/user 0 807834 266672.08 100.15MB 33.06MB 90.31k 44.13k 133.86k 66.67% 54.94us 39.24us 12.01ms 87.82% 47.00us 59.00us 96.00us 109.00us
bin/server_rust_nickel http://127.0.0.1:3000/user/0 0 785753 259092.18 122.89MB 40.52MB 131.77k 5.30k 146.51k 71.67% 54.91us 31.03us 7.59ms 80.65% 44.00us 60.00us 101.00us 114.00us

Rankings

Ranking by Average Requests per second:

  1. 585522 req/sec : bin/server_cpp_evhtp
  2. 467455 req/sec : bin/server_nim_mofuw
  3. 431760 req/sec : bin/server_go_fasthttprouter
  4. 330772 req/sec : bin/server_rust_iron
  5. 255898 req/sec : bin/server_rust_nickel
  6. 172709 req/sec : bin/server_python_sanic
  7. 96510 req/sec : bin/server_python_japronto
  8. 84476 req/sec : bin/server_ruby_roda
  9. 65651 req/sec : bin/server_ruby_rack-routing
  10. 61167 req/sec : bin/server_crystal_router_cr
  11. 33127 req/sec : bin/server_ruby_sinatra
  12. 5248 req/sec : bin/server_ruby_rails
  13. 2088 req/sec : bin/server_python_tornado
  14. 775 req/sec : bin/server_python_flask.py
AlexWayfer commented 6 years ago

I didn't understand what problems you had with Puma (the benchmark tool should kill all Puma processes), but as I recall, you should kill the Puma cluster (master) process to kill all of its child worker processes.

OvermindDL1 commented 6 years ago

Similar, yeah. The issue with the unix bit is that a program should either be a daemon (a --daemon style flag; it should be closed by sending a signal or command to the server, no killing needed), or it should close when stdin closes (unless options specify otherwise, in which case closing stdin should end it without killing). Most of these servers do none of the above (not just Puma)... ^.^;
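A minimal Ruby sketch of that stdin convention (an assumption about how one could wire it up, not code from this repo): a background thread blocks reading stdin and triggers a clean shutdown on EOF.

# sketch: exit cleanly when the parent closes our stdin
# (hypothetical, not from this repo)
Thread.new do
  $stdin.read                        # blocks until stdin reaches EOF
  Process.kill('TERM', Process.pid)  # reuse the normal TERM shutdown path
end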

AlexWayfer commented 6 years ago

How do I make something like this work: CMD [ "bundle", "exec", "puma", "-p", "3000", "-e", "production", "-w $(nproc)"]?

I'm not sure that "-w $(nproc)" executes correctly.

waghanza commented 6 years ago

@AlexWayfer sure, it works, but I'm not sure about the number of cores to use

or you could even create a puma.rb config file with

Etc.nprocessors
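For example, a minimal sketch (not the PR's actual config). This also sidesteps the Dockerfile problem above: exec-form CMD never passes through a shell, so "$(nproc)" would not be expanded there, whereas Etc.nprocessors is evaluated by Ruby at boot.

# puma.rb -- minimal sketch, not the PR's actual config
require 'etc'

port        3000
environment 'production'
workers     Etc.nprocessors  # one worker per CPU core

The Dockerfile can then use plain exec form: CMD ["bundle", "exec", "puma", "-C", "puma.rb"].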
AlexWayfer commented 6 years ago

@waghanza, I don't understand what's happening:

$ sudo bin/benchmarker rack-routing
Last update: 2018-05-02
OS: Linux (version: 4.16.5-1-ARCH, arch: x86_64)
CPU Cores: 4
threads: 5, requests: 5000.0
Benchmark running ...

Then there are 6 processes (threads?):

$ ps -eLF | grep puma
root      8378  8359  8378 21    6 90185 22740   2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8422  0    6 90185 22740   2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8424  0    6 90185 22740   3 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8425  0    6 90185 22740   0 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8426  0    6 90185 22740   2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8427  0    6 90185 22740   1 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]

After 10 seconds (still Benchmark running ...) there are 11 threads:

$ ps -eLF | grep puma
root      8378  8359  8378  1   11 174675 24568  2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8422  0   11 174675 24568  3 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8424  0   11 174675 24568  0 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8425  0   11 174675 24568  3 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8426  0   11 174675 24568  2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8427  0   11 174675 24568  1 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8590 27   11 174675 24568  1 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8591 26   11 174675 24568  1 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8592 26   11 174675 24568  2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8593 27   11 174675 24568  2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
root      8378  8359  8594 26   11 174675 24568  2 22:04 pts/0    00:00:00 puma 3.11.4 (tcp://0.0.0.0:3000) [app]
OvermindDL1 commented 6 years ago

It specifies the workers, but maybe each worker has IO threads. IO threads are a common strategy in older-style frameworks for polling IO, so I'd almost bet that's what this is; and if it is, that's perfectly okay.

waghanza commented 6 years ago

@AlexWayfer I will see, but rack-routing uses rackup, not Puma (though it could be configured to)

AlexWayfer commented 6 years ago

> @AlexWayfer I will see, but rack-routing uses rackup, not Puma (though it could be configured to)

Rack is the interface between web servers (like Puma) and frameworks (like Flame). rackup is a CLI tool for starting an application with whatever server is available (WEBrick by default). So, rackup + require 'puma' = puma (or pumactl).
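For instance, a minimal config.ru sketch (a hypothetical app, not from this repo), echoing the claim above that rackup picks up Puma instead of the WEBrick default when the puma gem is available:

# config.ru -- minimal Rack app sketch (hypothetical, not from this repo);
# with the puma gem available, rackup serves it with Puma rather than WEBrick
require 'puma'

run ->(env) { [200, { 'Content-Type' => 'text/plain' }, ['Hello from Rack']] }

Started with: rackup -p 3000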

waghanza commented 6 years ago

@AlexWayfer but rackup's command-line args are not the same as Puma's

waghanza commented 6 years ago

@AlexWayfer I do not understand the output of ps -eLF | grep puma; I use ps faux | grep puma to see the process hierarchy (and so I get 9 processes => 8 workers and 1 master, if nproc is 8)

AlexWayfer commented 6 years ago

Oh, it looks good. Thank you, @waghanza. Is it ready to merge or something else is needed?

Also, I thought about symbolic links for common files (such as the Dockerfile, I guess) across the common Ruby frameworks (except Rails, I think), in order to reduce the number of places to touch for future changes.

waghanza commented 6 years ago

This is mergeable

@AlexWayfer I thought about some templating for Dockerfile creation (1 template per language), but let's discuss that on Gitter