squeaky-pl / japronto

Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
MIT License
8.61k stars · 581 forks

Benchmarks without pipelining #8

Closed channelcat closed 7 years ago

channelcat commented 7 years ago

Trying to reproduce the published results, I installed Python 3.6 and built japronto from source on a c4.2xlarge Ubuntu 16.04.1 EC2 instance.

Pipelining:

$ wrk -c 100 -t 1 -d 4 -s pipeline.lua http://localhost:8080/
Running 4s test @ http://localhost:8080/
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.92ms  510.35us   3.63ms   34.81%
    Req/Sec     1.18M     8.21k    1.19M    75.00%
  4692000 requests in 4.00s, 411.67MB read
Requests/sec: 1172674.58
Transfer/sec:    102.89MB
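For context on what the `pipeline.lua` script changes: with HTTP/1.1 pipelining, the client sends several requests in a single write instead of waiting for each response before sending the next. A minimal Python sketch of that idea (this is illustrative only, not wrk or japronto code; the request line and depth of 16 are assumptions, not taken from the actual script):

```python
# Sketch of HTTP/1.1 pipelining: concatenate several GET requests so one
# TCP write carries them all, and the server can answer them back-to-back.
PIPELINE_DEPTH = 16  # assumed depth, for illustration

def build_pipelined_payload(host: str, path: str,
                            depth: int = PIPELINE_DEPTH) -> bytes:
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    ).encode("ascii")
    # `depth` identical requests, back-to-back, in a single payload.
    return request * depth

payload = build_pipelined_payload("localhost:8080", "/")
```

This is why the pipelined numbers are so much higher: the server amortizes per-read and per-write syscall overhead across many requests.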

No Pipelining:

$ wrk -c 100 -t 1 -d 10 http://localhost:8080/
Running 10s test @ http://localhost:8080/
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   446.49us  154.19us   1.10ms   60.49%
    Req/Sec   214.30k     3.97k  216.51k    98.00%
  2132469 requests in 10.00s, 187.10MB read
Requests/sec: 213220.48
Transfer/sec:     18.71MB

With pipelining, I'm able to get 1.2M (this is awesome btw!). However, I'm not able to achieve 400k req/sec without pipelining. Can you share more of how you tested? Thanks!
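As a sanity check on the two wrk summaries above, the Requests/sec figures follow directly from the request totals and durations, and dividing them gives the pipelining speedup (the durations below are the rounded values wrk printed, so the results only approximate the reported numbers):

```python
# Recompute Requests/sec from the totals in the two wrk runs above.
pipelined_reqs, pipelined_secs = 4_692_000, 4.00   # pipelined run
plain_reqs, plain_secs = 2_132_469, 10.00          # non-pipelined run

pipelined_rps = pipelined_reqs / pipelined_secs    # ~1.17M req/s
plain_rps = plain_reqs / plain_secs                # ~213k req/s

speedup = pipelined_rps / plain_rps
print(f"pipelining speedup: {speedup:.1f}x")       # roughly 5.5x
```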

agalera commented 7 years ago

rel: https://github.com/squeaky-pl/japronto/issues/3

I have tried the pip version and also building it myself; the compiled version seems to perform better.

squeaky-pl commented 7 years ago

Hi, this looks valid, as other people are also unable to replicate this. I cannot run benchmarks myself at the moment. I suspect that when I started optimizing towards pipelining I over-optimized for it and the non-pipelined numbers suffered. I haven't looked at non-pipelined results in two weeks. I am going to track down why and when that happened, and I hope to restore non-pipelined performance.

squeaky-pl commented 7 years ago

Hi, I tried to hunt down the problem, but it looks like I was gravely mistaken. I am really sorry for the confusion; I didn't mean to misguide anyone. I have started linking to your results now. The performance on a revision from before I started working on pipelining is close enough to the numbers you obtained. I was either running the non-pipelined benchmarks on a different kind of hardware (this happened to me several times by mistake when launching with the AWS wizard) or I simply read the results wrong when tired.

I decided to focus on non-pipelined performance for the next release and I will advertise non-pipelined results in the next round of benchmarks.

I am closing this in favor of https://github.com/squeaky-pl/japronto/issues/21 which has a nice graph.

scheung38 commented 5 years ago

I don't see the 1.2 million Requests/sec:

$ wrk -t1 -c100 -d2  http://0.0.0.0:5000
Running 2s test @ http://0.0.0.0:5000
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.40ms  552.67us   9.76ms   79.80%
    Req/Sec    39.41k   844.44    40.80k    71.43%
  82287 requests in 2.10s, 7.22MB read
Requests/sec:  39126.12
Transfer/sec:      3.43MB

$ wrk -t1 -c100 -d2 -s japronto/misc/pipeline.lua http://0.0.0.0:5000
Running 2s test @ http://0.0.0.0:5000
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    22.40ms   12.21ms  50.93ms   58.20%
    Req/Sec    57.30k     6.06k   65.45k    55.00%
  115224 requests in 2.03s, 10.11MB read
Requests/sec:  56865.53
Transfer/sec:      4.99MB

macOS 10.14.3, 2.7 GHz CPU, 16 GB RAM