squeaky-pl / japronto

Screaming-fast Python 3.5+ HTTP toolkit integrated with pipelining HTTP server based on uvloop and picohttpparser.
MIT License

Possible worker management issue #47

Open lvalladares opened 7 years ago

lvalladares commented 7 years ago

Hello!

I don't know if this is a real issue or I'm missing something, but I will leave it here just in case.

I've been doing some benchmarks between Falcon, Sanic and Japronto just to see the differences and get a picture of the current microframework speed landscape, and while benchmarking I noticed a weird (at least weird to me) behaviour in Japronto. Here are the results of my tests:

20 threads, 1000 connections, 60 seconds, 1 worker
Sanic:      RPS 13110.83
Japronto:   RPS 39232.77

40 threads, 2000 connections, 60 seconds, 4 workers
Sanic:      RPS 20136.42    timeouts 129
Japronto:   RPS 35381.04    timeouts 0

40 threads, 10000 connections, 60 seconds, 4 workers
Sanic:      RPS 19633.35    timeouts 1843
Japronto:   RPS 36713.26    timeouts 87

The weird thing here is the stability of RPS in Japronto: with 1 worker I get 39k RPS and with 4 workers I get 35k RPS, whereas in Sanic with 1 worker I get 13k and with 4 workers 20k.

After that I did one more test with some silly numbers. Here is the command: wrk -t40 -c40000 -d60s http://localhost:8400/

I ran it against the same Japronto code twice, once with 1 worker and once with 4 workers. Here are the results:

4 Workers

  40 threads and 40000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   219.26ms  138.17ms   2.00s    78.04%
    Req/Sec     1.11k     1.15k   32.92k    87.21%
  1159322 requests in 1.07m, 107.24MB read
  Socket errors: connect 11553, read 941, write 0, timeout 6979
Requests/sec:  18114.80
Transfer/sec:      1.68MB

1 Worker

  40 threads and 40000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   534.18ms  157.92ms   2.00s    75.33%
    Req/Sec     1.07k    703.97   44.91k    77.94%
  1690676 requests in 1.14m, 156.40MB read
  Socket errors: connect 10815, read 0, write 0, timeout 5712
Requests/sec:  24667.57
Transfer/sec:      2.28MB

I ran all the tests inside the same Docker container (I was switching between apps inside it), so I don't think this is related to external factors.
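The app code is not pasted above; a minimal Japronto app along these lines would match the setup described (the handler body and worker_num value are assumptions on my part, only the port comes from the wrk command above):

# Minimal sketch of a Japronto benchmark target, not the exact code
# used in these tests.
from japronto import Application


def hello(request):
    return request.Response(text='Hello world!')


app = Application()
app.router.add_route('/', hello)
# worker_num=1 for the single-worker runs, worker_num=4 for the rest;
# port 8400 matches the wrk command above.
app.run(port=8400, worker_num=4)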

squeaky-pl commented 7 years ago

First, doing benchmarks with Docker is a bad idea; I was also getting unstable results with Docker. Remember that Docker adds a seccomp filter and several other layers between userspace and the kernel.

I never tested with such high thread and connection values. It's entirely possible that it behaves like that. Do you really have 40 cores on the machine you were testing on? If you saturate your CPU before saturating I/O, I wouldn't be surprised to see those values.
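A quick sanity check (a minimal sketch, assuming the benchmark runs on Linux) is to print what the process can actually see from inside the container:

import os

# Logical CPUs the kernel reports for the machine
print("os.cpu_count():", os.cpu_count())
# Cores this process is actually allowed to run on
# (reflects any cpuset/affinity limits applied to the container)
print("usable cores:", len(os.sched_getaffinity(0)))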

lvalladares commented 7 years ago

@squeaky-pl I started with reasonable concurrency numbers and only then moved on to unreasonable ones. I know the environment where I ran my tests was not optimal, but what seems weird to me is that Sanic shows a roughly "linear" scaling of RPS with workers (1 worker 13k RPS, 4 workers 20k RPS), while Japronto gives 39k RPS with 1 worker and 35k with 4 workers.

Anyway, I will repeat the test with lower concurrency when I get some time and come back here to report my results.

squeaky-pl commented 7 years ago

Remember that Sanic has more Python code in it, so it won't saturate the CPU as fast as Japronto does. Sanic spends more time waiting for things to happen, so it leaves more room for wrk to take its share of CPU time on the machine.
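One way to reduce that effect (a sketch of a possible setup, not something tested here) is to pin the server and wrk to disjoint sets of cores so they stop competing for CPU time, for example by setting the server's affinity before calling run():

import os

# Illustrative only: restrict the server process (and the workers it
# forks, which inherit the affinity mask) to cores 0 and 1, leaving
# the remaining cores for wrk.
os.sched_setaffinity(0, {0, 1})

wrk can then be kept on the other cores with something like: taskset -c 2-3 wrk -t2 -c1000 -d60s http://localhost:8400/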