Closed dropwhile closed 8 years ago
Hmm, it seems to work here, though I only ran a simple test. How can I reproduce it?
Here is the test I did, btw:
1) I launched gunicorn like this:
(gunicorn_py2)[examples] gunicorn -w 3 --threads=10 test:app
[2015-02-06 07:55:00 +0100] [1494] [INFO] Starting gunicorn 19.2.0
[2015-02-06 07:55:00 +0100] [1494] [INFO] Listening at: http://127.0.0.1:8000 (1494)
[2015-02-06 07:55:00 +0100] [1494] [INFO] Using worker: threads
[2015-02-06 07:55:00 +0100] [1497] [INFO] Booting worker with pid: 1497
[2015-02-06 07:55:00 +0100] [1498] [INFO] Booting worker with pid: 1498
[2015-02-06 07:55:00 +0100] [1499] [INFO] Booting worker with pid: 1499
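The thread never shows the test:app module itself; the following is a hypothetical minimal WSGI stand-in, consistent with the 14-byte text/plain "Hello, World!" body in the ab and curl output:

```python
# test.py -- hypothetical stand-in for the test:app module used in the
# commands in this thread (the real file isn't shown). The body matches
# the "Document Length: 14 bytes" and text/plain content type reported.
def app(environ, start_response):
    body = b"Hello, World!\n"  # 14 bytes, including the trailing newline
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```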
2) and test keepalive with ab:
$ ab -c 100 -n 10000 -k http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software: gunicorn/19.2.0
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 14 bytes
Concurrency Level: 100
Time taken for tests: 3.707 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 2140000 bytes
HTML transferred: 140000 bytes
Requests per second: 2697.95 [#/sec] (mean)
Time per request: 37.065 [ms] (mean)
Time per request: 0.371 [ms] (mean, across all concurrent requests)
Transfer rate: 563.83 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.5 0 8
Processing: 4 37 21.8 37 546
Waiting: 4 36 21.8 36 546
Total: 6 37 22.0 37 549
Percentage of the requests served within a certain time (ms)
50% 37
66% 37
75% 38
80% 38
90% 40
95% 41
98% 43
99% 45
100% 549 (longest request)
$ 00venv/bin/gunicorn -w 1 --threads 3 -k sync --keep-alive 1 --worker-connections 2 test:app
[2015-02-06 09:28:29 +0000] [4543] [INFO] Starting gunicorn 19.2.1
[2015-02-06 09:28:29 +0000] [4543] [INFO] Listening at: http://127.0.0.1:8000 (4543)
[2015-02-06 09:28:29 +0000] [4543] [INFO] Using worker: threads
[2015-02-06 09:28:29 +0000] [4548] [INFO] Booting worker with pid: 4548
$ curl -v http://127.0.0.1:8000
* About to connect() to 127.0.0.1 port 8000 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8000
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn/19.2.1
< Date: Fri, 06 Feb 2015 09:28:35 GMT
< Connection: close
< Content-type: text/plain
< Content-Length: 14
< X-Gunicorn-Version: 19.2.1
< Test: test тест
<
Hello, World!
* Closing connection #0
I just discovered this -- it seems if I leave off worker-connections, I do get the expected Connection: keep-alive header in the response. But when I do supply worker-connections, it results in Connection: close in the curl response. Odd.
Anyway, here is the same test with 19.1.1
# 00venv/bin/gunicorn --error-logfile=- -w 1 --threads 3 -k sync --keep-alive 1 --worker-connections 2 test:app
[2015-02-06 09:36:29 +0000] [4645] [INFO] Starting gunicorn 19.1.1
[2015-02-06 09:36:29 +0000] [4645] [INFO] Listening at: http://127.0.0.1:8000 (4645)
[2015-02-06 09:36:29 +0000] [4645] [INFO] Using worker: threads
[2015-02-06 09:36:29 +0000] [4655] [INFO] Booting worker with pid: 4655
$ curl -v http://127.0.0.1:8000
* About to connect() to 127.0.0.1 port 8000 (#0)
* Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 127.0.0.1:8000
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: gunicorn/19.1.1
< Date: Fri, 06 Feb 2015 09:30:06 GMT
< Connection: keep-alive
< Content-type: text/plain
< Content-Length: 14
< X-Gunicorn-Version: 19.1.1
< Test: test тест
<
Hello, World!
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
@benoitc is there any other information that you need/want from me?
@cactus The behaviour you see above is because the number of connections to keep alive is equal to worker_connections - threads, which here is < 0. At least N connections are accepted at the same time. If in the command line above you give --worker-connections 4, you will be able to keep 1 connection alive. This change was introduced in 1f92511430b59a6645e59d408d8cb21d6d37114a, so the behaviour above is correct. Was it this configuration that you tested for keep-alive?
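The rule described here can be sketched as simple arithmetic (a sketch only; the actual logic lives inside gunicorn's threaded worker):

```python
# Keepalive capacity rule described above, as introduced in 1f92511:
# connections can be kept alive only while
# worker_connections - threads remains positive.
def keepalive_capacity(worker_connections, threads):
    return worker_connections - threads

# With the flags from the commands above:
assert keepalive_capacity(2, 3) == -1  # --threads 3 --worker-connections 2: none kept alive
assert keepalive_capacity(4, 3) == 1   # --worker-connections 4: 1 connection kept alive
```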
I thought the opposite was the case: worker-connections is how many concurrent connections the child backend will accept, and threads is the number of threads the child will use in a pool for handling those connections.
Even the gunicorn docs say that threads is "The number of worker threads for handling requests" and worker-connections is "The maximum number of simultaneous clients".
So would the math not instead be threads - worker_connections >= 0, which I have done? (There should always be one thread spare in my example, as I wanted to ensure that if all connected clients were sleeping, heartbeats would still get a response to the parent, as documented under "how many threads" here: http://gunicorn-docs.readthedocs.org/en/latest/design.html .)
Add to this that it works as expected in versions older than 19.2, and it is certainly confusing.
So you are saying that there should be more worker conns allowed than threads to handle them instead? If I want a single connection per backend, use sync+threads, and have keepalive work (nginx with http 1.1 keepalives for backends), I would need to specify one thread and two worker conns? Wouldn't that mean I am allowing two conns to the child instead, and making and tearing down a conn each time (or having them fight over one thread)?
My goal is sync worker, with one client at a time handled per backend child, with keepalives to avoid tons of socket churn (very high volume nginx->gunicorn over loopback w/nginx using http 1.1 keepalives, which work fine with gevent backend -- this is just a new app that we are testing with pypy which doesn't support gevent currently).
(sorry for the duplicate comment -- github was formatting the email reply weird so I changed to the online editor)
Looking at the code (thanks for the ref), maybe I will try a local workaround -- subclassing and monkey-patching (ugh!) in the subclass to set max_keepalive = self.cfg.threads - self.cfg.worker_connections, and see if that can fit my goals.
Why do you limit the number of worker_connections?
The reasoning in the current code is that you still need to be able to handle connections while accepting new ones, until the maximum number of connections is reached. If the number of keepalive connections equaled the maximum number of connections accepted in the worker, you wouldn't be able to handle them. So instead we refuse keep-alive connections. Maybe the logic there can be changed; I'm not sure how yet.
Thanks for thinking about this @benoitc. In the meantime we have decided that if we want to reduce our connection usage (the goal was to reduce tcp conn churn), we add another layer (all problems can be solved by another layer) and just stick nginx in front of gunicorn on each machine, and use a unix socket for upstream to gunicorn.
Unrelated: We have been experimenting with gunicorn and pypy lately, and the results so far have been awesome. Thanks for all your hard work! :D
@cactus thanks for the feedback :)
For now I'm not closing this issue; let's see if the current algorithm can be improved.
Keepalive functionality, when using the sync worker + threads, appears to have stopped working in 19.2 (and is also not working in 19.2.1). I tested 19.1.1 and it worked there. Was this an intentional change, or a bug?
I'm getting the same result with Python 2.7 and PyPy 2.5 (2.7.x series), with trollius==1.0.4 and futures==2.2.0.