pgjones / hypercorn

Hypercorn is an ASGI and WSGI Server based on Hyper libraries and inspired by Gunicorn.

keep-alive connection re-use is still slowing down HTTP clients #167

Open · thrau opened 11 months ago

thrau commented 11 months ago

Hey! I couldn't reopen the issue, so I'm creating a new one. I can definitely still reproduce https://github.com/pgjones/hypercorn/issues/64 with Hypercorn 0.15.0.

Here's a video:

https://github.com/pgjones/hypercorn/assets/3996682/325ae89e-f8ac-4052-ad47-26c3eabf7504

Again, all I have is this requirements.txt:

Quart
hypercorn

and this server file:

import asyncio

from quart import Quart, request

app = Quart(__name__)

@app.route("/", methods=["POST", "GET"])
async def echo():
    data = await request.data  # consume the request body (if any)
    await asyncio.sleep(0.005)  # simulate handler chain
    return '<?xml version=\'1.0\' encoding=\'utf-8\'?>\n<ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><ListQueuesResult /><ResponseMetadata><RequestId>Y05BFPBE0ZMIURXXPY35R57NELYBSPCNY6XBU0FR0A6ITR27LZUJ</RequestId></ResponseMetadata></ListQueuesResponse>'

def main():
    app.run()

if __name__ == '__main__':
    main()

which I run with Python 3.11 via python server.py.

thrau commented 11 months ago

Just noticed I was running the Quart dev server; I still get the same issue when serving through Hypercorn, though:

hypercorn server:app

(image attached)
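For reference, here is a minimal sketch of serving the same app through Hypercorn programmatically, which should be equivalent to hypercorn server:app. It uses Hypercorn's documented asyncio serve() entry point and Config; the bind address just restates Hypercorn's default of 127.0.0.1:8000, and it assumes the snippet above lives in server.py:

import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

from server import app  # the Quart app from the snippet above

config = Config()
config.bind = ["127.0.0.1:8000"]  # Hypercorn's default bind, made explicit

if __name__ == "__main__":
    asyncio.run(serve(app, config))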

dfangl commented 11 months ago

I can reproduce this. System (Arch):

Linux arch-desktop 6.6.4-arch1-1 #1 SMP PREEMPT_DYNAMIC Mon, 04 Dec 2023 00:29:19 +0000 x86_64 GNU/Linux

Pip freeze:

aiofiles==23.2.1
blinker==1.7.0
click==8.1.7
Flask==3.0.0
h11==0.14.0
h2==4.1.0
hpack==4.0.0
Hypercorn==0.15.0
hyperframe==6.0.1
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.3
priority==2.0.0
Quart==0.19.4
Werkzeug==3.0.1
wsproto==1.2.0

With Connection: close:

$ hey -c 1 -H "Connection: close" http://localhost:8000/

Summary:
  Total:    2.3260 secs
  Slowest:  0.0172 secs
  Fastest:  0.0067 secs
  Average:  0.0116 secs
  Requests/sec: 85.9850

  Total data:   52600 bytes
  Size/request: 263 bytes

Response time histogram:
  0.007 [1] |
  0.008 [3] |■
  0.009 [4] |■
  0.010 [7] |■■
  0.011 [8] |■■■
  0.012 [114]   |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.013 [39]    |■■■■■■■■■■■■■■
  0.014 [14]    |■■■■■
  0.015 [6] |■■
  0.016 [3] |■
  0.017 [1] |

Latency distribution:
  10% in 0.0105 secs
  25% in 0.0111 secs
  50% in 0.0115 secs
  75% in 0.0124 secs
  90% in 0.0131 secs
  95% in 0.0142 secs
  99% in 0.0159 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0021 secs, 0.0067 secs, 0.0172 secs
  DNS-lookup:   0.0013 secs, 0.0003 secs, 0.0046 secs
  req write:    0.0001 secs, 0.0000 secs, 0.0003 secs
  resp wait:    0.0092 secs, 0.0060 secs, 0.0149 secs
  resp read:    0.0002 secs, 0.0001 secs, 0.0005 secs

Status code distribution:
  [200] 200 responses

With Connection: keep-alive:

$ hey -c 1 -H "Connection: keep-alive" http://localhost:8000/

Summary:
  Total:    10.0577 secs
  Slowest:  0.0534 secs
  Fastest:  0.0081 secs
  Average:  0.0503 secs
  Requests/sec: 19.8852

  Total data:   52600 bytes
  Size/request: 263 bytes

Response time histogram:
  0.008 [1] |
  0.013 [0] |
  0.017 [0] |
  0.022 [0] |
  0.026 [0] |
  0.031 [0] |
  0.035 [0] |
  0.040 [0] |
  0.044 [0] |
  0.049 [3] |■
  0.053 [196]   |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

Latency distribution:
  10% in 0.0499 secs
  25% in 0.0500 secs
  50% in 0.0500 secs
  75% in 0.0501 secs
  90% in 0.0533 secs
  95% in 0.0533 secs
  99% in 0.0534 secs

Details (average, fastest, slowest):
  DNS+dialup:   0.0000 secs, 0.0081 secs, 0.0534 secs
  DNS-lookup:   0.0000 secs, 0.0000 secs, 0.0010 secs
  req write:    0.0000 secs, 0.0000 secs, 0.0002 secs
  resp wait:    0.0085 secs, 0.0060 secs, 0.0107 secs
  resp read:    0.0417 secs, 0.0001 secs, 0.0451 secs

Status code distribution:
  [200] 200 responses

Not as dramatic as for @thrau, but my machine isn't the newest :)
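For anyone who wants to reproduce the comparison without hey, here is a rough client sketch using only the standard library's http.client: the first timed loop reuses a single keep-alive connection, the second opens a fresh connection per request and sends Connection: close. It assumes the server above is listening on localhost:8000; N, HOST and PORT are just illustrative names:

import http.client
import time

N = 200
HOST, PORT = "localhost", 8000

# Keep-alive: one connection, reused for every request.
start = time.perf_counter()
conn = http.client.HTTPConnection(HOST, PORT)
for _ in range(N):
    conn.request("GET", "/")
    conn.getresponse().read()  # drain the body so the connection can be reused
conn.close()
reused = time.perf_counter() - start

# Connection: close: a fresh connection per request.
start = time.perf_counter()
for _ in range(N):
    conn = http.client.HTTPConnection(HOST, PORT)
    conn.request("GET", "/", headers={"Connection": "close"})
    conn.getresponse().read()
    conn.close()
fresh = time.perf_counter() - start

print(f"keep-alive (reused): {reused:.3f}s, connection-close (fresh): {fresh:.3f}s")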

synodriver commented 11 months ago

What causes the difference between the two situations? Maybe py-spy can help find the bottleneck; I'll give it a try later.
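If it helps, one way to do that is to attach py-spy to the running Hypercorn process while hey generates keep-alive load; the output file name and PID below are just placeholders:

$ hypercorn server:app
$ py-spy record -o keepalive.svg --pid <hypercorn pid>
$ hey -c 1 -H "Connection: keep-alive" http://localhost:8000/

py-spy top --pid <hypercorn pid> also gives a quick live view without writing a flame graph.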