Open mrcarv opened 8 years ago
Could explicit response closing solve your problem? I mean adding a resp.close() call after await resp.read().
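For reference, a minimal sketch of the suggested pattern (the function name and session handling are illustrative, not from the original report; shown with the modern asyncio API):

```python
import asyncio
import aiohttp


async def fetch_and_close(session, url):
    # Explicitly close the response after reading, instead of relying on
    # the implicit release that happens when the body is fully consumed.
    resp = await session.get(url)
    try:
        body = await resp.read()
    finally:
        resp.close()  # tear down the underlying connection
    return body
```

Used inside an `async with aiohttp.ClientSession() as session:` block like any other request helper.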
No. I tried both close and release. Also, read calls one of them internally.
Hmm. I recall something weird with proxies.
Please try connector = aiohttp.TCPConnector(verify_ssl=False, force_close=True)
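A sketch of how that suggested connector would be wired into a request; the function and URL handling are illustrative. Note that verify_ssl= is the aiohttp 1.x spelling used in this thread; newer releases spell it ssl=False.

```python
import aiohttp


async def fetch_no_reuse(url):
    # Suggested configuration from the thread: skip certificate
    # verification and force-close each connection instead of pooling it.
    # (verify_ssl= is the aiohttp 1.x parameter name.)
    connector = aiohttp.TCPConnector(verify_ssl=False, force_close=True)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as resp:
            return await resp.read()
```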
It's even worse. At every iteration you get 5 new entries (SelectorSocketTransport) in the _acquired set, so it stops after only 4 iterations. It seems to me that whatever method is responsible for removing the SSLProtocolTransport from _acquired when the response is released doesn't do the same for the SelectorSocketTransport.
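A small, hypothetical helper for checking which transport types are piling up, as described above. It reads the connector's private _acquired attribute, so it is debugging-only and may break between aiohttp releases:

```python
from collections import Counter


def dump_acquired(connector):
    # Debugging-only sketch: count the concrete types currently held in
    # the connector's private _acquired set. In aiohttp 1.x these were
    # transports (e.g. SelectorSocketTransport); _acquired is not a
    # public API and its contents vary between releases.
    return Counter(type(entry).__name__ for entry in connector._acquired)
```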
Related to #1568?
It seems some servers do not complete the full shutdown procedure; in that case asyncio never closes the transport.
Long story short
Making multiple simultaneous requests to an https URL through an http proxy ends up filling the TCPConnector's _acquired set, and it stops working.
Expected behaviour
It should be possible to make as many simultaneous requests as the connector's limit parameter allows, and connections should be reused and/or cleaned up after use. If the simultaneous limit is hit, further requests should wait in a queue.
Actual behaviour
TCPConnector keeps SelectorSocketTransports in its _acquired set. After a while it adds another batch of entries, until the limit is filled and it stops working.
Steps to reproduce
If you set number_of_requests to more than 20, it stops at the first iteration. If you run with 5, for example, it seems to work until about 500 requests have been made; then the acquired count rises to 10, and if kept running it eventually hits the limit and stops.
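The reporter's script is not shown here; what follows is a hypothetical reconstruction of the loop the steps describe. The URL, proxy, and number_of_requests values are placeholders, and _acquired is a private attribute inspected only to observe the leak:

```python
import asyncio
import aiohttp


async def run_batches(url, number_of_requests, iterations, proxy=None):
    # Hypothetical repro sketch: fire number_of_requests simultaneous GETs
    # per iteration (optionally through an HTTP proxy, as in the report)
    # and record how large the connector's private _acquired set is after
    # each batch completes.
    connector = aiohttp.TCPConnector(limit=20)
    sizes = []
    async with aiohttp.ClientSession(connector=connector) as session:

        async def one():
            async with session.get(url, proxy=proxy) as resp:
                await resp.read()

        for _ in range(iterations):
            await asyncio.gather(*(one() for _ in range(number_of_requests)))
            sizes.append(len(connector._acquired))
    return sizes
```

Against a healthy direct connection the recorded sizes should drop back once each batch completes; per the report, through an http proxy to an https URL they creep upward until the limit is reached and the client stalls.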
Your environment
Linux, Python 3.5.2, aiohttp 1.0.5