Open harshitsinghai77 opened 2 years ago
I'm having the exact same issue with python 3.7.4, aiohttp 3.5.4, multidict 4.5.2, yarl 1.3.0
Is there any solution?
This only happens when we query the database inside a for loop...
Anyhow, I switched back to the official ClickHouse Python driver, which is synchronous in nature but gets the job done.
It doesn't happen to me while using CH directly. I get the exact same error, also using ClientSession, but with regular HTTP requests (session.post). It also happens only to a portion of the requests.
aiohttp 3.7 is EOL and won't get any update. Is this happening under aiohttp 3.8?
Also, try asking that library you use (aiochclient). Maybe they pass invalid args to aiohttp.
File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
async with self._session.post(url=url, params=params, data=data) as resp:
There's not enough information provided to guess what's happening but w/o understanding what exactly is passed, it's a lost cause. We need an aiohttp-only reproducer demonstrating that this problem actually exists. Without that, we'll probably have to just close this as it does not demonstrate a bug the way it is reported.
Current judgment — this is likely a problem in that third-party library, maybe they misuse aiohttp.
I wasn't using aiochclient, but plain aiohttp. With it, I would send HTTP requests to an nginx that proxies me to different containers (FaaS).
I was able to solve the issue by looking at the nginx logs at the same time I would receive those exceptions in my app, and seeing that I got these errors:
[alert] 7#7: 1024 worker_connections are not enough
[alert] 7#7: *55279 1024 worker_connections are not enough while connecting to upstream
To solve this, with a little help from Google, I added to my nginx.conf file: events { worker_connections 10000; }
Thanks anyways!
I'm also getting this error, although only for a minor portion of the requests made inside a for loop. I'm using aiohttp 3.8.1.
Hello, we are currently facing this issue where we have repeating jobs that run at intervals; each job makes some requests (mostly POST requests).
This has been happening ever since we migrated to aiohttp. A fix was to use aiohttp.TCPConnector(force_close=True)
or to use HTTP/1.0 via aiohttp.ClientSession(version=http.HttpVersion10),
but we would like to reuse connections without force-closing after every request.
From my investigation on the network side, it appears that the client fails to return a corresponding ACK
packet after already exchanging a FIN
and FIN ACK
packet with the server, which results in the server sending a RST
packet as a way to gracefully close the connection.
version: aiohttp==3.8.1
Any help resolving this would be appreciated.
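For reference, the two workarounds described above can be sketched as session factories. This is only a sketch (the function names are made up); both options trade connection reuse for robustness:

```python
import asyncio
import aiohttp

async def make_forceclose_session() -> aiohttp.ClientSession:
    # Workaround 1: force_close=True disables keep-alive, so every request
    # opens a fresh TCP connection instead of reusing a pooled one that the
    # server may already have started closing.
    connector = aiohttp.TCPConnector(force_close=True)
    return aiohttp.ClientSession(connector=connector)

async def make_http10_session() -> aiohttp.ClientSession:
    # Workaround 2: HTTP/1.0 has no persistent connections by default,
    # so connections are not reused either.
    return aiohttp.ClientSession(version=aiohttp.HttpVersion10)
```

Both approaches add connection-setup overhead per request, which is why the reporter would prefer to keep connection reuse working.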
We are also facing this issue, it happens from time to time. We haven't investigated as far as @beesaferoot.
Version: aiohttp==3.8.1
Python: 3.10.4
Hello, we also have the problem in our application (~20 req/s), for roughly 1 in every 500 to 1000 requests. Setting the TCPConnector and/or HTTP version didn't solve the issue. The fix for us was to catch the exception and retry, for now.
Python 3.9 and aiohttp 3.8.1
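The catch-and-retry workaround mentioned above might look like this minimal sketch (post_json_with_retry is a hypothetical helper, not part of aiohttp; retrying a POST is only safe if the endpoint is idempotent, since the server may have partially processed a failed attempt):

```python
import asyncio
import aiohttp

async def post_json_with_retry(session, url, *, json=None, max_retries=3, delay=1.0):
    """Retry a POST when the pooled connection was torn down server-side."""
    for attempt in range(max_retries):
        try:
            async with session.post(url, json=json) as resp:
                return await resp.json()
        except aiohttp.ClientOSError:
            # Last attempt: give up and propagate the error.
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(delay)
```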
import asyncio
import io
import os

import aiohttp
from tqdm.asyncio import tqdm

URL = 'http://your-ip:3000/upload'

async def chunks(data, chunk_size):
    with tqdm.wrapattr(io.BytesIO(data), 'read', total=len(data)) as f:
        chunk = f.read(chunk_size)
        while chunk:
            yield chunk
            chunk = f.read(chunk_size)

async def download(session, chunk_size):
    data_to_send = os.urandom(30_000_000)
    data_generator = chunks(data_to_send, chunk_size)
    await session.post(URL, data=data_generator)

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(5):
            t = asyncio.create_task(download(session, 4096))
            tasks.append(t)
        await asyncio.gather(*tasks)

asyncio.run(main())
I am trying to make a CLI client for OpenSpeedTest-Server and I am getting the same error. To reproduce this, use our DOCKER IMAGE or Android App, then make a POST request to "http://your-ip:3000/upload". Issues: for the Docker image it will only send the first chunk; for the Android app it will throw an error like this.
Traceback (most recent call last):
File "r.py", line 35, in <module>
asyncio.run(main())
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "r.py", line 32, in main
await asyncio.gather(*tasks)
File "r.py", line 23, in download
await session.post(URL, data=data_generator)
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client.py", line 559, in _request
await resp.start(conn)
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client_reqrep.py", line 898, in start
message, payload = await protocol.read() # type: ignore[union-attr]
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/streams.py", line 616, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe
It is working fine when using the Electron apps of OpenSpeedTest-Server (the Windows, Mac and Linux GUI server apps), which use an Express server.
The mobile apps use the Ionic web server; for Android it's the NanoHTTP server and for iOS it is the GDC WebServer. For Docker we use the Nginx web server. Configuration is posted on my profile.
Same.
python: 3.10
aiohttp: 3.8.3
aiochclient: 2.2.0
@asvetlov any news?
It's been 3 years; can we get any update?
@beesaferoot could you provide the reproduction code? I will try to make a PR fixing this if I can solve the issue, but for that I need code which reproduces it consistently.
There is no update. If someone can create a PR with a test that reproduces the error, then we can look into it, but we really don't have the time to try and figure anything out from the above comments.
https://github.com/aio-libs/aiohttp/issues/6138#issuecomment-1009164970 suggests that the receiving end ran out of connections and so the connection got rejected (if that's the case, I'm not really sure there's a bug here...).
While https://github.com/aio-libs/aiohttp/issues/6138#issuecomment-1171170516 suggests that there could be an issue with keep-alive connections (which makes it sound like a different issue to the previous comment...). If we can get a test that reproduces these steps, then maybe we can fix something..
So in my case this error was not from this library; it was Cloudflare, which has a max file size per upload request.
I think for whoever is getting this error, the reason may be that the website you are making the POST
request to is using Cloudflare, so its upload limit applies too.
I was getting this issue when repeating requests in a short period of time.
In my case, manually closing the session after every request helped.
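That per-request-session approach could look like the sketch below (post_once is a made-up helper name). A fresh session means a fresh connection pool, so nothing stale can be reused, at the cost of connection-setup overhead on every call:

```python
import aiohttp

async def post_once(url, payload):
    # Open a new session (and thus a new connection pool) per request,
    # so no pooled connection the server already dropped can be reused.
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=payload) as resp:
            resp.raise_for_status()
            return await resp.json()
```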
I have worked around this bug by creating a try/except block in a while loop with sleep and retry:
import asyncio

import aiohttp
from aiohttp import ClientOSError

# init (inside the client class)
conn = aiohttp.TCPConnector(limit_per_host=30)
self.__session = aiohttp.ClientSession(
    self.__url,
    # timeout=self.__timeout,
    raise_for_status=True,
    connector=conn,
)

# method
ids_info = None
retries = 0
while not ids_info:
    try:
        async with self.__session.get(
            self.__path, json={"ids": ids}
        ) as response:
            if response.status == 200:
                data = await response.json(content_type="text/plain")
                ids_info = data["info"]
                if not ids_info:
                    return dict()
                else:
                    return ids_info
            # if not 200
            else:
                return dict()
    except ClientOSError as e:
        logger.exception(f"retry number={retries} with error: {e}")
        retries += 1
        if retries >= self.__max_retries:
            return dict()
        await asyncio.sleep(1)
but I do not think it is the proper way. The main thing I have noticed is that this error occurs at a random time, so I cannot reproduce it.
I faced this issue while I was trying to proxy my requests to a server, and I figured out that the proxy server wasn't able to handle that amount of requests. It could be that others are facing the same kind of issue. Maybe try rate limiting your requests.
Same here, and I'm sure that the server is OK, since I have a benchmark with heavy requests and it works well in that case.
Wanted to ➕ this issue. My context in case it helps:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 578, in write_bytes
await self.body.write(writer)
File "/usr/local/lib/python3.10/site-packages/aiohttp/payload.py", line 247, in write
await writer.write(self._value)
File "/usr/local/lib/python3.10/site-packages/aiohttp/http_writer.py", line 115, in write
self._write(chunk)
File "/usr/local/lib/python3.10/site-packages/aiohttp/http_writer.py", line 75, in _write
raise
ConnectionResetError: Cannot write to closing transport
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
...
File "/lib/ufmodels/utils/httpclients/http_client.py", line 49, in post
async with self.session.post(url, json=json, headers=headers) as resp:
File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 1167, in __aenter__
self._resp = await self._coro
File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 586, in _request
await resp.start(conn)
File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 905, in start
message, payload = await protocol.read() # type: ignore[union-attr]
File "/usr/local/lib/python3.10/site-packages/aiohttp/streams.py", line 616, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for
So in my case the downstream error is stated to be caused by ConnectionResetError: Cannot write to closing transport.
I'm using Python 3.10 and aiohttp 3.8.6. We're going to switch soon to Python 3.11 and latest aiohttp and will post back if we continue to run into this.
Re proposed fixes mentioned in this thread:
We updated to Python 3.11 and aiohttp 3.9.5 but still see this issue.
There have been several fixes, so if someone can retest on 3.10.5 and see if the issue persists, that'd be good. If it does, we still really need an actual reproducer.
I tried upgrading to latest aiohttp (3.10.5) but still seeing some of the errors in our logs.
fwiw - it looks like we also have another error popping up (may not be related to this one) - sharing in case it means anything:
There was an unexpected exception ServerDisconnectedError.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/anyio/streams/memory.py", line 105, in receive
return self.receive_nowait()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/anyio/streams/memory.py", line 100, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/anyio/streams/memory.py", line 118, in receive
return receiver.item
^^^^^^^^^^^^^
AttributeError: 'MemoryObjectItemReceiver' object has no attribute 'item'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 159, in call_next
message = await recv_stream.receive()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/anyio/streams/memory.py", line 120, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/lib/app/main.py", line 136, in log_request_and_response
response = await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 165, in call_next
raise app_exc
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py", line 151, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/usr/local/lib/python3.11/site-packages/prometheus_fastapi_instrumentator/middleware.py", line 174, in __call__
raise exc
File "/usr/local/lib/python3.11/site-packages/prometheus_fastapi_instrumentator/middleware.py", line 172, in __call__
await self.app(scope, receive, send_wrapper)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
...
File "/lib/ufmodels/utils/httpclients/http_client.py", line 165, in post
return await self._callable_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/ufmodels/utils/httpclients/http_client.py", line 154, in _callable_with_retry
raise ex
File "/lib/ufmodels/utils/httpclients/http_client.py", line 135, in _callable_with_retry
return await func(endpoint)
^^^^^^^^^^^^^^^^^^^^
File "/lib/ufmodels/utils/httpclients/http_client.py", line 63, in post
async with self.session.post(url, json=json, headers=headers) as resp:
File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 1353, in __aenter__
self._resp = await self._coro
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 684, in _request
await resp.start(conn)
File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 999, in start
message, payload = await protocol.read() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/aiohttp/streams.py", line 640, in read
await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
INFO: 10.253.28.208:37570 - "POST /fmap/intersect-bidlists HTTP/1.1" 500 Internal Server Error
Also see this appearing still with latest aiohttp 3.10.5 and Python 3.11.8.
Traceback (most recent call last):
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 637, in write_bytes
await self.body.write(writer)
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/payload.py", line 246, in write
await writer.write(self._value)
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/http_writer.py", line 115, in write
self._write(chunk)
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/http_writer.py", line 75, in _write
raise ConnectionResetError("Cannot write to closing transport")
ConnectionResetError: Cannot write to closing transport
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/me/Git/project/test/test-cli.py", line 180, in query
async with session.post(
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 1353, in __aenter__
self._resp = await self._coro
^^^^^^^^^^^^^^^^
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/client.py", line 684, in _request
await resp.start(conn)
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 999, in start
message, payload = await protocol.read() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Git/project/test/.venv/lib/python3.11/site-packages/aiohttp/streams.py", line 640, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for http://localhost:8085/api/v1/query
They appear to be completely separate issues. Can either of you provide a reproducer to debug?
@sjoerd222888 Your issue may be a problem with the server and could be unsolvable by us. My guess is that the server is sending a Connection: keep-alive response and then closing the connection anyway, which leads to the connection getting reused for the next request, which then fails. I'd need a reproducer to be sure though. In 3.10 we have retry logic to deal with this case, but POST is not idempotent so we can't retry automatically (although, maybe if it's before we send the body it would be fine, might have to take another look...). A workaround would be to use TCPConnector(force_close=True)
to disable keep-alive (assuming that is actually the issue). Obviously, the best option would be to fix the remote server so it responds accurately.
My remote server happens to be a locally running FastAPI application. I don't think there should be any issue with that, but I will see how I could check it.
You need to dig around in aiohttp internals a bit to figure it out exactly, so giving me a reproducer might be easier.
But you could check the response headers to see if there is a Connection: close
header. If not, then the server should be treating it as a keep-alive connection (on HTTP/1.1). You could also try printing out client.connector._conns between requests to see which connections are still open.
By editing the aiohttp code, you could maybe add a print to connection_lost() (in client_proto.py) to see when the connection is closed by the server.
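The first two checks could be scripted roughly like this (a debugging sketch only; diagnose is a made-up name, and connector._conns is a private attribute whose layout may change between aiohttp releases):

```python
import aiohttp

async def diagnose(session: aiohttp.ClientSession, url: str):
    # Inspect whether the server announces it will close the connection,
    # and what the client still holds in its pool afterwards.
    async with session.get(url) as resp:
        # On HTTP/1.1, an absent Connection header implies keep-alive;
        # "close" means the server will drop the connection.
        conn_header = resp.headers.get("Connection")
    pooled = dict(session.connector._conns)  # private attribute, debugging only
    return conn_header, pooled
```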
although, maybe if it's before we send the body it would be fine, might have to take another look...
This won't be possible, as the traceback suggests we've sent the headers already and have no way to know if the server received them or not...
My remote server happens to be a locally running fastAPI application. Don't think there should be any issue with that but will see how could I check that.
I think FastAPI only uses ASGI? In which case the problem is likely with the ASGI runner, rather than the framework in this case. I suspect you'd fail to reproduce the issue with an aiohttp server in any case.
I have had this issue with plenty of servers already, some of which are outside of my control. It would be nice if there were graceful handling of the issue, other than simply TCPConnector(force_close=True).
If that is the issue, what do you propose? I've just explained why we can't fix it, unless it's an idempotent method we can retry automatically.
Also worth noting the spec here:
A server that does not support persistent connections MUST send the "close" connection option in every response message that does not have a 1xx (Informational) status code. https://www.rfc-editor.org/rfc/rfc9112#section-9.3-5
We can't really be expected to work perfectly with a server that doesn't actually implement the HTTP/1.1 spec. I think this is the best effort we can do.
Describe the bug
To Reproduce
Expected behavior
I'm using these methods again and again inside a for loop. These work most of the time, but sometimes aiohttp throws an error.
Logs/tracebacks
Python Version
aiohttp Version
multidict Version
yarl Version
OS
Linux Debian
Related component
Client
Additional context
No response
Code of Conduct