I need to download one hundred files from S3, and I'm using aioboto3. After the first 50-60 files (the exact number varies from run to run), I get the following error for some of the files:
Caught retryable HTTP exception while making metadata service request to http://169.254.169.254/latest/meta-data/iam/security-credentials/REDACTED:
Traceback (most recent call last):
  File "/home/ubuntu/ENV/lib/python3.8/site-packages/aiobotocore/utils.py", line 79, in _get_request
    async with session.get(url, headers=headers) as resp:
  File "/home/ubuntu/ENV/lib/python3.8/site-packages/aiohttp/client.py", line 1117, in __aenter__
    self._resp = await self._coro
  File "/home/ubuntu/ENV/lib/python3.8/site-packages/aiohttp/client.py", line 619, in _request
    break
  File "/home/ubuntu/ENV/lib/python3.8/site-packages/aiohttp/helpers.py", line 656, in __exit__
    raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
This is repeated many times, and then:
Max number of attempts exceeded (1) when attempting to retrieve data from metadata service
Am I using it in the wrong way, or are we hitting some limit of the metadata service?
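For context, here is a minimal sketch of this kind of download pattern. It is not my exact code: the bucket name, object keys, and local paths are placeholders, and the structure (one shared client plus asyncio.gather) is an assumption for illustration, using the aioboto3 8.x module-level client API:

```python
import asyncio

import aioboto3

BUCKET = "my-bucket"                            # placeholder bucket name
KEYS = [f"file-{i:03d}" for i in range(100)]    # placeholder object keys

async def download_one(s3, key):
    # aioboto3 exposes download_file as a coroutine
    await s3.download_file(BUCKET, key, f"/tmp/{key}")

async def main():
    # aioboto3 8.x module-level client API (moved to Session in 9.x)
    async with aioboto3.client("s3") as s3:
        # start all one hundred downloads at once
        await asyncio.gather(*(download_one(s3, key) for key in KEYS))

asyncio.run(main())
```

With this pattern, all downloads start at once, so up to one hundred requests (and any credential lookups they trigger) can be in flight at the same moment.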
aioboto3 8.2.0
Python 3.8.5
Ubuntu 20.04.1 LTS
I think it's related to https://github.com/aio-libs/aiobotocore/issues/808.
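If the cause is a burst of simultaneous credential lookups against the metadata service, capping the number of in-flight downloads might work around it. A sketch of that idea (the Semaphore limit of 20 is an arbitrary guess, and the bucket/key names are placeholders as above):

```python
import asyncio

import aioboto3

BUCKET = "my-bucket"                            # placeholder bucket name
KEYS = [f"file-{i:03d}" for i in range(100)]    # placeholder object keys

async def main():
    # Arbitrary cap; small enough that credential lookups don't all
    # hit the metadata service at the same moment.
    sem = asyncio.Semaphore(20)

    async def download_one(s3, key):
        async with sem:  # at most 20 downloads in flight at once
            await s3.download_file(BUCKET, key, f"/tmp/{key}")

    async with aioboto3.client("s3") as s3:
        await asyncio.gather(*(download_one(s3, key) for key in KEYS))

asyncio.run(main())
```

The semaphore only bounds concurrency; it doesn't fix whatever is timing out in the credential fetch itself, so I'd still like to know the underlying cause.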