Open BlinkyStitt opened 2 years ago
This issue is also being tracked on Linear.
We use Linear to manage our development process, but we keep the conversations on Github.
LINEAR-ID: eab0a093-b095-461a-a45f-295ce51b4542
This also applies to another case!
This issue was marked as stale because it didn't have any activity in the last 30 days. If you think it's still relevant, please leave a comment indicating so. Otherwise, it will be closed in 7 days.
Still relevant.
this is very relevant for e2e testing. Running your own archive node is very expensive, and if you try to use Alchemy or any other provider you will hit rate limits from the very beginning
Bumping this
While using a free host for testing some scripts, I hit rate-limiting errors. Hardhat should make it easy to set rate limits for the forked node. If I set rate limits in my own scripts instead, they aren't very reliable: sometimes queries to Hardhat don't hit the remote node at all, so I end up rate-limiting myself needlessly, and at other times a single query to Hardhat can generate multiple queries to the forked node.
Letting the client specify X requests per Y seconds should prevent most rate-limiting errors from occurring.
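The "X requests per Y seconds" idea could be sketched as a small client-side sliding-window limiter. This is illustrative only, not Hardhat's API; the class and method names are made up:

```typescript
// Sliding-window rate limiter sketch: allow at most `maxRequests`
// per `windowMs` milliseconds. Names here are hypothetical.
class RateLimiter {
  private timestamps: number[] = [];
  constructor(private maxRequests: number, private windowMs: number) {}

  // Returns how many ms the caller should wait before sending the
  // next request; 0 means the request may go out immediately.
  delayBeforeNext(now: number = Date.now()): number {
    // Drop request timestamps that have fallen out of the window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length < this.maxRequests) {
      this.timestamps.push(now);
      return 0;
    }
    // Otherwise wait until the oldest request leaves the window.
    return this.timestamps[0] + this.windowMs - now;
  }
}
```

A fork provider could call `delayBeforeNext()` before each upstream request and sleep for the returned duration, which would smooth bursts without the script author having to guess how many upstream calls a single Hardhat query produces.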
If API keys are shared between multiple systems, rate limits could still be hit even with client-side limits. Watching response headers like x-rate-limit is better for that case. Building something that automatically responds to HTTP 429, x-rate-limit headers, or similar might work in some places, but every provider seems to behave a little differently. One error I got did not have the HTTP status code I expected; instead it returned a JSON error of
{"code": -32005, "message": "Too Many Requests error received from speedy-nodes-nyc.moralis.io"}
(though the docs say they do set the headers). Because some providers' load balancers report errors in inconsistent ways, any sort of automatic back-off/retry will be difficult to implement.
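Despite the inconsistency, a retry policy could treat both signals above as retryable. The predicate below is a sketch: the error shapes are assumptions based on the responses quoted in this thread, not an exhaustive list of what providers return:

```typescript
// Sketch: classify a response as "throttled" from either the HTTP
// status or the JSON-RPC error body, then back off exponentially.
interface RpcError {
  code?: number;
  message?: string;
}

function isRateLimited(httpStatus: number, body?: RpcError): boolean {
  if (httpStatus === 429) return true; // standard HTTP signal
  if (body?.code === -32005) return true; // JSON error seen above
  // Fallback: match the message text, since some load balancers
  // return 200 with an error body (assumption, varies by provider).
  if (body?.message?.toLowerCase().includes("too many requests")) return true;
  return false;
}

// Exponential back-off with a cap; `attempt` is 0-based.
function backoffMs(attempt: number, baseMs = 250, capMs = 8000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

Even with a check like this, honoring a provider-specified Retry-After header (when present) would be more accurate than a fixed schedule, which is another reason a built-in, per-provider-tunable limiter in Hardhat itself would help.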