Closed 0x4007 closed 2 months ago
Since this repo should be deprecated soon, do you want a fix here? I guess this problem is relevant to the new system as well.
Yeah research and implementation should be handled wherever possible and can be ported easily later if needed.
I did some research on this today, starting from where and why it happens to how to solve it:
The root cause of the rate limit is that the number of queued calls on the JsonRpcProvider crosses the 500-call queue limit. The actual rate limit error from the permit-generation RPC provider was thrown by the Nethermind client, which defaults to 500 queued calls on the client side:
https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.Init/Steps/RegisterRpcModules.cs#L113
https://github.com/NethermindEth/nethermind/blob/489f3277eddfba5b514d2c7779094b6981ec629e/src/Nethermind/Nethermind.JsonRpc/JsonRpcConfig.cs#L18
One way to circumvent this is to use the ethers module's throttling feature, but as discovered in the ethers implementation, it kicks in only if the 429 status code is returned in the response, see:
https://github.com/ethers-io/ethers.js/commit/7d435453039f009b339d835ddee47e35a843711b
and `if (response.statusCode === 429 && attempt < attemptLimit)`.
We can use the current provider as a default. If the rate limit is reached, use rpc-handler to try the next fastest provider from the list.
I can experiment: first try to reproduce the error on a fork or using a test, then submit a fix today / tomorrow.
/start
Deadline | Tue, Apr 16, 10:34 AM UTC |
Registered Wallet | 0x7e92476D69Ff1377a8b45176b1829C4A5566653a |
/wallet 0x0000...0000
if you want to update your registered payment wallet address.

@gentlementlegen, just to be sure we are not crossing work on tasks, please let me know if you have already started working on this; I can as well pass the task to you.
@gitcoindev nop just did some research but no work done, all yours!
hi @Keyrxng I have been experimenting with https://github.com/ubiquity/rpc-handler this week and I get somewhat unstable results when I run actions: https://github.com/korrrba/comment-incentives/actions (hope you have access there; if not I can grant you access). I added some loops during integration to load / stress test, and I usually have to run the action twice for it to become green. I often get `Error: could not detect network (event="noNetwork", code=NETWORK_ERROR, version=providers/5.7.2)`.
I have three questions about rpc-handler, which I think is a great tool:
1) I used `import { RPCHandler } from "@keyrxng/rpc-handler";`. I understand that I should instead use https://github.com/ubiquity/rpc-handler/pull/6 once it is merged and deployed to the npm registry?
2) Do you have a working reference integration to any bot plugins somewhere already?
3) If we get rate limited, my idea is to select the next fastest RPC provider. I had a look at tests/benchmark.test.ts and I see you sort by latencies; perhaps we could have a function that returns the list of providers sorted by latency, and then we could just iterate to get the next available fastest RPC. Do you have a similar or other idea on this?
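The idea in (3) could be sketched roughly like this. Note this is an illustration only, not the actual rpc-handler API: the `RpcEndpoint` shape and the hypothetical `sortByLatency` / `callWithFallback` helpers stand in for whatever the library exposes.

```typescript
// Hypothetical endpoint record with a measured latency; not rpc-handler's real type.
interface RpcEndpoint {
  url: string;
  latencyMs: number;
}

// Lowest latency first, so iteration order is "next fastest".
function sortByLatency(endpoints: RpcEndpoint[]): RpcEndpoint[] {
  return [...endpoints].sort((a, b) => a.latencyMs - b.latencyMs);
}

// Try each endpoint in latency order; on failure (rate limited,
// unreachable, ...) fall through to the next fastest one.
async function callWithFallback<T>(
  endpoints: RpcEndpoint[],
  call: (endpoint: RpcEndpoint) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const endpoint of sortByLatency(endpoints)) {
    try {
      return await call(endpoint);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```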
https://www.npmjs.com/package/@keyrxng/rpc-handler (I've published the polymorphic pkg already under my name, maybe that'll respond better for you), but yeah, the updated package is better.
A server-side integration I have not done yet, no; the closest is the test cases. I noticed in the feeble stress tests I did that the first touch was always far slower and more error prone than subsequent touches. I assume you are being rate limited after a few successful calls to a particular RPC, or is it erroneously returning a provider which cannot be called in the first place?
This is implemented in the upgrade already, yeah (I click test multiple times, which is why they are changing).
https://github.com/ubiquibot/permit-generation/assets/106303466/8aab5a24-689d-470b-ab73-3dcce3f736ed
I gave this some more thought over the weekend and found what I think is a proper solution for rate limits that can be applied to any RPC without switching the provider.
I designed a simple load-testing snippet that almost always causes any RPC provider to hit the rate limit:
```typescript
import { JsonRpcProvider } from "@ethersproject/providers";
import { useHandler } from "./helpers/rpc-handler";
import { getTokenSymbol } from "./helpers/contracts";

export async function rateLimitTest() {
  const rpcHandler = useHandler(100); // Gnosis Chain network id
  const provider: JsonRpcProvider = await rpcHandler.getFastestRpcProvider();
  let tokenSymbol;
  // fire 1000 calls without awaiting, leaving the promises pending
  for (let i = 1; i <= 1000; i++) {
    tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
    console.log(i, tokenSymbol);
  }
  // now await the same call three times; the overloaded provider hits the rate limit
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
}
```
The code fires 1000 RPC calls / promises to getTokenSymbol, which are left in the pending state. It then makes the same RPC call three times with await, waiting for the result. The provider, overloaded with the previous calls, hits the rate limit.
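As a side note, the failure mode can be simulated without a real endpoint. A stand-in provider that rejects once too many calls are in flight (all names below are illustrative; the 500-call limit mimics Nethermind's default) shows the same behavior:

```typescript
// Simulated provider with a 500-call in-flight limit, roughly mimicking
// Nethermind's default request queue. Illustrative only, not a real client.
class FakeProvider {
  private inFlight = 0;
  private readonly limit = 500;

  async call(): Promise<string> {
    if (this.inFlight >= this.limit) {
      // queue is full: behave like a rate-limited endpoint
      throw new Error("429: rate limit exceeded");
    }
    this.inFlight++;
    // the promise stays pending for a while, keeping the queue occupied
    return new Promise<string>((resolve) =>
      setTimeout(() => {
        this.inFlight--;
        resolve("WXDAI");
      }, 1000)
    );
  }
}
```

Firing more than 500 unawaited calls at this fake provider makes every call past the limit fail, which is exactly what the load test above provokes against a live RPC.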
I evaluated a few possible solutions. When throttling does not help, it is best to simply wait and retry the RPC call after a delay. I selected two possible TypeScript libraries that can help achieve this: https://github.com/sindresorhus/p-retry and https://github.com/franckLdx/ts-retry, both in active development and with healthy weekly downloads.
The drawback of p-retry is that it ships as an ESM-only module, and I ran into CommonJS / ESM coexistence hell with Jest and Babel (https://github.com/jestjs/jest/issues/13739). The functionality works, but the tests failed. ts-retry, on the other hand, supports both ESM and CommonJS and contains a few additional useful functions and decorators.
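In essence, both libraries implement the same core loop. A minimal hand-rolled version (illustration only, not either library's actual code; the `maxTry` / `delayMs` parameters mirror ts-retry's options) looks like:

```typescript
// Minimal retry-with-delay loop, illustrating what p-retry / ts-retry do
// internally. Parameter names mirror ts-retry's { maxTry, delay } options.
async function retryWithDelay<T>(
  fn: () => Promise<T>,
  maxTry: number,
  delayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxTry; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxTry) {
        // back off before the next attempt
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  // every attempt failed: surface the last error to the caller
  throw lastError;
}
```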
For example, with ts-retry it is possible to retry an async function until it returns a 'defined' value, e.g.:
```typescript
import { retryAsyncUntilDefined } from "ts-retry";
import { JsonRpcProvider } from "@ethersproject/providers";
import { useHandler } from "../../helpers/rpc-handler";

// config comes from the surrounding module
const provider = await retryAsyncUntilDefined<JsonRpcProvider>(
  async () => {
    const rpcHandler = useHandler(config.payments.evmNetworkId);
    return await rpcHandler.getFastestRpcProvider();
  },
  { delay: 1000, maxTry: 5 }
);
```
The code above will retry up to five times, waiting 1 second between retries, to select the fastest RPC provider if the result came back undefined for any reason, e.g. the network connection was lost.
The solution for my initial load test using ts-retry is:
```typescript
import { retryAsync } from "ts-retry";

let tokenSymbol;
for (let i = 1; i <= 1000; i++) {
  tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(i, tokenSymbol);
}
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
tokenSymbol = await retryAsync<string>(
  async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
  { delay: 1000, maxTry: 5 }
);
console.log(tokenSymbol);
```
which always passes the test:
```
console.log
  999 Promise { <pending> }
    at log (src/index.test.ts:72:13)
console.log
  1000 Promise { <pending> }
    at log (src/index.test.ts:72:13)
console.log
  WXDAI
    at log (src/index.test.ts:88:11)
console.log
  done
    at log (src/index.test.ts:95:15)

PASS src/index.test.ts (28.501 s)
  ✓ runs (26768 ms)

Test Suites: 1 passed, 1 total
```
Therefore I will open a pull request to this repo adding ts-retry retryAsync wrappers to the RPC provider setup and RPC calls; this should fix the rate limit hits. A similar solution can later be applied to any rate limit scenario in other repositories / plugins.
/start
! Skipping '/start' because the issue is already assigned.
@gitcoindev thanks for your research. While this works, the duration you have to wait until the next call succeeds really depends on the endpoint. I had cases where it was seconds, and others where it was minutes. The benefit of switching RPC is to have one available right away. Would this make the implementation much more complex?
Good research, but I'm always skeptical of "time based" solutions compared to "event based" ones.
What @gentlementlegen mentions is an example of why this solution might not be the best approach.
@0x4007 @gentlementlegen sure, I will rework this to switch RPC. Perhaps combining the two approaches should make it even more robust.
I tested multiple times and the following always seems to work: for any call, use the default provider; in case of an error, immediately switch to the fastest provider available from the RPC handler using `onError: async () => provider = await rpcHandler.getFastestRpcProvider()`.
```typescript
import { ethers } from "ethers";
import { JsonRpcProvider } from "@ethersproject/providers";
import { retryAsync, retryAsyncUntilDefined } from "ts-retry";
import { RPCHandler } from "@keyrxng/rpc-handler";
import { useHandler } from "./helpers/rpc-handler";
import { getTokenSymbol } from "./helpers/contracts";

export async function rateLimitTest() {
  const rpcHandler: RPCHandler = useHandler(100);
  let provider = await retryAsyncUntilDefined<JsonRpcProvider>(
    async () => new ethers.providers.JsonRpcProvider("https://rpc.gnosischain.com"),
    { maxTry: 5 }
  );
  let tokenSymbol;
  // overload the default provider with 1000 pending calls
  for (let i = 1; i <= 1000; i++) {
    tokenSymbol = getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
    console.log(i, tokenSymbol);
  }
  // on error, switch to the fastest provider returned by rpc-handler
  tokenSymbol = await retryAsync<string>(
    async () => await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider),
    {
      maxTry: 5,
      onError: async () => (provider = await rpcHandler.getFastestRpcProvider()),
    }
  );
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
  tokenSymbol = await getTokenSymbol("0xe91D153E0b41518A2Ce8Dd3D7944Fa863463a97d", provider);
  console.log(tokenSymbol);
}
```
I will update the pull request now. In any case, we can always think of more sophisticated scenarios, but this one seems quite robust as long as the rpc handler works correctly.
/start
Deadline | Tue, Apr 23, 4:13 PM UTC |
Registered Wallet | 0x7e92476D69Ff1377a8b45176b1829C4A5566653a |
/wallet 0x0000...0000
if you want to update your registered payment wallet address.

@gentlementlegen I updated https://github.com/ubiquibot/comment-incentives/pull/35/files and would be grateful if you re-reviewed; you can also test from your side.
+ Evaluating results. Please wait...
View | Contribution | Count | Reward |
---|---|---|---|
Issue | Specification | 1 | 7.2 |
Issue | Comment | 2 | 9.6 |
Comment | Formatting | Relevance | Reward |
---|---|---|---|
> ```diff > + Evaluating results. Please wait... > ``` We g... | 7.2 (code: count: 1, score: "1", words: 0) | 1 | 7.2 |
Yeah research and implementation should be handled wherever poss... | 3.4 | 0.59 | 3.4 |
Good research but I'm always skeptical of "time based" solutions... | 6.2 | 0.78 | 6.2 |
View | Contribution | Count | Reward |
---|---|---|---|
Issue | Comment | 3 | 9.7 |
Comment | Formatting | Relevance | Reward |
---|---|---|---|
Since this repo should be soon deprecated, do you want a fix her... | 2.6 | 0.56 | 2.6 |
@gitcoindev nop just did some research but no work done, all you... | 1.2 | 0.8 | 1.2 |
@gitcoindev thanks for your research. While this works, it reall... | 5.9 | 0.67 | 5.9 |
View | Contribution | Count | Reward |
---|---|---|---|
Issue | Task | 1 | 400 |
Issue | Comment | 7 | 0 |
Issue | Comment | 7 | 140.1 |
View | Contribution | Count | Reward |
---|---|---|---|
Issue | Comment | 1 | 13.95 |
Comment | Formatting | Relevance | Reward |
---|---|---|---|
1. https://www.npmjs.com/package/@keyrxng/rpc-handler (I've publ... | 13.95 (li: count: 3, score: "0.75", words: 0) | 0.76 | 13.95 |
We got rate limited https://github.com/ubiquibot/comment-incentives/actions/runs/8617933701/job/23619113189
Seems like this can be a problem as we pick up on activity in our network.
Originally posted by @0x4007 in https://github.com/ubiquibot/permit-generation/issues/5#issuecomment-2045442568