Closed: L03TJ3 closed this pull request 9 months ago.
The latest updates on your projects. Learn more about Vercel for Git.

Name | Status | Preview | Comments | Updated (UTC)
---|---|---|---|---
goodwallet | ✅ Ready | Visit Preview | 💬 Add feedback | Feb 28, 2024 1:46pm
@johnsmith-gooddollar @sirpy Open questions:

1. Currently, when you switch chains, `loggedProviders` is cleared. Is this wanted behavior, or should we cache it somehow?
2. If 1 is yes, maybe we should clear it periodically? For example, for onfinality, on a per-minute basis?
3. We are not the only ones using the `MultipleHttpProvider`: the `Web3Provider` is passed down to the WalletChat widget, which also uses the `httpProvider`. We don't necessarily control their calls, and I'm not sure yet whether this is relevant or whether they make any extensive calls (haven't debugged far enough yet); just pointing it out for consideration.
```js
// Imports reconstructed for context (exact paths assumed); the app
// logger `log` is assumed to be in scope in the original module.
import { assign } from 'lodash'
import HttpProvider from 'web3-providers-http'

export class MultipleHttpProvider extends HttpProvider {
  static loggedProviders = new Map()

  constructor(endpoints, config) {
    const [{ provider, options }] = endpoints // init with first endpoint config
    const { strategy = 'random', retries = 1 } = config || {}

    log.debug('Setting default endpoint', { provider, config })
    super(provider, options)

    log.debug('Initialized', { endpoints, strategy })
    assign(this, {
      endpoints,
      strategy,
      retries,
    })
  }

  send(payload, callback) {
    const { endpoints, strategy, retries } = this
    const { loggedProviders } = MultipleHttpProvider

    // shuffle peers if random strategy chosen
    // … (rest of send() not included in this excerpt)
  }
}
```
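The excerpt cuts off inside `send()`. As a rough illustration of how a fallback loop over the endpoints could continue, here is a minimal sketch; `sendWithFallback` and `trySend` are hypothetical names, not the PR's actual code:

```javascript
// Hypothetical sketch: try each endpoint in turn, falling through to the
// next one whenever a send fails. `trySend` stands in for a wrapper
// around the parent HttpProvider's send for a single endpoint.
function sendWithFallback(endpoints, payload, trySend, callback) {
  const attempt = index => {
    if (index >= endpoints.length) {
      // All endpoints failed; surface an error to the caller
      callback(new Error('All RPC endpoints failed'), null)
      return
    }

    const { provider, options } = endpoints[index]

    trySend(provider, options, payload, (error, response) => {
      if (error) {
        // Fall through to the next endpoint on failure
        attempt(index + 1)
        return
      }

      callback(null, response)
    })
  }

  attempt(0)
}
```

The recursion replaces an explicit loop so the asynchronous callback chain stays linear: each failed endpoint simply schedules an attempt on the next one.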
No, we shouldn't clear it periodically. Each RPC url should be logged just once per user session/refresh in the case of a connection error.

As we decided not to implement fallback there, I think we could also ignore and not log errors. As an option, if this provider supports events, we could listen for errors and log them using the same rule (any connection issue logged once per session per RPC url). I think we also need to ask @sirpy.
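The once-per-session rule described above could be sketched as follows; `logConnectionErrorOnce` is a hypothetical helper, and `loggedProviders` mirrors the static Map on `MultipleHttpProvider`:

```javascript
// Hypothetical sketch: report each failing RPC url at most once per
// session/refresh. The Map lives for the lifetime of the page session.
const loggedProviders = new Map()

function logConnectionErrorOnce(rpcUrl, error, log = console) {
  // Skip urls we have already reported during this session
  if (loggedProviders.get(rpcUrl)) {
    return false
  }

  loggedProviders.set(rpcUrl, true)
  log.error('Connection error at RPC endpoint', { rpcUrl, error })
  return true
}
```

Because the Map is never cleared, a refresh (which resets the module state) is the only thing that re-arms logging for an endpoint, matching the once-per-session rule.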
Description
We get reports from users where claiming fails. After QA, it seems to be related to RPC issues (right after switching chains). Besides these reports, we already see a lot of RPC errors on Sentry.
Done fixes:
WIP fixes so far:
[ ] Even if an endpoint fails, we log it (once), but we keep retrying the failed endpoint. So I think we should filter on `loggedProviders` before shuffling through the potential RPCs.
[ ] Added too-many-requests as a connection error. Rationale: the public onfinality RPC is hitting this issue. The public onfinality rate limits are rate 10 / burst 10 per minute for Fuse, Ethereum, and Celo (reference: https://documentation.onfinality.io/support/public-rate-limits). So it does not make sense to keep retrying after x seconds, since new tokens are only put in the bucket on a per-minute basis.
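The filter-before-shuffle idea from the list above could look roughly like this; `pickEndpoints` is a hypothetical helper and the endpoint/Map shapes are assumed from the quoted constructor:

```javascript
// Hypothetical sketch: drop endpoints whose provider url was already
// logged as failing this session, then shuffle when the 'random'
// strategy is active.
function pickEndpoints(endpoints, loggedProviders, strategy) {
  // Keep only endpoints not yet marked as failing
  const healthy = endpoints.filter(
    ({ provider }) => !loggedProviders.get(provider.host || provider)
  )

  // Fall back to the full list if every endpoint has failed already
  const candidates = healthy.length > 0 ? healthy : [...endpoints]

  if (strategy === 'random') {
    // Fisher-Yates shuffle of the candidate list
    for (let i = candidates.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1))
      ;[candidates[i], candidates[j]] = [candidates[j], candidates[i]]
    }
  }

  return candidates
}
```

Falling back to the full list when everything is marked as failed keeps the provider usable: a transient outage of all endpoints should not permanently brick the wallet's RPC layer.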
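Treating too-many-requests as a connection error, together with the per-minute token refill rationale, could be sketched as follows; `isConnectionError`, `minRetryDelay`, and the matched message substrings are assumptions for illustration:

```javascript
// Hypothetical sketch: classify an RPC failure and derive a minimum
// retry delay. Onfinality's public buckets refill per minute, so a
// rate-limited endpoint should not be retried within the same minute.
const TOKEN_REFILL_MS = 60 * 1000

function isConnectionError(error) {
  const message = String((error && error.message) || error).toLowerCase()

  return (
    message.includes('connection') ||
    message.includes('too many requests') || // HTTP 429 rate limiting
    message.includes('429')
  )
}

function minRetryDelay(error) {
  // Rate-limited endpoints only recover once the token bucket refills,
  // so retrying after a few seconds is pointless
  const message = String((error && error.message) || error).toLowerCase()

  return /too many requests|429/.test(message) ? TOKEN_REFILL_MS : 0
}
```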
About #4227, #4226:
https://github.com/GoodDollar/GoodDAPP/issues/4227
https://github.com/GoodDollar/GoodDAPP/issues/4226
How Has This Been Tested?
Checklist: