Closed schelv closed 5 months ago
Attention: 32 lines in your changes are missing coverage. Please review.
Comparison is base (4406cc0) 35.23% compared to head (a76f8ef) 35.25%. Report is 1 commit behind head on master.
| Files | Patch % | Lines |
|---|---|---|
| githubkit/core.py | 42.85% | 16 Missing :warning: |
| githubkit/retry.py | 63.88% | 13 Missing :warning: |
| githubkit/config.py | 66.66% | 3 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
It seems the concurrency restriction should consider both sync/async usage and multi-instance usage. The concurrency-restriction mechanism may need an external service like Redis. For now, I would like to add simple retry logic for when the rate limit is exceeded or a server error is encountered.
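The simple retry logic described above might look roughly like the following sketch. It is illustrative only (the `send` callable and `RETRY_STATUS` set are assumptions, not githubkit's actual API); it retries on rate-limit and server-error status codes with exponential backoff plus jitter:

```python
import random
import time
from typing import Callable

# Status codes worth retrying: rate limits (403/429) and server errors.
RETRY_STATUS = {403, 429, 500, 502, 503, 504}

def request_with_retry(
    send: Callable[[], int],
    max_retries: int = 3,
    base_delay: float = 1.0,
) -> int:
    """Call `send` (which returns an HTTP status code) and retry on
    rate-limit / server-error responses, with exponential backoff.

    Illustrative sketch only, not githubkit's implementation.
    """
    for attempt in range(max_retries + 1):
        status = send()
        if status not in RETRY_STATUS or attempt == max_retries:
            return status
        # Exponential backoff with a little jitter to avoid thundering herds.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    return status
```

A real implementation would additionally honor the `Retry-After` and `x-ratelimit-reset` headers GitHub returns, rather than a fixed backoff schedule.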
Considering both sync and async usage should be possible; I will try to add that. Taking multi-instance usage into account at the mechanism level seems a lot more complex. A simplistic way to "consider multi-instance usage" without hitting too many rate limits is to limit the number of concurrent connections for each instance. Alternatively, the number of concurrent connections could be lowered when approaching the primary rate limit, leaving some requests for the other instances. But I'm not sure this is something that should be handled in this mechanism.
The benefit also depends a lot on what each of the instances is doing with the API. Do you have a scenario/use case in mind with multiple instances?
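The per-instance concurrency cap described above could be sketched with an `asyncio.Semaphore`; the `ConcurrencyLimiter` name and `run` wrapper below are hypothetical, just to show the shape of the idea:

```python
import asyncio

class ConcurrencyLimiter:
    """Cap the number of in-flight requests for one client instance.

    Giving each instance a fixed share of the budget is the crude
    multi-instance strategy discussed above. Illustrative sketch only.
    """

    def __init__(self, max_concurrent: int) -> None:
        self._sem = asyncio.Semaphore(max_concurrent)
        self._active = 0
        self.peak = 0  # highest observed concurrency, for demonstration

    async def run(self, coro_fn):
        async with self._sem:
            self._active += 1
            self.peak = max(self.peak, self._active)
            try:
                return await coro_fn()
            finally:
                self._active -= 1

async def demo() -> int:
    limiter = ConcurrencyLimiter(max_concurrent=2)

    async def fake_request():
        await asyncio.sleep(0.01)  # stand-in for an HTTP call
        return 200

    # Launch 10 requests; the limiter should keep at most 2 in flight.
    await asyncio.gather(*(limiter.run(fake_request) for _ in range(10)))
    return limiter.peak
```

The sync side would need an equivalent built on `threading.Semaphore`, which is part of why supporting both modes in one mechanism is fiddly.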
Multiple instances are commonly used in multi-process mode or cluster mode. octokit also uses Redis to schedule requests in cluster mode. Maybe an abstract storage layer should be implemented to cache info and restrict concurrency. I will try to implement an in-memory storage and a Redis one.
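The abstract storage layer mentioned above might take a shape like this; the class and method names are hypothetical. A Redis backend would implement the same interface with atomic `INCR`/`DECR` so that multiple processes share one counter:

```python
import abc

class ConcurrencyStorage(abc.ABC):
    """Hypothetical storage interface for sharing concurrency state.

    An in-memory backend covers a single process; a Redis backend
    implementing the same two methods would cover cluster mode.
    """

    @abc.abstractmethod
    def acquire(self) -> bool:
        """Try to claim one request slot; return False if at the cap."""

    @abc.abstractmethod
    def release(self) -> None:
        """Give the slot back after the request finishes."""

class InMemoryStorage(ConcurrencyStorage):
    """Single-process backend: a plain counter with a cap."""

    def __init__(self, max_concurrent: int) -> None:
        self._max = max_concurrent
        self._count = 0

    def acquire(self) -> bool:
        if self._count >= self._max:
            return False
        self._count += 1
        return True

    def release(self) -> None:
        self._count -= 1
```

A production version would also need expiry on the shared counter (e.g. Redis `EXPIRE`) so a crashed worker cannot leak slots forever.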
The advised "best practice" is to not do anything concurrently at all. Is that something that you want to have? It sounds very very slow.
octokit uses the bottleneck library to schedule request jobs. You can see the rate-limit logic in that library.
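For reference, the core idea behind a scheduler like bottleneck can be reduced to a token bucket: each request consumes a token, and tokens refill at a fixed rate. This is a minimal sketch of that idea, not bottleneck's actual implementation:

```python
import time

class TokenBucket:
    """Minimal token bucket: requests spend tokens, tokens refill over time.

    Sketch of the scheduling idea only; bottleneck adds queueing,
    priorities, and (in cluster mode) Redis-backed shared state.
    """

    def __init__(self, capacity: int, refill_per_sec: float) -> None:
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        # Refill tokens proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```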
Due to the complexity of this feature, I'm going to split it into two PRs. In this PR, rate-limit auto-retry will be implemented; the concurrency limit will be implemented in the next version.
This PR introduces a mechanism that limits the number of concurrent requests. Additionally, if a `RateLimitExceeded` response is encountered, new requests are not started for a while. I'm not sure if halting new requests is really needed, but it seems like the right thing to do. The print statements are currently there to show that the mechanism works. Should they be replaced with a logger, or removed entirely?
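The "don't start new requests for a while" behavior can be sketched as a gate that closes when a rate-limit response arrives and reopens after a cooldown. The `RetryGate` name is hypothetical and this is not the PR's actual code, just the shape of the idea:

```python
import asyncio

class RetryGate:
    """Pause new requests after a RateLimitExceeded response.

    A request that hits the limit calls `block_for(seconds)`; other
    coroutines await `wait_open()` before sending, so no new requests
    start until the cooldown elapses. Illustrative sketch only.
    """

    def __init__(self) -> None:
        self._open = asyncio.Event()
        self._open.set()  # gate starts open

    async def wait_open(self) -> None:
        await self._open.wait()

    def block_for(self, seconds: float) -> None:
        if self._open.is_set():
            self._open.clear()
            # Reopen the gate after the cooldown.
            asyncio.get_running_loop().call_later(seconds, self._open.set)

async def demo() -> bool:
    gate = RetryGate()
    gate.block_for(0.05)
    start = asyncio.get_running_loop().time()
    await gate.wait_open()  # should block until the gate reopens
    return asyncio.get_running_loop().time() - start >= 0.04
```

In a real client the cooldown would come from the `Retry-After` / `x-ratelimit-reset` response headers rather than a hard-coded value.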
Related to #66