vutran1710 / PyrateLimiter

⚔️Python Rate-Limiter using Leaky-Bucket Algorithm Family
https://pyratelimiter.readthedocs.io
MIT License
336 stars 36 forks

Feature request: rate limit exceptions #88

Closed Josha91 closed 1 year ago

Josha91 commented 1 year ago

Hi, I am using pyrate-limiter to limit GitHub API calls. However, if the GitHub API returns a 304 (Not Modified), that response does not count toward GitHub's rate limit. More generally, it would be nice to be able to specify a condition under which the rate limit is not incremented.

In my own project I'm working around this with an extension of MemoryQueueBucket that adds a pop method, so that when GitHub gives me a 304 I just pop the last item off the queue.
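A minimal sketch of that workaround pattern, using a plain list-based bucket rather than pyrate-limiter's real MemoryQueueBucket internals (which may differ); the class and method names here are illustrative, not the library's API:

```python
import time


class PoppableBucket:
    """Illustrative in-memory bucket: stores request timestamps
    and allows the most recent one to be popped back off."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self._items = []

    def put(self, item):
        """Record a request timestamp; return 1 on success, 0 if the bucket is full."""
        if len(self._items) >= self.maxsize:
            return 0
        self._items.append(item)
        return 1

    def size(self):
        return len(self._items)

    def pop(self):
        """Un-count the most recent request, e.g. after a 304 response."""
        if self._items:
            return self._items.pop()
        return None


bucket = PoppableBucket(maxsize=2)
bucket.put(time.monotonic())
bucket.put(time.monotonic())
bucket.pop()          # response was a 304: give the slot back
print(bucket.size())  # → 1
```

The idea is simply that a 304 "refunds" the slot it consumed, so the effective rate limit tracks only the responses GitHub actually counts.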

Please let me know if this would be a possibility, and thanks in advance!

vutran1710 commented 1 year ago

Pyrate-limiter deals only with the "rate limit" matter - the API thing is just one of its applications. It is up to you to decide how you want to make it work. In your case, it's your responsibility to deal with the HTTP status codes. Pyrate-limiter can't help you there because it's really just a lower-level mechanism that supports your API/request-handling logic.

JWCook commented 1 year ago

I have made a small wrapper library to integrate pyrate-limiter + requests: https://github.com/JWCook/requests-ratelimiter

304 handling isn't implemented yet, but there is an open issue for that: https://github.com/JWCook/requests-ratelimiter/issues/5

You can comment there if you have more thoughts on how you'd like that to work.

> In my own project I'm working around this with an extension of MemoryQueueBucket that adds a pop method, so that when GitHub gives me a 304 I just pop the last item off the queue.

Yeah, that's probably the best solution for now.

The only problem is that with multiple concurrent requests, the item popped off the queue may not be the one added for that specific request. A solution would be to remove the item by key (timestamp), but the AbstractBucket interface doesn't currently support that. That may not be a problem for your use case, though.
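One way to sketch that remove-by-key idea, assuming each request is handed back a key when it enters the bucket and removes exactly that entry afterwards (this is not part of the AbstractBucket interface today; all names here are illustrative):

```python
import itertools
import threading
import time


class KeyedBucket:
    """Illustrative bucket keyed per request, so each concurrent
    request can remove exactly the entry it added."""

    def __init__(self):
        self._items = {}                   # key -> request timestamp
        self._lock = threading.Lock()
        self._counter = itertools.count()  # unique key per request

    def put(self):
        """Add an entry and return its key so the caller can remove it later."""
        key = next(self._counter)
        with self._lock:
            self._items[key] = time.monotonic()
        return key

    def remove(self, key):
        """Remove the entry for this specific request, e.g. after a 304.
        Returns the stored timestamp, or None if the key is gone."""
        with self._lock:
            return self._items.pop(key, None)

    def size(self):
        with self._lock:
            return len(self._items)


bucket = KeyedBucket()
k1 = bucket.put()
k2 = bucket.put()
bucket.remove(k1)     # removes request 1's own entry, not the most recent one
print(bucket.size())  # → 1
```

Unlike pop-the-last-item, this stays correct when several requests are in flight at once, since each one only ever touches its own entry.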