vutran1710 / PyrateLimiter

⚔️Python Rate-Limiter using Leaky-Bucket Algorithm Family
https://pyratelimiter.readthedocs.io
MIT License
338 stars, 36 forks

Seeing really high seconds remaining #45

Closed: yawhide closed this issue 2 years ago

yawhide commented 2 years ago

Hello again,

Using 2.3.6, I am seeing some really high delay times. Here is some debug info from the logs:

# err.meta_info
{"message": "{'error': 'Bucket for _get_transfers with Rate 15/1 is already full', 'identity': '_get_transfers', 'rate': '15/1', 'remaining_time': 1609378791.1727977}", "time": "2021-09-28T20:27:05.257042"}
# the other log you get when a limit is reached
{"message": "Rate limit reached; 1609378791.1727977 seconds remaining before next request", "time": "2021-09-28T20:27:05.257395"}

what is the best approach to resolve this when I am using freezegun's freeze_time decorator in my test cases?

EDIT: this is really weird. so i have a series of test cases. some test cases had the freezegun freeze_time but those were not the tests that caused the problem. it was the later test cases where i saw this crazy log.

the second thing that is weird is, i create a new instance of a class that has a limiter as an instance variable. is the library somehow persisting my rate limit in memory even though i create new instances of my class? You can see in the first log, i have an identity set. But since I am using an in memory bucket, i assumed creating two instances of limiter would use separate rate limit buckets. My hunch is yes, the library has a global cache. Is there a method to reset this cache (or in memory bucket) after/before each test case?
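(Editor's note on the reset question: whether or not the library keeps any global state, the simplest way to guarantee per-test isolation is to construct a fresh limiter in each test's setup. A generic unittest-style sketch with a hypothetical stand-in class, not PyrateLimiter's actual API:)

```python
import unittest

class FakeLimiter:
    """Hypothetical stand-in for the real limiter; only the shape matters here."""
    def __init__(self):
        self.bucket = []  # a fresh in-memory bucket per instance

class TransferTests(unittest.TestCase):
    def setUp(self):
        # Build a brand-new limiter (and thus a brand-new bucket) before every
        # test case, so no rate-limit state can leak from one test to the next.
        self.limiter = FakeLimiter()

    def test_fills_bucket(self):
        self.limiter.bucket.append("request")
        self.assertEqual(len(self.limiter.bucket), 1)

    def test_starts_empty(self):
        # setUp rebuilt the limiter, so the previous test's item is gone.
        self.assertEqual(self.limiter.bucket, [])
```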

JWCook commented 2 years ago

Hmm, this could be related to my changes for #40, or it could be something specific to your tests. Do you get similar results with 2.3.4?

And are you by chance using a mock date of December 31, 2020? That's what 1609378791 is (plus some change), in Unix time. If freezegun mocked the time, an item was added to the bucket, and then that item persisted to the next test that didn't have a mocked time, then you could get a weird result like that.
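(Editor's note: that reading of the number is easy to check, assuming it is a Unix timestamp in UTC:)

```python
from datetime import datetime, timezone

# The "remaining_time" value from the log, read as a Unix timestamp.
ts = 1609378791.1727977
print(datetime.fromtimestamp(ts, tz=timezone.utc).date())  # → 2020-12-31
```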

There's no caching going on under the hood, it's just a queue.Queue: https://github.com/vutran1710/PyrateLimiter/blob/25cb5945a51e767880bb3cef7f5655ca65269450/pyrate_limiter/bucket.py#L59-L83

Creating a new Limiter will create a new bucket object, so I don't think it's possible to persist bucket data to a new object unless you explicitly copied it. Or there could be a bug there that I'm just not seeing right now. Either way I think we'll need to see a full code example to figure this one out.
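(Editor's note: one way mixed clocks can produce a huge remaining time can be sketched with a deliberately simplified stand-in bucket, not PyrateLimiter's actual code. Here an item is stamped under the real clock and then checked under a clock frozen in the past; the timestamps `REAL` and `FROZEN` are assumed values chosen to match the dates in this thread:)

```python
import time

class TinyBucket:
    """Minimal leaky-bucket stand-in (NOT PyrateLimiter's actual code)."""
    def __init__(self, rate, interval, clock=time.time):
        self.rate, self.interval, self.clock = rate, interval, clock
        self.items = []  # timestamps of recent acquisitions

    def try_acquire(self):
        now = self.clock()
        # Leak: drop timestamps older than one interval.
        self.items = [t for t in self.items if now - t < self.interval]
        if len(self.items) >= self.rate:
            remaining = self.interval - (now - self.items[0])
            raise RuntimeError(f"{remaining:.0f} seconds remaining")
        self.items.append(now)

REAL = 1632860825.0    # ~2021-09-28 UTC, when the tests actually ran
FROZEN = 1609459200.0  # 2021-01-01 UTC, the mocked date

bucket = TinyBucket(rate=1, interval=1, clock=lambda: REAL)
bucket.try_acquire()             # item stamped with the real clock
bucket.clock = lambda: FROZEN    # a later check happens under a frozen, past clock
try:
    bucket.try_acquire()
except RuntimeError as exc:
    print(exc)                   # → 23401626 seconds remaining
```

With the clocks swapped (item stamped under the frozen clock, checked later under the real one) the error goes the other way, but either way, mixing clock sources makes `remaining_time` meaningless.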

yawhide commented 2 years ago


i used jan 1 2021 as the mocked date. might be an issue with how I set up my test cases. i removed the usage of freezegun and the issue went away.

JWCook commented 2 years ago

Maybe give time-machine a try? It's similar to freezegun but with a slightly different mocking mechanism. Or maybe try using freezegun.freeze_time/start/stop instead of the decorator and see if that changes anything?
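(Editor's note: whichever tool is used, the failure mode in this issue disappears as long as every time mock is stopped before the next test runs. The start/stop pattern can be sketched with the stdlib alone, patching `time.time` via `unittest.mock` as a stand-in for freezegun; the `FROZEN` value is an assumed mock date:)

```python
import time
from unittest import mock

FROZEN = 1609459200.0  # 2021-01-01 00:00:00 UTC, standing in for the mocked date

patcher = mock.patch("time.time", return_value=FROZEN)
patcher.start()
assert time.time() == FROZEN   # everything reading time.time() sees the frozen clock
patcher.stop()                 # critical: restore the real clock before the next test
assert time.time() > FROZEN    # back to real wall-clock time (later than Jan 2021)
```

freezegun's `start()`/`stop()` (or the context-manager form) follows the same shape; the bug in this thread is exactly what happens when the `stop()` step is skipped or runs too late.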