vutran1710 / PyrateLimiter

⚔️Python Rate-Limiter using Leaky-Bucket Algorithm Family
https://pyratelimiter.readthedocs.io
MIT License

Creating multiple limiter objects reuses the same bucket_group dictionary and does not create multiple Redis buckets #32

Closed: andreicha closed this 3 years ago

andreicha commented 3 years ago

Hey there.

I think you have a bug in your code. Please consider the following scenario:

```python
rate = RequestRate(1, 1 * Duration.SECOND)

limiter1 = Limiter(
    rate,
    bucket_class=RedisBucket,
    bucket_kwargs={"redis_pool": pool, "bucket_name": "awesome-bucket-1"},
)

limiter2 = Limiter(
    rate,
    bucket_class=RedisBucket,
    bucket_kwargs={"redis_pool": pool, "bucket_name": "awesome-bucket-2"},
)

item = "redis-test_item"

limiter1.try_acquire(item)
limiter2.try_acquire(item)
```

I set the rate to 1 per second and create two limiters with different bucket names. Then I send the same item to both limiters. This should work, but the second call, on limiter2, raises a BucketFullException.

This behaviour is consistent whether using MemoryQueueBucket, fakeredis, or a real Redis instance.
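The backend-independence makes sense, because the problem is a general Python pitfall: a mutable attribute declared at class level is shared by every instance. Here is a minimal toy sketch of that pitfall (a stand-in class, not PyrateLimiter's actual code):

```python
# Toy illustration of the mutable-class-attribute pitfall.
# `bucket_group` lives on the class, so every instance mutates
# the same dictionary.
class LimiterSketch:
    bucket_group = {}  # class attribute: one dict shared by all instances

    def add_bucket(self, name):
        # self.bucket_group resolves to the class-level dict
        self.bucket_group.setdefault(name, "bucket")

a = LimiterSketch()
b = LimiterSketch()
a.add_bucket("awesome-bucket-1")

# b never added a bucket, yet it sees the one a created:
print("awesome-bucket-1" in b.bucket_group)  # True
print(a.bucket_group is b.bucket_group)      # True
```

This is exactly why the second limiter finds the item the first limiter already recorded.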

I identified the cause of the error, and there is a simple fix.

The problem is in your Limiter class:

```python
class Limiter:
    """Basic rate-limiter class that makes use of built-in python Queue"""

    bucket_group: Dict[str, AbstractBucket] = {}

    def __init__(
```

Since bucket_group is declared and instantiated at class level, outside of __init__, the same dictionary is shared by every Limiter instance.

A simple fix is to initialize bucket_group inside __init__, like so:

```python
def __init__(
    self,
    *rates: RequestRate,
    bucket_class: Type[AbstractBucket] = MemoryQueueBucket,
    bucket_kwargs=None,
):
    ...
    self.bucket_group = {}  # <-- Fixes the issue

There is also a workaround: once a Limiter object is created, assigning limiter.bucket_group = {} on the instance gives it its own dictionary.
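Sketched with a toy stand-in class (not PyrateLimiter's actual code), the workaround relies on the fact that assigning to an attribute on an instance creates an instance attribute that shadows the shared class attribute:

```python
# Why the workaround works: instance assignment shadows the class attribute.
class LimiterSketch:
    bucket_group = {}  # shared class attribute (the bug)

a = LimiterSketch()
a.bucket_group = {}  # instance attribute, private to `a`
b = LimiterSketch()
b.bucket_group = {}  # instance attribute, private to `b`

a.bucket_group["awesome-bucket-1"] = "bucket"
print("awesome-bucket-1" in b.bucket_group)  # False: no longer shared
```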

Sincerely, Andrei

vutran1710 commented 3 years ago

Oh yeah, much thanks for your help! That's why we do open source, right? I'm gonna get on this soon!

andreicha commented 3 years ago

Happy to help! :)

vutran1710 commented 3 years ago

Fixed in 2.3.4

andreicha commented 3 years ago

Thank you!