Add the in-memory rate limiter.
The rate limiter state should be checked and updated before any other action such as route recognition; it should be the very first step in the request processing flow.
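A rough wiring sketch of that ordering (everything below is illustrative; `Req`, `Res` and the handler names are placeholders I made up, not existing panda types):

```java
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical wiring: the rate-limiter check runs before route recognition does any work.
final class RateLimitingHandler<Req, Res> {

    private final Predicate<String> tryConsume;       // rate limiter keyed by client IP
    private final Function<Req, String> clientIp;     // extracts the caller's IP from the request
    private final Function<Req, Res> routeAndProxy;   // existing route recognition + proxying
    private final Function<Req, Res> tooManyRequests; // builds an HTTP 429 response

    RateLimitingHandler(Predicate<String> tryConsume,
                        Function<Req, String> clientIp,
                        Function<Req, Res> routeAndProxy,
                        Function<Req, Res> tooManyRequests) {
        this.tryConsume = tryConsume;
        this.clientIp = clientIp;
        this.routeAndProxy = routeAndProxy;
        this.tooManyRequests = tooManyRequests;
    }

    Res handle(Req request) {
        // Very first step: check/update the rate limiter state before anything else happens.
        if (!tryConsume.test(clientIp.apply(request))) {
            return tooManyRequests.apply(request);
        }
        return routeAndProxy.apply(request);
    }
}
```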
The rate limiter should identify the client by IP address, the same approach as in HashLoadBalancerImpl.
This approach assumes that the load balancer standing in front of the panda instances uses a hash-based algorithm; we already made this assumption while implementing HashLoadBalancerImpl. In the future, the rate limiter state should be moved to MongoDB (probably not the best idea) or to an external cache (e.g. Redis) in order to drop the hash-based load balancer requirement when 'panda' sits behind a different algorithm, e.g. Round Robin. Keep this in mind while designing the service API: it should fit both the in-memory rate limiter and the distributed one we will implement in the future.
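One possible shape for that API, so the in-memory implementation and a future distributed one (e.g. Redis-backed) can sit behind the same contract; the names here are only a proposal:

```java
import java.util.concurrent.CompletableFuture;

// Proposed contract, not existing panda code. Returning a future keeps the door open
// for a distributed (e.g. Redis-backed) implementation without changing callers;
// the in-memory implementation can simply return an already-completed future.
public interface RateLimiter {

    /**
     * Registers one request for the given client (today: the client IP,
     * mirroring HashLoadBalancerImpl) and tells whether it is allowed.
     */
    CompletableFuture<Boolean> tryConsume(String clientId);
}
```

Whether the contract ends up synchronous or asynchronous is an open design choice; the important part is that callers only pass a client identifier and never see the storage behind it.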
I'm not forcing any particular rate-limiting algorithm; let's have fun and implement something demanding. However, a bucket-based rate limiter may be a good starting point...
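For reference, a simple in-memory token bucket keyed by client IP could look roughly like this (capacity and refill rate are example knobs, not decisions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: one token bucket per client IP, refilled lazily on each check.
public final class InMemoryTokenBucketRateLimiter {

    private static final class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens, long lastRefillNanos) {
            this.tokens = tokens;
            this.lastRefillNanos = lastRefillNanos;
        }
    }

    private final double capacity;        // maximum burst size
    private final double refillPerSecond; // sustained allowed request rate
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public InMemoryTokenBucketRateLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    /** Returns true if the client may proceed, false if it is currently over its limit. */
    public boolean tryConsume(String clientIp) {
        long now = System.nanoTime();
        Bucket bucket = buckets.computeIfAbsent(clientIp, ip -> new Bucket(capacity, now));
        synchronized (bucket) {
            // Refill proportionally to the time elapsed since the last check, capped at capacity.
            double elapsedSeconds = (now - bucket.lastRefillNanos) / 1_000_000_000.0;
            bucket.tokens = Math.min(capacity, bucket.tokens + elapsedSeconds * refillPerSecond);
            bucket.lastRefillNanos = now;
            if (bucket.tokens >= 1.0) {
                bucket.tokens -= 1.0; // spend one token for this request
                return true;
            }
            return false;
        }
    }
}
```

Adapting this to whatever API shape we settle on is straightforward, e.g. wrapping the boolean result in CompletableFuture.completedFuture(...).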
Consider creating a new module, as this functionality should be as independent as possible and there can be multiple rate limiter implementations.
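A possible split, purely as a suggestion (module names are made up):

```
panda-ratelimiter-api        // the RateLimiter contract only, no storage details
panda-ratelimiter-inmemory   // the token-bucket (or other) in-memory implementation
panda-ratelimiter-redis      // future distributed implementation
```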