Open codeasashu opened 2 months ago
To prevent further questions on why I am not using nginx's built-in rate limiting capability: I can't.
Imagine my application is running on EC2 instances in autoscaling groups, each with its own nginx. Even if I use nginx's built-in rate limiter, it suffers from the following drawback: each instance enforces limits against its own local counters, so there is no globally consistent limit across the group, and coordinating those counters would require implementing atomicity and consensus myself.
Hence, to save people the effort of implementing atomicity and consensus themselves, DiceDB can support it for this command from the beginning. People can then plug DiceDB directly into thousands of nginx instances without having to worry about atomicity, etc.
@arpitbbhayani Irrespective of the rate-limiter idea, I want to work on implementing consensus for this. One thing is most definitely happening: multiple instances of DiceDB are going to run anyway, sometimes in a sharding scheme, sometimes in replication mode.
When used in replication mode, all nodes need to agree on a written value. This would allow DiceDB to be used in many forms. I am interested in implementing a simple consensus protocol. What do you say?
@codeasashu @arpitbbhayani I would also like to contribute to it.
One key aspect I want to see in Dice is rate limiting. Since I know this is likely going to get rejected, I want to put it out anyway. My use case is designing an open-source rate-limiter service built on a cache backend. There are tons of rate limiters out there; however, they are mostly tied to a particular API gateway, so we developers end up dependent on one. Pluggable rate limiters exist, but they mostly work at the application level (which usually sits behind a load balancer), voiding the benefits of the rate limiter itself.
Feature Description
Add an `RTL` command to DiceDB to support a basic rate limiter. The basic idea is to tie a user-defined identity (e.g. an API key) to a rate-limiting policy using a chosen algorithm. Each of the components is described below.
Components
1. ALGO
This can be one of many supported rate-limiting algorithms (e.g. token bucket, leaky bucket, sliding window, etc.). For now, we can start with the popular token bucket, with future support for more such algorithms.
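To make the starting point concrete, here is a minimal sketch of the token-bucket algorithm: tokens refill at a fixed rate up to a capacity (the burst size), and each admitted request consumes one token. All names here are illustrative, not DiceDB internals.

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket holds the per-key state of a token-bucket limiter.
type tokenBucket struct {
	capacity   float64   // maximum tokens (burst size)
	refillRate float64   // tokens added per second
	tokens     float64   // current token count
	lastRefill time.Time // last time tokens were added
}

func newTokenBucket(capacity, refillRate float64) *tokenBucket {
	return &tokenBucket{
		capacity:   capacity,
		refillRate: refillRate,
		tokens:     capacity, // start full
		lastRefill: time.Now(),
	}
}

// allow refills the bucket based on elapsed time, then tries to
// consume one token; it returns true if the request is admitted.
func (b *tokenBucket) allow() bool {
	now := time.Now()
	elapsed := now.Sub(b.lastRefill).Seconds()
	b.lastRefill = now
	b.tokens += elapsed * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newTokenBucket(2, 1) // burst of 2, refills 1 token/second
	fmt.Println(b.allow(), b.allow(), b.allow()) // third call is denied
}
```

In DiceDB this state would live per identifier, and the refill would be computed lazily on each check rather than with a background timer, as shown here.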
2. POLICY
The limiter policy to be applied to the given key. These are synonymous with "Usage plans" in API gateways. The policy indicates the rate-limiting algorithm (token bucket, for instance) along with the rate parameters.
A simple example: a policy can be stringified to `2/s` or `1000/d`, assuming `token_bucket` is the default algorithm and using the abbreviations `s` = seconds, `m` = minutes, `h` = hours, `d` = days.
3. IDENTIFIER
The identifier is the key component of this application. A rate limiter is not necessarily meant only for HTTP requests (as is the assumption these days). I am more inclined towards "controlling the rate of packet flow". For instance, I can use it to control the rate of SIP packets flowing into my system by feeding the SIP identifier to this limiter. Similarly, if I can extract an identifier from any protocol speaking over TCP, I can have it limited. It is up to the user to choose the identifier. For brevity, I will keep my examples limited to HTTP so this makes more sense, but the idea can be generalised to almost any TCP-speaking protocol.
The identifier is composed of 3 parts for now: `<protocol>:<identifier name>:<identifier value>`. For HTTP, `protocol=http` works. DiceDB can assume how `identifier name` and `identifier value` are pulled, based on the `protocol`.
Command usage
Assuming my API key header carries `abcdef`, and the `Content-Type` is `application/json`:
Limit this API key to 100 requests per day:
Limit this API key to 1000 requests per hour, with a burst limit of 100,000 per day:
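A hypothetical sketch of what these two commands might look like, assuming an `RTL SET` subcommand and an `X-API-Key` header name (both are my own assumptions; the proposal does not fix the exact syntax):

```
RTL SET http:X-API-Key:abcdef 100/d
RTL SET http:X-API-Key:abcdef 1000/h 100000/d
RTL SET http:Content-Type:application/json 100/d
```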
If the same API key is tied to multiple policies, the latest one overwrites the previous.
If multiple limits exist, multiple policies will be applied (i.e. the one which empties its bucket first wins). In the following example, all requests carrying the JSON content type will be limited to `100/d`. A request carrying both the matching API key and the JSON content type will exhaust the bucket (assuming the token-bucket algorithm) and will be limited if more than 100 requests land within a second.
Client usage
The policies, once set, can easily be used at the application level. A basic usage can be:
This will respond with `1` or `0`: `1` means the request is allowed, `0` means otherwise.
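A sketch of how an application could act on that reply. Since the exact client API is not specified in the proposal, the reply is passed in directly here; in a real deployment it would come from an RTL check command sent to DiceDB:

```go
package main

import "fmt"

// allowRequest interprets the rate-limiter reply described above:
// "1" means the request may proceed, "0" means it is limited.
func allowRequest(reply string) bool {
	return reply == "1"
}

// handle maps the limiter's decision onto an HTTP status line,
// the typical application-level response to being rate limited.
func handle(reply string) string {
	if allowRequest(reply) {
		return "200 OK"
	}
	return "429 Too Many Requests"
}

func main() {
	fmt.Println(handle("1")) // 200 OK
	fmt.Println(handle("0")) // 429 Too Many Requests
}
```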
Key aspects
1. Data storage
We can store the keys and values in binary form for better space and parsing efficiency.
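As a sketch of the idea, a bucket's state can be packed into a fixed-size binary record with Go's `encoding/binary`, instead of a stringified form. The `bucketState` layout is hypothetical; the point is that each entry stays a known 16 bytes and needs no parsing on the hot path:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// bucketState is a hypothetical fixed-size record for one
// rate-limiter bucket: 8 bytes of tokens + 8 bytes of timestamp.
type bucketState struct {
	Tokens     uint64 // remaining tokens
	LastRefill int64  // unix timestamp of the last refill
}

// encode serialises the state to 16 little-endian bytes.
func encode(s bucketState) ([]byte, error) {
	var buf bytes.Buffer
	if err := binary.Write(&buf, binary.LittleEndian, s); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// decode reverses encode.
func decode(b []byte) (bucketState, error) {
	var s bucketState
	err := binary.Read(bytes.NewReader(b), binary.LittleEndian, &s)
	return s, err
}

func main() {
	raw, _ := encode(bucketState{Tokens: 99, LastRefill: 1700000000})
	fmt.Println(len(raw)) // 16
}
```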
2. Consensus protocol
If this works out, then since this project is based on a shared-nothing architecture, we can use some form of consensus protocol (e.g. Raft, Paxos) to scale the rate limiter to any level. Go has quite good support for this, thanks to the HashiCorp folks.