DiceDB / dice

DiceDB is a Redis-compliant, reactive, scalable, highly-available, unified cache optimized for modern hardware.
https://dicedb.io/

Feat: Adding rate limiter command #519

Open codeasashu opened 2 months ago

codeasashu commented 2 months ago

One key aspect I want to see in Dice is rate limiting. Since I know this is likely going to get rejected, I want to put it out anyway. My use case is designing an open-source rate limiter service with a built-in cache backend. There are tons of rate limiters out there; however, they are mostly tied to a particular API gateway, so we developers end up depending on one. Pluggable rate limiters exist, but they mostly work at the application level (which usually sits behind a load balancer), voiding the benefits of the rate limiter itself.

Feature Description

Add a "RTL" command in DiceDB, to support a basic rate limiter. The basic idea is to tie a user defined identity (eg: api key) with a rate limiting policy using certain algorithm. Each of the components are described below.

Components

1. ALGO

This can be one of the many supported rate-limiting algorithms (e.g. token bucket, leaky bucket, sliding window, etc.). For now, we can start with the popular token bucket, with future support for more such algorithms.
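For illustration, a minimal token-bucket sketch in Go; the lazy-refill strategy and field names are assumptions for this sketch, not DiceDB internals:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// bucket holds the state of one token bucket. Refill is computed
// lazily from elapsed time, so no background timer is needed.
type bucket struct {
	capacity   float64   // maximum tokens (burst size)
	tokens     float64   // tokens currently available
	refillRate float64   // tokens added per second (the allowed rate)
	lastRefill time.Time // when tokens were last recomputed
}

// allow consumes one token if available and reports whether the
// request should pass.
func (b *bucket) allow(now time.Time) bool {
	elapsed := now.Sub(b.lastRefill).Seconds()
	b.tokens = math.Min(b.capacity, b.tokens+elapsed*b.refillRate)
	b.lastRefill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// A "100/s" policy: refill 100 tokens per second, burst of 100.
	b := &bucket{capacity: 100, tokens: 100, refillRate: 100, lastRefill: time.Now()}
	fmt.Println(b.allow(time.Now())) // true while tokens remain
}
```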

2. POLICY

The limiter policy to be applied to the given key. These are synonymous with "usage plans" in API gateway terms. The policy will indicate the rate-limiting algorithm (token bucket, for instance), along with the rate parameters. A simple example:

policy_1:
  algorithm: token_bucket 
  allowed_rate:  # allows 100 req/second using token bucket algo
    value: 100
    unit: second # An enum (allowed values: second, minute, hour, day)
  burst_rate:  # optional for now
    value: 150
    unit: second
  identifier: api_key

A much simpler version of a policy can be stringified as 2/s or 1000/d, assuming token_bucket is the default algorithm, using the abbreviations: s = seconds, m = minutes, h = hours, d = days.
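A sketch of how such shorthand strings could be parsed, under the abbreviations above (parseRate is a hypothetical helper, not an existing API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// unitDurations maps the shorthand suffixes to their window length.
var unitDurations = map[string]time.Duration{
	"s": time.Second,
	"m": time.Minute,
	"h": time.Hour,
	"d": 24 * time.Hour,
}

// parseRate turns a shorthand like "100/d" into a count and a window.
func parseRate(s string) (int, time.Duration, error) {
	parts := strings.SplitN(s, "/", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("invalid rate %q", s)
	}
	n, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, 0, fmt.Errorf("invalid count in %q: %w", s, err)
	}
	d, ok := unitDurations[parts[1]]
	if !ok {
		return 0, 0, fmt.Errorf("unknown unit %q", parts[1])
	}
	return n, d, nil
}

func main() {
	n, d, _ := parseRate("1000/d")
	fmt.Println(n, d) // 1000 24h0m0s
}
```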

3. IDENTIFIER

The identifier is the key component of this application. A rate limiter is not necessarily meant only for HTTP requests (as is the common assumption these days). I am more inclined towards "controlling the rate of packet flow". For instance, I can control the rate of SIP packets flowing into my system by feeding the SIP identifier to this limiter. Similarly, any protocol spoken over TCP can be limited, as long as I can extract an identifier from it. It is up to the user to choose the identifier. For brevity, I will keep my examples limited to HTTP so this makes more sense, but the idea can be generalised to almost any TCP-speaking protocol.

The identifier is composed of 3 parts for now: <protocol>:<identifier name>:<identifier value>. For HTTP, protocol=http works. DiceDB can decide how the identifier name and identifier value are extracted based on the protocol.
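Parsing such an identifier is then a plain three-way split; a minimal sketch in Go (splitIdentifier is a hypothetical helper for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// splitIdentifier breaks "http:apiKey:abcdef" into protocol,
// identifier name, and identifier value. SplitN with n=3 keeps
// any ':' inside the value itself intact.
func splitIdentifier(id string) (protocol, name, value string, err error) {
	parts := strings.SplitN(id, ":", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("malformed identifier %q", id)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	p, n, v, _ := splitIdentifier("http:apiKey:abcdef")
	fmt.Println(p, n, v) // http apiKey abcdef
}
```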

Command usage

Assuming my api key header is abcdef and the Content-Type is application/json:

  1. Limit this api key to 100 requests per day:

    RTL http:apiKey:abcdef 100/d
  2. Limit this api key to 1000 requests per hour with a burst limit of 100,000 per day:

    RTL http:apiKey:abcdef 1000/h 100000/d
  3. If an api key is tied to multiple policies, the latest one overwrites the previous.

    RTL http:apiKey:abcdef 100/d
    RTL http:apiKey:abcdef 1000/s
  4. If multiple limits exist, all matching policies will be applied (i.e. the one which empties its bucket first wins). In the following example, all requests carrying the json content type will be limited to 100/day. A request carrying both the matching api key and the json content type will be limited as soon as either bucket empties (assuming the token bucket algo): more than 1000 requests within a second, or more than 100 within a day.

RTL http:Content-Type:application/json 100/d
RTL http:apiKey:abcdef 1000/s

Client usage

Policies, once set, can easily be used at the application level. A basic usage:

RTL LIMIT http:apiKey:abcdef

This will respond with 1 or 0: 1 meaning the request is allowed, 0 meaning it is not.
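Since DiceDB is Redis-compliant, any Redis client's generic command interface could issue this. A sketch using go-redis, assuming DiceDB's default port 7379 (the RTL command itself is what this issue proposes, so the call is hypothetical):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:7379"})

	// Do sends an arbitrary command, which is how a not-yet-standard
	// command like RTL would be invoked from a stock Redis client.
	allowed, err := rdb.Do(ctx, "RTL", "LIMIT", "http:apiKey:abcdef").Int()
	if err != nil {
		panic(err)
	}
	if allowed == 1 {
		fmt.Println("request allowed")
	} else {
		fmt.Println("request rejected")
	}
}
```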

Key aspects

1. Data storage

We can store keys and values in binary form for better efficiency.
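To make this concrete, a token bucket's per-key state could pack into a fixed 16-byte record; the layout below is an illustrative assumption, not a committed format:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
	"time"
)

// encodeBucket packs bucket state into 16 bytes:
// 8 bytes for the remaining tokens (float64 bits) and
// 8 bytes for the last-refill timestamp (unix nanoseconds).
func encodeBucket(tokens float64, lastRefill time.Time) []byte {
	buf := make([]byte, 16)
	binary.BigEndian.PutUint64(buf[0:8], math.Float64bits(tokens))
	binary.BigEndian.PutUint64(buf[8:16], uint64(lastRefill.UnixNano()))
	return buf
}

// decodeBucket is the inverse of encodeBucket.
func decodeBucket(buf []byte) (float64, time.Time) {
	tokens := math.Float64frombits(binary.BigEndian.Uint64(buf[0:8]))
	ts := time.Unix(0, int64(binary.BigEndian.Uint64(buf[8:16])))
	return tokens, ts
}

func main() {
	buf := encodeBucket(99.5, time.Now())
	tokens, ts := decodeBucket(buf)
	fmt.Println(len(buf), tokens, ts) // 16 99.5 <timestamp>
}
```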

2. Consensus protocol

If this works out, and since this project is based on a shared-nothing architecture, we can use some form of consensus protocol (e.g. Raft, Paxos) to scale the rate limiter to any level. Go has quite good support for this, thanks to the HashiCorp libraries.

codeasashu commented 2 months ago

To preempt questions about why I am not using nginx's built-in rate limiting capability: I can't.

Imagine my application is running on EC2 instances in autoscaling groups. Each of these EC2 instances has nginx. Even if I try to use nginx's built-in rate limiter, it suffers from the following drawbacks:

  1. They are consensus-less (each instance won't know the others' limits).
  2. They are non-atomic. Anybody can plug in Redis and use Lua in nginx to build a rate limiter, but it's just hard, since extra care has to be taken to ensure the atomicity of operations. Consensus is still not part of it.

Hence, to save somebody the effort of implementing atomicity and consensus, DiceDB can support both for this command from the beginning. People can then just plug DiceDB directly into 1000s of nginx instances without having to worry about atomicity etc.

codeasashu commented 2 months ago

@arpitbbhayani Irrespective of the rate limiter idea, I want to work on implementing consensus in this project. One thing is most definitely happening: multiple instances of DiceDB are going to run anyway, sometimes in a sharding scheme, sometimes in replication mode.

When used in replication mode, all nodes need to agree on a written value. This will allow DiceDB to be used in many forms. I am interested in implementing a simple consensus protocol. What do you say?

jainbhavya53 commented 1 month ago

@codeasashu @arpitbbhayani I would also like to contribute to it.