Open kchsieh opened 7 years ago
Seems like you misunderstand rate limiting. Your shell command simply issues requests serially, as fast as possible. The number of rejected requests has no direct relationship with your burst setting, because you also have to take into account the speed at which your server serves each request and your client's own overhead.
@agentzh I don't believe the performance of my desktop is the issue. FYI, I am using my desktop as both server and client. Your documentation states:
-- limit the requests under 200 req/sec with a burst of 100 req/sec,
-- that is, we delay requests under 300 req/sec and above 200
-- req/sec, and reject any requests exceeding 300 req/sec.
So my setting of:
local lim, err = limit_req.new("my_limit_req_store", 20, 10)
should just delay the 21st through 30th requests, but that doesn't seem to be the case.
No, your understanding of the request rate is wrong. The rate also has to be honored within any fraction of a second; the one-second time unit is irrelevant to how the rate is enforced.
The rate effectively defines the minimum time interval between every two successive conforming requests. It is not accounted per wall-clock second at all.
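As a concrete model of what "interval-based" means here, the following is a small Python sketch of excess accounting in the style this library's documentation describes. The class and names are illustrative assumptions, not the actual `resty.limit.req` source: each request drains the tracked excess at `rate` per elapsed second, counts itself, and is rejected once the excess would exceed `burst`.

```python
# Simplified sketch of per-interval "excess" accounting in the style of
# a leaky-bucket request limiter. Illustrative model only, NOT the
# actual resty.limit.req implementation.
class LimitReq:
    def __init__(self, rate, burst):
        self.rate = rate      # conforming requests per second
        self.burst = burst    # extra backlog tolerated (delayed, not rejected)
        self.excess = 0.0     # how far we currently are above the rate
        self.last = None      # arrival time of the previous request

    def incoming(self, now):
        """Return ("rejected", None) or ("ok", delay_in_seconds)."""
        if self.last is not None:
            elapsed = now - self.last
            # drain the backlog at `rate` requests per second
            self.excess = max(self.excess - self.rate * elapsed, 0.0)
        self.last = now
        if self.excess + 1.0 > self.burst:
            return ("rejected", None)  # rejected requests are not counted
        self.excess += 1.0
        return ("ok", self.excess / self.rate)
```

Under this model, with `rate=20` and `burst=10`, back-to-back requests at effectively the same timestamp get admitted (with growing delays) only while the backlog stays within the burst, and everything after that is rejected, which is why a tight curl loop sees far more 503s than a per-second reading of the docs would suggest.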
Okay, I am really confused now. Just to be complete, I am pasting the access.log below to show that all 31 hits were registered within the same second:
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
Maybe I should rephrase my original question. I suspect the documentation is stated incorrectly. If you set it to:
local lim, err = limit_req.new("my_limit_req_store", 20, 30)
then the behavior should match what is stated in the documentation: rate=20 and burst=30 means it will only reject requests exceeding a 50 req/sec rate.
I suggest you take a closer look at the leaky bucket algorithm used by both this Lua library and nginx's standard ngx_limit_req module.
@kchsieh The access logs tell us little here, since the library checks at millisecond precision while the timestamps in the access log only have second precision.
@agentzh Just want to confirm whether my understanding of the usage of resty.limit.req is correct. I roughly went through the wiki page on the leaky bucket algorithm and also ngx_limit_req's documentation, and it seems resty.limit.req uses a different method than ngx_limit_req to determine whether a request conforms to the defined limit.

In ngx_limit_req's case, the parameter delay + 1 defines the size of the bucket, and the parameter rate defines the rate at which water leaks from the bucket. New water drops (i.e. requests) are not taken when the bucket is full, i.e. when the bucket already holds delay + 1 water drops.

For the test output here, I'm trying to understand why the second request was rejected. It seems that in resty.limit.req's case there is no bucket size; instead it decides whether to accept a water drop by examining the rate at which it arrives, e.g. by checking the interval since the previous water drop. If the arrival rate exceeds rate + burst, the drop is discarded.

Is this understanding correct? Or is there another way to explain this test output?
When I use the documented burst setting in resty.limit.req, a higher number of requests than expected get rejected with 503. For example, I set the burst to 10 like this:
Then I hit it with:
The result has 19 rejected requests with a 503 status code. Is the burst number documented incorrectly?