openresty / lua-resty-limit-traffic

Lua library for limiting and controlling traffic in OpenResty/ngx_lua

limit traffic based on rolling-counter/sliding-window using ngx.shared.DICT #12

Closed. alonbg closed this issue 1 year ago.

alonbg commented 7 years ago

Thought of something like limit_req.new(<dict_name>, rate, window, resolution), e.g. limit_req.new(<dict_name>, 200, 300, 60). That is, limit the requests to under 200 in a window of 300 seconds, where the window resolution is 60 seconds.
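For illustration, a call site might look like this (the module name, the shared dict "my_limit_req_store" and the incoming() signature are assumptions modeled on the existing resty.limit.req module, not a real API):

```lua
-- Hypothetical usage of the proposed constructor; not an existing API.
local limit_req = require "resty.limit.req_window"   -- hypothetical module name

-- at most 200 requests per 300-second window, counted in 60-second buckets
local lim, err = limit_req.new("my_limit_req_store", 200, 300, 60)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate the limiter: ", err)
    return ngx.exit(500)
end

local delay, err = lim:incoming(ngx.var.binary_remote_addr, true)
if not delay and err == "rejected" then
    return ngx.exit(503)
end
```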

Thought of using FFI, similar to lua-resty-limit-traffic (with the same (small) race-condition window :). The FFI struct would contain a queue, its length (max length == window/resolution), the sum of the queued values, and a 64-bit last timestamp. Each time limit_req.incoming() is called, the queue is adjusted (drop the oldest element if now() - last > resolution and recalculate the sum) and so forth, eventually returning a rejection if sum > rate, or simply incrementing the sum.
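A minimal pure-Lua sketch of that bookkeeping for a single limiter instance (per-key state, the shm-resident FFI struct and the locking are left out; all names are made up):

```lua
-- Rough sketch of the rolling-counter logic only; the real thing would keep
-- this state in an FFI struct stored in ngx.shared.DICT rather than a Lua table.
local floor = math.floor

local _M = {}
local mt = { __index = _M }

function _M.new(rate, window, resolution)
    return setmetatable({
        rate = rate,
        resolution = resolution,
        nbuckets = floor(window / resolution),  -- max queue length
        buckets = {},                           -- bucket index -> request count
        sum = 0,                                -- sum of all live bucket counts
        last = 0,                               -- newest bucket index seen
    }, mt)
end

function _M.incoming(self)
    local bucket = floor(ngx.now() / self.resolution)

    -- drop buckets that have slid out of the window and fix up the sum
    for idx, cnt in pairs(self.buckets) do
        if idx <= bucket - self.nbuckets then
            self.sum = self.sum - cnt
            self.buckets[idx] = nil
        end
    end

    if self.sum >= self.rate then
        return nil, "rejected"
    end

    self.buckets[bucket] = (self.buckets[bucket] or 0) + 1
    self.sum = self.sum + 1
    self.last = bucket
    return 0   -- no delay
end

return _M
```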

The other option could be utilizing the newly added feature: the C API for 3rd-party NGINX C modules to register their own shm-based data structures for Lua-land usage. Does that seem reasonable? Is it worth the hassle?

Would like to get some feedback before I start. Thanks

agentzh commented 7 years ago

Well, ngx_http_lua already supports the init parameter of incr, and we can utilize it here in this library.
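For example, a bucketed counter built only on incr with its init argument could look roughly like this (the dict name, the key scheme and the numbers are illustrative assumptions, and the 4th init_ttl argument needs a reasonably recent ngx_lua):

```lua
-- Sketch: approximate a 300s sliding window with 60s buckets using only
-- ngx.shared.DICT:incr() and its init / init_ttl arguments.
local dict = ngx.shared.my_limit_store          -- hypothetical lua_shared_dict

local rate, window, resolution = 200, 300, 60
local nbuckets = window / resolution

local key = ngx.var.binary_remote_addr
local bucket = math.floor(ngx.now() / resolution)

-- atomically create-or-increment the current bucket; it expires with the window
local newval, err = dict:incr(key .. ":" .. bucket, 1, 0, window)
if not newval then
    ngx.log(ngx.ERR, "failed to increment the bucket: ", err)
    return ngx.exit(500)
end

-- sum the buckets that are still inside the window
local sum = 0
for i = 0, nbuckets - 1 do
    sum = sum + (dict:get(key .. ":" .. (bucket - i)) or 0)
end

if sum > rate then
    return ngx.exit(503)                        -- over the limit, reject
end
```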

I don't want this library to depend on a new 3rd-party nginx C module. But you are free to experiment with new features yourself :)

alonbg commented 7 years ago

@agentzh, sure, I would prefer not to reinvent stuff :) But everything (the logic under limit_req.incoming() as I described), preferably, should take place within an ngx_shmtx_lock section. For the sliding window I do need the queue. Did you mean adding this functionality to the shared dict list API?

agentzh commented 7 years ago

@alonbg the standard lua_shared_dict implementation in ngx_http_lua_module already supports lists (or queues). Check it out to see if it fits your needs.
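For reference, the shared dict list (queue) operations look roughly like this (the dict name is made up):

```lua
-- Basic lua_shared_dict list operations on a hypothetical dict "my_store".
local dict = ngx.shared.my_store

local len, err = dict:rpush("buckets", 42)   -- append a bucket count to the tail
if not len then
    ngx.log(ngx.ERR, "rpush failed: ", err)
end

ngx.say("queue length: ", dict:llen("buckets"))

local oldest = dict:lpop("buckets")          -- drop the oldest bucket from the head
```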

alonbg commented 7 years ago

@agentzh, in my case, using lua_shared_dict lists might cause a race condition. Well, I can use the Lua lock API, but there would be overhead if it is used for every request. Actually, it is better to think of this as a special kind of rolling incr, which is initialized with a window, a resolution and an optional max value (the maximum sum of all queue elements). The data structure behind it is the queue plus the timestamp. I think I'll go forward with the separate C-API module option, and if it turns out generic enough you could consider it later on for ngx_http_lua_module. Thanks
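For context, the lock-per-request pattern being weighed here would look roughly like this with lua-resty-lock (the dict names and the bucket bookkeeping are placeholders, not anything from this library):

```lua
-- Sketch of the read-modify-write that would need a lock on every request
-- if the sliding-window queue lived in a lua_shared_dict list.
-- "locks" and "my_store" are hypothetical lua_shared_dict zones.
local resty_lock = require "resty.lock"

local lock, err = resty_lock:new("locks")
if not lock then
    ngx.log(ngx.ERR, "failed to create lock: ", err)
    return ngx.exit(500)
end

local elapsed, err = lock:lock("window:" .. ngx.var.binary_remote_addr)
if not elapsed then
    ngx.log(ngx.ERR, "failed to acquire lock: ", err)
    return ngx.exit(500)
end

local dict = ngx.shared.my_store
-- ... pop expired buckets, recompute the sum, push the current bucket ...

local ok, err = lock:unlock()
if not ok then
    ngx.log(ngx.ERR, "failed to release lock: ", err)
end
```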