matsumotory / ngx_mruby

ngx_mruby - A Fast and Memory-Efficient Web Server Extension Mechanism Using Scripting Language mruby for nginx
https://ngx.mruby.org/

Concept of a connection pool and/or other hints about getting better Redis performance? #505

Open travisbell opened 1 year ago

travisbell commented 1 year ago

Hi @matsumotory, I've been spending some time benchmarking OpenResty's lua-resty-redis library vs. mruby-redis and have come across a specific and noticeable difference with one particular setting. The keep alive connection pool.

First, the data: when benchmarking a very simple single get call:

# Cache one Redis client per nginx worker via Userdata
redis = Userdata.new("redis")
unless redis.client
  redis.client = Redis.new("127.0.0.1", 6379)
  redis.client.enable_keepalive  # keep the TCP connection open between requests
end

Nginx.echo(redis.client.get("test"))

Yields the following results:

~ wrk -c 200 -d 5 "http://127.0.0.1:8088/mruby"
Running 5s test @ http://127.0.0.1:8088/mruby
  2 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    43.30ms  120.60ms   1.46s    95.84%
    Req/Sec     4.54k   198.80     5.00k    83.33%
  46114 requests in 5.10s, 5.45MB read
Requests/sec:   9036.12
Transfer/sec:      1.07MB

Not bad. Let's look at Lua with a single connection (which is what I assume the mruby code above is effectively doing):

local ok, redis = pcall(require, "resty.redis")

local redis_client = redis:new()
local ok, err = redis_client:connect("127.0.0.1", 6379)
local key_lookup, err = redis_client:get("test")
if not key_lookup then
  ngx.log(ngx.ERR, err)
end
-- Return the connection to the keepalive pool: 60s idle timeout, pool size 1
local ok, err = redis_client:set_keepalive(60000, 1)

ngx.say(key_lookup)

Results:

~ wrk -c 200 -d 5 "http://127.0.0.1:8088/lua"
Running 5s test @ http://127.0.0.1:8088/lua
  2 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    38.17ms   29.40ms 214.71ms   76.00%
    Req/Sec     2.86k     2.46k    7.20k    72.00%
  28451 requests in 5.01s, 5.21MB read
Requests/sec:   5678.51
Transfer/sec:      1.04MB

Ok, so far mruby is handling its business.

Now let's increase the Lua connection pool to 100:

local ok, redis = pcall(require, "resty.redis")

local redis_client = redis:new()
local ok, err = redis_client:connect("127.0.0.1", 6379)
local key_lookup, err = redis_client:get("test")
if not key_lookup then
  ngx.log(ngx.ERR, err)
end
-- Same as before, but with a pool of up to 100 idle connections
local ok, err = redis_client:set_keepalive(60000, 100)

ngx.say(key_lookup)

And check the results:

~ wrk -c 200 -d 5 "http://127.0.0.1:8088/lua"
Running 5s test @ http://127.0.0.1:8088/lua
  2 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.71ms   15.71ms 357.67ms   97.61%
    Req/Sec    12.45k     1.88k   14.95k    83.00%
  123926 requests in 5.00s, 22.69MB read
Requests/sec:  24762.09
Transfer/sec:      4.53MB

Wowsa, that's quite the difference! Almost 25,000 rps.
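For context, here's my understanding of what set_keepalive is doing (the pool size argument is the only thing that changed between the two Lua runs): it parks the connection in a per-worker idle pool capped at pool_size, and connect reuses a parked connection when one exists instead of opening a new TCP connection. A toy model of that lifecycle in plain Ruby — KeepalivePool and FakeConn are made-up names for illustration, not any real lua-resty-redis or mruby-redis API:

```ruby
# Toy model of a keepalive pool: set_keepalive parks the connection in
# an idle pool capped at pool_size; connect reuses a parked connection
# when one exists instead of dialing a new one.
class FakeConn
  def initialize
    @closed = false
  end

  def close
    @closed = true
  end

  def closed?
    @closed
  end
end

class KeepalivePool
  attr_reader :dials # number of fresh connections opened

  def initialize(pool_size)
    @pool_size = pool_size
    @idle = []
    @dials = 0
  end

  # connect: prefer an idle pooled connection, else dial a new one
  def connect
    return @idle.pop unless @idle.empty?
    @dials += 1
    FakeConn.new
  end

  # set_keepalive: park the connection for reuse, or close it when the
  # idle pool is already at capacity
  def set_keepalive(conn)
    if @idle.size < @pool_size
      @idle << conn
    else
      conn.close
    end
  end
end

pool = KeepalivePool.new(1)
first = pool.connect       # opens a fresh connection (dials == 1)
pool.set_keepalive(first)  # parked in the idle pool
second = pool.connect      # reuses the parked connection (dials still 1)
puts pool.dials            # => 1
puts first.equal?(second)  # => true
```

With pool_size 100 and 200 concurrent wrk connections, almost every request finds a warm connection to reuse, which is where the big jump in throughput comes from.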

So I have some obvious questions.

  1. Is there a concept of a connection pool with mruby-redis?
  2. If not, is there a way we can fake one with Fibers or something else?
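On question 2, one idea I've been toying with is keeping several clients per worker and handing them out round-robin. A minimal sketch of just the rotation logic in plain Ruby — FakeClient stands in for Redis.new("127.0.0.1", 6379), and in ngx_mruby the pool would be cached per worker (e.g. in a Userdata). Since mruby-redis calls block the worker anyway, I'm not sure this would buy much, which is partly why I'm asking:

```ruby
# Sketch of a round-robin client pool: N pre-opened clients, reused in
# turn. FakeClient is a stand-in for a real Redis client.
class FakeClient
  attr_reader :id

  def initialize(id)
    @id = id
  end
end

class RoundRobinPool
  def initialize(size)
    @clients = Array.new(size) { |i| FakeClient.new(i) }
    @next = 0
  end

  # Hand out clients in turn, wrapping back to the first
  def checkout
    client = @clients[@next]
    @next = (@next + 1) % @clients.size
    client
  end
end

pool = RoundRobinPool.new(4)
ids = Array.new(8) { pool.checkout.id }
puts ids.inspect  # => [0, 1, 2, 3, 0, 1, 2, 3]
```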

Thanks for taking the time to read through this, and if I'm doing something obviously wrong, make sure to let me know. 😄