Open · nikhilrakuten opened this issue 1 year ago
10 workers?
@toredash no, it's 2 workers. I have verified it in nginx.conf and with the top command during the process.
I've just created a new rewrite service for Nginx using Redis, which performs 200% faster than native Nginx rewrites, especially with 20k+ rewrite rules. The finishing touch here is to pool my connections, which is why I've stress tested it with 3,000 requests per second; after 1 minute there were 24,000 TCP connections on localhost...
local redis_host = "127.0.0.1" -- assumption: the host value is not shown in the posted snippet
local redis_port = 6390
local redis_database = 1
-- connection timeout for redis in ms. don't set this too high!
local redis_connection_timeout = 1000
local redis = require "nginx.redis";
local redis_client = redis:new();
-- 300 ms connect/send/read timeouts
redis_client:set_timeouts(300, 300, 300);
redis_client:set_timeout(300); -- single-value form; redundant with the set_timeouts call above
local ok, err = redis_client:connect(redis_host, redis_port, { pool_size = 10, backlog = 10});
if not ok then
ngx.log(ngx.DEBUG, "Redis connection error while retrieving rewrite rules: " .. err);
end
-- select specified database
redis_client:select(redis_database)
local function pool_redis_connection()
-- put it into the connection pool of size 10,
-- with a 5 second (5000 ms) max idle time
local ok, err = redis_client:set_keepalive(5000, 10)
if not ok then
ngx.log(ngx.EMERG, "Redis failed to set keepalive: ", err)
end
end
Using 10 workers, the pool_redis_connection function is called before a redirect or at the end of rewrite_by_lua_file.
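For reference, here is a minimal per-request sketch of the pooled pattern from the lua-resty-redis README; the host, port, key naming and redirect logic are placeholders, not the actual service code:

local redis = require "resty.redis"

local red = redis:new()
red:set_timeouts(300, 300, 300) -- connect, send, read timeouts in ms

local ok, err = red:connect("127.0.0.1", 6390, { pool_size = 10, backlog = 10 })
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

-- look up the rewrite target for the current URI
local target, err = red:get("rewrite:" .. ngx.var.uri)
if err then
    ngx.log(ngx.ERR, "redis GET failed: ", err)
end

-- hand the connection back to the pool on every exit path;
-- a connection that is never handed back is closed at the end of
-- the request, so the next request has to open a fresh one
local ok, err = red:set_keepalive(5000, 10)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end

if target and target ~= ngx.null then
    return ngx.redirect(target, ngx.HTTP_MOVED_PERMANENTLY)
end

Note that the connection pool is per nginx worker process, so the effective ceiling is roughly pool_size multiplied by the number of workers.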
@nikhilrakuten we need an nginx.conf that can reproduce this issue.
@zhuizhuhaomeng the nginx conf file has been updated.
I am using the connect method from https://github.com/openresty/lua-resty-redis#connect together with set_keepalive(). I have set the pool size to 200 and the backlog to 20, and it still creates 2000 connections to Redis during a load test.
is_connected, err = client:connect(REDIS_SERVER1, REDIS_PORT, { pool_size = 200, backlog = 10 })
ngx-lua: 5.1
workers: 2
What could be the reason for this many connections from nginx to Redis? How can we control or restrict it?
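One way to check whether the pool is actually being reused is get_reused_times(); a small diagnostic sketch, where red stands for the connection object returned by connect():

-- right after a successful connect(), ask how many times this
-- particular connection has already been served from the pool
local times, err = red:get_reused_times()
if not times then
    ngx.log(ngx.ERR, "failed to get reuse count: ", err)
elseif times == 0 then
    ngx.log(ngx.INFO, "fresh redis connection (not from the pool)")
else
    ngx.log(ngx.INFO, "reused pooled redis connection, reuse count: ", times)
end

If the reuse count stays at 0 under load, the connections are not making it back into the pool, for example because set_keepalive() is skipped on an error or early-return path.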
Used Nginx conf
Steps To Reproduce: