openresty / lua-resty-redis

Lua redis client driver for ngx_lua, based on the cosocket API

lua tcp error #68

Closed RyouZhang closed 1 year ago

RyouZhang commented 8 years ago

13561#0: *31675425 lua tcp socket read timed out, client: 100.97.177.53, server: localhost

[Originally in Chinese] Does the lua redis connection use a connection pool? And if the number of concurrent requests gets very high, is there any queueing support?

agentzh commented 8 years ago

@RyouZhang Please, no Chinese here. This place is considered English only. If you really want to use Chinese, please join the openresty (Chinese) mailing list instead. Please see https://openresty.org/#Community for more details.

Regarding your questions,

  1. lua-resty-redis does enable connection pooling if you call the set_keepalive method every time you finish using the current redis object (always check the return values of this method call so you can handle any errors properly). See the official documentation for more details. Note that the connection pool is not used by default.
  2. There's no automatic queueing support based on the size of the connection pool though this is a planned feature that will get implemented soon. In the meantime, you can consider using the lua-resty-limit-traffic library to queue your backend requests before reaching lua-resty-redis.
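The pooling pattern described in point 1 can be sketched as follows (a minimal ngx_lua handler sketch; the host, port, key name, idle timeout of 10s, and pool size of 100 are illustrative assumptions, not recommendations):

```lua
-- minimal sketch of the set_keepalive pooling pattern
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(1000)  -- 1s timeout for connect/send/read operations

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to redis: ", err)
    return
end

local res, err = red:get("dog")  -- hypothetical key
if not res then
    ngx.log(ngx.ERR, "failed to get: ", err)
    return
end

-- return the connection to the pool instead of closing it;
-- always check the return values here
local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```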
RyouZhang commented 8 years ago

Thanks. Another question: why does the lua socket get_reused_times always return nil? I do call the set_keepalive method every time I finish using the connection.

agentzh commented 8 years ago

@RyouZhang Then your set_keepalive method call may always return a failure. Have you checked its return values?
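To see whether pooling is actually taking effect, get_reused_times can be checked right after connect. A hedged sketch (host and port are assumptions): it returns 0 for a brand-new connection, a positive count for a connection taken from the pool, and nil plus an error string when the call itself fails.

```lua
local redis = require "resty.redis"
local red = redis:new()

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect: ", err)
    return
end

-- 0 = fresh connection; > 0 = reused from the pool;
-- nil, err = the call itself failed
local times, err = red:get_reused_times()
if not times then
    ngx.log(ngx.ERR, "failed to get reused times: ", err)
elseif times == 0 then
    ngx.log(ngx.INFO, "fresh connection")
else
    ngx.log(ngx.INFO, "connection reused ", times, " times")
end
```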

RyouZhang commented 8 years ago

Sometimes set_keepalive returns nil; I think it has reached the max pool size.

agentzh commented 8 years ago

@RyouZhang You can get the string describing the error in the second return value (when the first one is nil). Let's stop guessing :)

RyouZhang commented 8 years ago

OK, you are right, it returns nil. The error looks like this:

1004613 lua tcp socket read timed out, client: 192.168.0.192, server: localhost, request: "GET /req HTTP/1.1", host: "192.168.0.192:8080"
2015/10/26 12:06:30 [error] 308#0: 1004613 [lua] gdm.lua:114: SetKeepalive(): closed, client: 192.168.0.192, server: localhost, request: "GET /req HTTP/1.1", host: "192.168.0.192:8080"

agentzh commented 8 years ago

@RyouZhang Okay, so your redis connection is already closed right before you call set_keepalive (e.g., due to an earlier explicit close call, or a previous method call hitting a fatal error such as a timeout).

agentzh commented 8 years ago

@RyouZhang Maybe you are just using too small a value for the timeout threshold of your redis connections?
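Putting the two diagnoses together: once a cosocket operation fails fatally (e.g. a read timeout), ngx_lua has already closed the connection, so set_keepalive on that path can only return nil, "closed". A hedged sketch of the control flow (the 5000 ms timeout, host, port, and key are illustrative assumptions):

```lua
local redis = require "resty.redis"
local red = redis:new()

-- a larger timeout threshold may reduce the read timeouts themselves
red:set_timeout(5000)  -- 5s, illustrative value

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "failed to connect: ", err)
    return
end

local res, err = red:get("req_key")  -- hypothetical key
if not res then
    -- after a fatal error like "timeout", the connection is already
    -- closed, so skip set_keepalive here; calling it would just
    -- produce the nil, "closed" result seen in the log above
    ngx.log(ngx.ERR, "redis get failed: ", err)
    return
end

local ok, err = red:set_keepalive(10000, 100)
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```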

sylarXu commented 8 years ago

As you said, there's no automatic queueing support based on the size of the connection pool. I want to know whether it will create a new connection when pool->cache is empty. I mean ngx_tcp_sock:connect in general, not only for redis.

agentzh commented 8 years ago

@sylarXu Yes.

agentzh commented 8 years ago

@sylarXu Same as the standard connection pool in the nginx core:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive