hamishforbes / lua-resty-upstream

Upstream connection load balancing and failover module for Openresty
MIT License
117 stars 26 forks

Errors #7

Closed dant4z closed 10 years ago

dant4z commented 10 years ago

I configured primary and backup pools like this:

if not configured then -- Only reconfigure on start, shared mem persists across a HUP
            api:create_pool({id = "primary", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})

            api:create_pool({id = "backup", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})

            api:set_priority("primary", 0)
            api:set_priority("backup", 10)

            api:add_host("backup", { host = "192.168.0.2", port = "80", weight = 10, healthcheck = true})
            api:add_host("primary", { host = "192.168.0.3", port = "80",  weight = 5, healthcheck = true})
end

After several requests I get these errors in the log:

2014/03/12 07:27:09 [error] 19483#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19483#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] http.lua:165: check_response(): closed from 1, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "primary" is down, context: ngx.timer
2014/03/12 07:27:09 [error] 19484#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "backup" is down, context: ngx.timer

I made these requests from a browser and everything seemed OK.

hamishforbes commented 10 years ago

Hi,

This error is coming back from the http background check. It looks like the connection to the host was closed either while sending the request or reading the response headers.

I've changed that error message a bit to make it clearer where it's coming from.

Thanks

dant4z commented 10 years ago

Thanks. Now I see these errors:

2014/03/13 05:58:54 [error] 23385#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'primary': closed, context: ngx.timer
2014/03/13 05:58:54 [error] 23384#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'primary': closed, context: ngx.timer
2014/03/13 05:58:54 [error] 23385#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'backup': closed, context: ngx.timer
2014/03/13 05:58:54 [error] 23384#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'backup': closed, context: ngx.timer
2014/03/13 05:59:54 [error] 23384#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'primary': closed, context: ngx.timer
2014/03/13 05:59:54 [error] 23385#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'primary': closed, context: ngx.timer
2014/03/13 05:59:54 [error] 23384#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'backup': closed, context: ngx.timer
2014/03/13 05:59:54 [error] 23384#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "primary" is down, context: ngx.timer
2014/03/13 05:59:54 [error] 23384#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "backup" is down, context: ngx.timer
2014/03/13 05:59:54 [error] 23385#0: [lua] http.lua:165: check_response(): HTTP Request Error from host '1' in pool 'backup': closed, context: ngx.timer
2014/03/13 05:59:54 [error] 23385#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "primary" is down, context: ngx.timer
2014/03/13 05:59:54 [error] 23385#0: [lua] socket.lua:196: post_process(): Host "1" in Pool "backup" is down, context: ngx.timer

The error is "closed". Does this mean the backend closed the connection? I don't see any errors in the backend log. The backend is also nginx, but without Lua.

hamishforbes commented 10 years ago

Hi,

Yep, OK, so this should only happen if resty-http returns nil from request after successfully connecting. Can you confirm you're using the latest version of lua-resty-http?
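For reference, the code path being described looks roughly like this (a minimal sketch against the lua-resty-http API; the host/port values are placeholders, not taken from your config):

```lua
-- Sketch: connect succeeds, but request() returns nil with err == "closed"
-- if the upstream drops the connection while the request is being sent
-- or the response headers are being read.
local http = require("resty.http")

local httpc = http.new()
httpc:set_timeout(1000)

local ok, err = httpc:connect("192.168.0.3", 80) -- placeholder host/port
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

local res, err = httpc:request({ method = "GET", path = "/" })
if not res then
    -- this is the case the background healthcheck logs as "closed"
    ngx.log(ngx.ERR, "HTTP request error: ", err)
end
```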

It also looks like you've got 2 background threads running, as the background checks are being repeated by 2 different worker PIDs. That could be a bug; are you able to post a full version of your config file, with obfuscated names/IPs etc. if needed?
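For what it's worth, one way to guarantee a single background thread across workers is to gate `init_worker_by_lua` on an atomic shared-dict `add` (a sketch only, not necessarily what the module does; the `"bg_thread_running"` key name is made up):

```lua
init_worker_by_lua '
    -- ngx.shared.DICT:add() is atomic across workers: only the first
    -- worker to call it succeeds, so only one background thread starts.
    -- Note the shared dict persists across HUP reloads, so a real
    -- implementation would need an expiry or lease on this key.
    local ok, err = ngx.shared.my_upstream:add("bg_thread_running", true)
    if ok then
        http_upstream:init_background_thread()
    end
';
```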

dant4z commented 10 years ago

I've now upgraded lua-resty-http to the latest version (from GitHub) and I get a 500 and these errors:

2014/03/13 07:03:28 [error] 25651#0: *26 lua entry thread aborted: runtime error: /usr/local/lib/lua/resty/upstream/http.lua:220: attempt to call method 'get_client_body_reader' (a nil value)
stack traceback:
coroutine 0:
    /usr/local/lib/lua/resty/upstream/http.lua: in function 'get_client_body_reader'
    /usr/local/openresty/nginx/lua/load-balancer.lua:10: in function </usr/local/openresty/nginx/lua/load-balancer.lua:1>, client: 66.249.78.47, server: example.com, request: "GET /blog/read/freestyle/1542836?c=7851801 HTTP/1.1", host: "example.com"

My nginx.conf:

worker_processes  4;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    lua_shared_dict my_upstream 1m;
    lua_socket_log_errors off;
    lua_shared_dict upstream_stat 1m;
    lua_package_path "/usr/local/lib/lua/?.lua;;";
    init_by_lua '
        require("cjson")
        Upstream_Socket  = require("resty.upstream.socket")
        Upstream_HTTP = require("resty.upstream.http")
        Upstream_Api = require("resty.upstream.api")
        upstream, configured = Upstream_Socket:new("my_upstream")
        api = Upstream_Api:new(upstream)
        http_upstream = Upstream_HTTP:new(upstream)
        if not configured then -- Only reconfigure on start, shared mem persists across a HUP
            local stat = ngx.shared.upstream_stat
            api:create_pool({id = "primary", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})
            stat:set("primary", 0)
            api:create_pool({id = "backup", timeout = 1000, read_timeout = 30000, keepalive_pool = 256, keepalive_timeout = (120*1000)})
            stat:set("backup", 0)
            api:set_priority("primary", 0)
            api:set_priority("backup", 10)
            api:add_host("backup", { host = "192.168.0.3", port = "80", weight = 10, healthcheck = true})
            api:add_host("primary", { host = "192.168.0.2", port = "80", weight = 5, healthcheck = true})
            stat:set("192.168.0.2:80", 0)
            stat:set("192.168.0.3:80", 0)
        end
    ';

    init_worker_by_lua 'http_upstream:init_background_thread()';

    server {
        listen *.*.*.*:80;
        server_name example.com;

        location / {
            content_by_lua_file /usr/local/openresty/nginx/lua/load-balancer.lua;
            log_by_lua '
                local stat = ngx.shared.upstream_stat
                local host = ngx.ctx.host
                local pool = ngx.ctx.pool
                if host then
                    if host.host and host.port then
                        local host_key = host.host .. ":" .. host.port
                        stat:incr(host_key, 1)
                    end
                end
                if pool then
                    stat:incr(pool.id, 1)
                end
                upstream:post_process()
            ';
        }
    }
}

load-balancer.lua:

local ngx_log = ngx.log
local ngx_err = ngx.ERR
local flush = ngx.flush
local print = ngx.print
local req = ngx.req
local ngx_var = ngx.var
local str_lower = string.lower
local res_header = ngx.header

local client_body_reader, err = http_upstream:get_client_body_reader()
if not client_body_reader then
    if err == "chunked request bodies not supported yet" then
        ngx.status = 411
        ngx.say("411 Length Required")
        ngx.exit(ngx.status)
        return
    elseif err ~= nil then
        ngx_log(ngx_err, "Error getting client body reader: ", err)
    end
end

local headers = req.get_headers()
headers['X-Forwarded-For'] = ngx_var.remote_addr

local res, conn_info = http_upstream:request{
    method = req.get_method(),
    path = (ngx_var.uri .. ngx_var.is_args .. (ngx_var.args or "")),
    body = client_body_reader,
    headers = headers,
}

ngx.ctx.host = conn_info.host
ngx.ctx.pool = conn_info.pool

if not res then
    ngx.status = conn_info.status
    ngx.say(conn_info.err)
    return ngx.exit(ngx.status)
end

ngx.status = res.status

local HOP_BY_HOP_HEADERS = {
    ["connection"]          = true,
    ["keep-alive"]          = true,
    ["proxy-authenticate"]  = true,
    ["proxy-authorization"] = true,
    ["te"]                  = true,
    ["trailers"]            = true,
    ["transfer-encoding"]   = true,
    ["upgrade"]             = true,
}

for k,v in pairs(res.headers) do
    if not HOP_BY_HOP_HEADERS[str_lower(k)] then
        res_header[k] = v
    end
end

local reader = res.body_reader
if reader then
    repeat
        local chunk, err = reader(65536)
        if err then
            -- ngx_err is the local alias for ngx.ERR defined above
            ngx_log(ngx_err, "Read Error: ", err)
            break
        end

        if chunk then
            print(chunk)
            flush(true)
        end
    until not chunk
end

local ok,err = http_upstream:set_keepalive()
hamishforbes commented 10 years ago

Thank you, I'll see if I can get this reproduced today.

hamishforbes commented 10 years ago

Hi,

The multiple-background-workers issue should be fixed in master now. I can't reproduce your issue with the client body reader; are you certain you're able to load and use lua-resty-http on its own, outside of lua-resty-upstream?
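To rule lua-resty-upstream out, a minimal standalone location along these lines (the `/test` path is hypothetical) should exercise lua-resty-http directly:

```lua
location /test {
    content_by_lua '
        local http = require("resty.http")
        local httpc = http.new()
        -- get_client_body_reader() only exists in recent lua-resty-http;
        -- if this throws "attempt to call method ... (a nil value)", an
        -- older copy of resty/http.lua is still on the lua_package_path
        local reader, err = httpc:get_client_body_reader()
        ngx.say(reader and "reader ok" or "no reader: " .. tostring(err))
    ';
}
```

If that errors the same way, the old module is probably still being picked up first on `lua_package_path`.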

Thanks