Open nammehra opened 2 years ago
Please share a minimal code example that others can use to replicate your problem. If I had to guess, you are calling `set_keepalive` before you have read the whole body?

`set_keepalive` is poorly named. It means "I am finished with this connection, please try to place it on the keepalive pool for me". The error you are receiving is to be expected if you have not yet read the response body.
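To illustrate the intended ordering, here is a minimal sketch (the host, port, path and pool arguments are placeholders, and it assumes a recent lua-resty-http with the table-based `connect`):

```lua
local http = require "resty.http"

local httpc = http.new()
local ok, err = httpc:connect({ scheme = "http", host = "127.0.0.1", port = 8080 })
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

local res, err = httpc:request({ path = "/" })
if not res then
    ngx.log(ngx.ERR, "request failed: ", err)
    return
end

-- Drain the entire body first...
local reader = res.body_reader
repeat
    local chunk, read_err = reader()
    if chunk then
        ngx.print(chunk)
    end
until not chunk or read_err

-- ...and only then hand the connection back to the pool.
local ok, err = httpc:set_keepalive()
if not ok then
    ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end
```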
Hi @pintsized, below are the code snippets for the `connect` and `proxy_response` methods:
```lua
-- `options`, `req_params`, `logger` and `HOP_BY_HOP_HEADERS` are defined elsewhere
function connect()
    local httpc = http.new()
    httpc:set_timeout(70000)

    local ok, err = httpc:connect(options)
    if not ok then
        ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
    else
        logger.log(ngx.ERR, "message")
    end

    local res, err_pr = httpc:request(req_params)
    if err_pr then
        logger.log(ngx.ERR, "message")
        ngx.exit(ngx.HTTP_BAD_GATEWAY)
    else
        logger.log(ngx.ERR, "message")
    end

    if res then
        logger.log(ngx.DEBUG, "Response is non-nil")
    end

    proxy_response(res, nil)

    local return_code, err_ka = httpc:set_keepalive(60000, 50)
    if err_ka then
        logger.log(ngx.ERR, "message")
        if return_code then
            logger.log(ngx.ERR, "Setting keepalive failed with error code: ", return_code)
        end
    end
end  -- this closing `end` was missing from the original snippet
```
```lua
function proxy_response(response, chunksize)
    if not response then
        ngx.log(ngx.ERR, "no response provided")
        return
    end

    ngx.status = response.status

    -- Filter out hop-by-hop headers
    for k, v in pairs(response.headers) do
        local lower_k = string.lower(k)
        if not HOP_BY_HOP_HEADERS[lower_k] then
            ngx.header[k] = v
        end
    end

    local reader = response.body_reader
    repeat
        local chunk, ok, read_err, print_err
        chunk, read_err = reader(chunksize)
        if read_err then
            ngx.log(ngx.ERR, read_err)
        end

        if chunk then
            ok, print_err = ngx.print(chunk)
            if not ok then
                ngx.log(ngx.ERR, print_err)
            end
        end

        if read_err or print_err then
            break
        end
    until not chunk
end
```
The problem is that when we hit an API to download a file, the API returns 200 OK immediately, irrespective of whether the file has finished downloading or not. So in the case of small files it works fine, but it fails for files larger than roughly 1.4 to 4 MB in size.

Error in the upstream server: epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream,

Please let me know how I can handle such scenarios.
Thanks, Namita
This kind of thing can happen if the server is sending a non-compliant response - look for things like a bad Content-Length header (wrong length), or an absent one. Or maybe you expect chunked transfer encoding but are somehow accidentally using HTTP/1.0. Things of that nature.
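As a concrete compliance check, you can count the bytes you actually stream and compare them against the advertised Content-Length (a sketch, assuming `res` came back from `httpc:request`; this drains the body, so it is for diagnosis only, and the comparison is only meaningful for non-chunked responses):

```lua
-- Only present on non-chunked responses; chunked replies have no Content-Length.
local cl = tonumber(res.headers["Content-Length"])
local bytes = 0

local reader = res.body_reader
repeat
    local chunk, err = reader()
    if chunk then
        bytes = bytes + #chunk
    end
until not chunk or err

if cl and bytes ~= cl then
    ngx.log(ngx.ERR, "Content-Length mismatch: header says ", cl,
            " but read ", bytes, " bytes")
end
```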
Please be aware that this library is reasonably well battle tested. It runs large amounts of reverse proxy traffic in real world scenarios, so it is unlikely to be broken for well formed responses.
At the same time, please be aware that it is designed as a relatively low level HTTP driver for OpenResty cosockets. It offers some sanity checking where it can, but a more fully fledged HTTP client (like cURL) would be doing a lot more.
In other words, you should be able to expect it to work correctly in compliant scenarios. If something is not working, the first step should be to check your HTTP compliance, end-to-end. In your case, I would focus on determining which of the "body reader" paths it takes. The options are essentially:
For large responses, you really want the second one.
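One way to see which path a given response will trigger is to log the framing-related headers before reading the body (a minimal sketch; the header names are standard HTTP, the log messages are illustrative):

```lua
-- After httpc:request(...) has returned `res`, log how the body is framed.
-- Chunked transfer encoding lets the reader stream an arbitrarily large body;
-- otherwise the reader relies on Content-Length (or reading until close).
local te = res.headers["Transfer-Encoding"]
local cl = res.headers["Content-Length"]

if te and string.find(string.lower(te), "chunked", 1, true) then
    ngx.log(ngx.INFO, "body will be read with the chunked reader")
elseif cl then
    ngx.log(ngx.INFO, "body will be read by Content-Length: ", cl)
else
    ngx.log(ngx.INFO, "no framing headers; body read until connection close")
end
```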
@pintsized, we are using chunked transfer encoding to transfer the response. Also, when I run this locally it works fine, but we see the above issue when the application is deployed behind an AWS ALB. Could the ALB be the issue? Are there any known issues with respect to ALB?
Our deployment setup:

Browser >>>> AWS ALB >>>> nginx server using the connect/proxy_response method above (deployed in AWS) >>> another nginx server using proxy_pass to connect onwards (deployed in another datacenter outside of AWS) >>> actual application server.

** Note: the ALB uses HTTP/2 (we tried changing it to HTTP/1.1, but that didn't help). Both nginx servers use HTTP/1.1.
Thanks, Namita
Can you post the response headers you receive before trying to read the body? You can omit anything application specific / sensitive.
Response headers from the second nginx server:

```json
{
  "resp-headers": {
    "cache-control": "no-cache, no-store, must-revalidate",
    "connection": "keep-alive",
    "content-disposition": "attachment;filename=oneninefive.csv",
    "content-language": "en-US",
    "content-security-policy": "default-src * 'unsafe-inline' 'unsafe-eval'",
    "content-type": "application/octet-stream;charset=ISO-8859-1",
    "expires": "-1",
    "pragma": "no-cache",
    "set-cookie": "REQUEST_TOKEN_KEY=3256901981166649212; Path=/abc; Secure; HttpOnly",
    "strict-transport-security": "max-age=31536000;includeSubDomains",
    "transfer-encoding": "chunked",
    "x-content-type-options": "nosniff",
    "x-frame-options": "SAMEORIGIN",
    "x-proxy_host": "abc/bulkfileuploadFLDownloadSelected.do?recCnt=7&colCnt=2",
    "x-xss-protection": "1; mode=block"
  }
}
```
Response headers from the first nginx server:

```json
{
  "resp-headers": {
    "cache-control": "no-cache, no-store, must-revalidate",
    "connection": "keep-alive",
    "content-disposition": "attachment;filename=oneninefive.csv",
    "content-language": "en-US",
    "content-security-policy": "default-src * 'unsafe-inline' 'unsafe-eval'",
    "content-type": "application/octet-stream;charset=ISO-8859-1",
    "date": "Fri, 15 Jul 2022 11:41:46 GMT",
    "expires": "-1",
    "pragma": "no-cache",
    "set-cookie": "RequestToken.REQUEST_TOKEN_KEY=3256901981166649212; Path=/abc; Secure; HttpOnly",
    "strict-transport-security": [
      "max-age=31536000;includeSubDomains",
      "max-age=31536000; includeSubDomains"
    ],
    "trackingid": "a67cdaf1-0e58-40b6-a068-4f0234c63602",
    "transfer-encoding": "chunked",
    "x-content-type-options": "nosniff",
    "x-frame-options": "SAMEORIGIN",
    "x-proxy-host": "abc/bulkfileuploadFLDownloadSelected.do?recCnt=7&colCnt=2",
    "x-xss-protection": "1; mode=block"
  }
}
```
Hi team, I am unable to download large files from the upstream server and get the "unread data in buffer" error when setting the keepalive. Can someone please help fix this issue?