Closed TimWolla closed 4 years ago
Also: The `chunked` should probably be stripped from the returned `transfer-encoding` value if dechunking happens.
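For illustration, stripping `chunked` from the header value could look like the following Python sketch (a hypothetical helper, not part of haproxy-lua-http; the function name and behavior are assumptions):

```python
def strip_chunked(transfer_encoding):
    """Remove "chunked" from a Transfer-Encoding header value after dechunking.

    `transfer_encoding` is the raw header value, e.g. "gzip, chunked".
    Returns the remaining encodings, or None if chunked was the only one
    (meaning the header should be dropped entirely).
    """
    codings = [c.strip() for c in transfer_encoding.split(",")]
    rest = [c for c in codings if c.lower() != "chunked"]
    return ", ".join(rest) if rest else None

# strip_chunked("gzip, chunked")  -> "gzip"
# strip_chunked("chunked")        -> None
```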
Interesting, the ever expanding scope of an http client library. I'll try to fix this.
Tim, parsing of the chunked responses should be fixed in 07d6fff, I've tested it with httpbin.org (stream and stream-bytes endpoints).
We collect the whole response payload during parsing of chunks (if necessary, we can add a generator variant for returning chunks). The returned headers are kept as is; however, you can easily construct a new response on the client side, e.g. for a Lua service:
```lua
core.register_service("test", "http", function(applet)
    local r, err = http.get {
        url = "http://127.0.0.1:8081/stream/100"
    }
    http.response.create{status_code=200, content=r.content}:send(applet)
end)
```
Please let me know if this works for you now.
My only use case for this library does not require reading the body. The issue just caught my eye while I was investigating #6, and I thought I'd report it since I happened to notice it.
I did not test, but your commit looks good to me.
> The returned headers are kept as is,

I'm not sure this is a good idea. By dechunking the body and leaving the `chunked` within the `transfer-encoding` header you are basically lying to the consumer, which can't simply use the existing `transfer-encoding` header, because it does not match the body string. OTOH if any transfer encoding other than `chunked` is present then `chunked` must also be present. Not sure if there is any good solution; maybe the response table should have separate values for `body` and `rawBody`?
For me and my use cases this is a hypothetical discussion, though. As mentioned above I don't use the body.
IMO, haproxy-lua-http as an HTTP client should not blindly relay the response to the HAProxy client side (otherwise, we'd just use HAProxy with ACLs). Here, for convenience, we collect the whole response body, but leave the chunked response header so we know that the response was chunked in the first place.
Consuming really big responses would not be convenient with Lua, so I'll add an optional parameter, receive_chunked(socket, collect_all=true), so one can consume the response body chunk by chunk. We still need to expose that through http.get(), but that's for another day :-)
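A generator variant of chunked decoding could be sketched as follows (Python used only to illustrate the protocol logic; the `read` callback and the absence of trailer headers are assumptions):

```python
import io

def iter_chunks(read):
    """Yield each chunk's payload from a chunked stream until the zero-size
    terminator. `read(n)` is assumed to return exactly n bytes.

    Sketch only: chunk extensions are discarded and trailer headers are
    assumed absent (only the final CRLF is consumed).
    """
    def read_line():
        # Read up to CRLF byte by byte and return the line without it.
        line = b""
        while not line.endswith(b"\r\n"):
            line += read(1)
        return line[:-2]

    while True:
        # chunk-size line: hex size, optionally followed by ";ext=..."
        size = int(read_line().split(b";")[0], 16)
        if size == 0:
            read_line()  # consume the empty line after the last chunk
            return
        yield read(size)  # exactly `size` bytes of payload
        read_line()       # consume the CRLF terminating the chunk data

# Feed a well-formed two-chunk stream through the generator:
stream = io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
chunks = list(iter_chunks(stream.read))
# chunks == [b"Wiki", b"pedia"]
```

The caller can then process each chunk as it arrives instead of buffering the whole body.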
It appears that the length of a received chunk is completely ignored; instead it is assumed that a chunk will consist of a single line, which is clearly incorrect for any kind of binary data (e.g. a gzip stream).
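To make the failure mode concrete: a binary chunk can contain CRLF bytes inside its payload, so a line-based parser would split it mid-chunk and mis-parse the rest of the stream. Reading exactly the number of bytes declared in the chunk-size line handles this correctly. A minimal sketch in Python (not the library's code; assumes a well-formed stream with no trailers):

```python
import io

def dechunk(read):
    """Minimal length-based dechunker. `read(n)` returns exactly n bytes."""
    out = b""
    while True:
        line = b""
        while not line.endswith(b"\r\n"):
            line += read(1)
        # chunk-size is hex, possibly followed by ";extension"
        size = int(line[:-2].split(b";")[0], 16)
        if size == 0:
            break
        out += read(size)  # read exactly `size` bytes: binary-safe
        read(2)            # consume the CRLF after the chunk data
    return out

# An 8-byte chunk whose payload contains an embedded CRLF:
raw = io.BytesIO(b"8\r\nab\r\ncdef\r\n0\r\n\r\n")
body = dechunk(raw.read)
# body == b"ab\r\ncdef" (the embedded CRLF survives intact)
```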