Closed markuman closed 9 years ago
Hi yes, this is possible to do :).
You read an appropriate amount of bytes, send that off, and wait until it has been sent before sending the next chunk.
Open up web.lua and look at the StaticFileHandler, which does exactly that.
But basically it's something like this (in pseudo code):
self:set_chunked_write()
while true do
    local chunk = f:read(some_bytes)
    if not chunk then break end -- stop when there is no more data
    self:write(chunk)
    coroutine.yield(turbo.async.task(turbo.web.RequestHandler.flush, self))
end
self:finish()
Thank you, got it now.
self:add_header('Content-Type', 'application/octet-stream')
self:add_header('Content-Disposition', 'attachment; filename=' .. filename)
local f = assert(io.open(filename, "rb")) -- open for binary read
local current = f:seek()  -- remember current position
local size = f:seek('end') -- seek to end to get file size
f:seek('set', current)     -- restore position
self:add_header('Content-Length', size)
while true do
    local chunk = f:read(1024 * 32) -- 32 KB chunks
    if not chunk then break end
    self:write(chunk)
    coroutine.yield(turbo.async.task(turbo.web.RequestHandler.flush, self))
end
f:close()
self:finish()
The Content-Length header is important too :)
Content-Length is required if you do not use chunked encoding as suggested. Otherwise your way works fine too.
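For comparison, here is a minimal sketch of the chunked-encoding variant, where no Content-Length header is needed because `set_chunked_write()` switches the response to `Transfer-Encoding: chunked`. The handler name, file name, and chunk size are illustrative, not from the thread:

```lua
local turbo = require("turbo")

-- Sketch only: streams a file using chunked transfer encoding,
-- so the total size need not be known up front.
local ChunkedDownloadHandler = class("ChunkedDownloadHandler",
                                     turbo.web.RequestHandler)

function ChunkedDownloadHandler:get()
    local f = assert(io.open("file.zip", "rb")) -- illustrative file name
    self:add_header("Content-Type", "application/octet-stream")
    self:set_chunked_write() -- no Content-Length header required
    while true do
        local chunk = f:read(1024 * 32) -- 32 KB per chunk
        if not chunk then break end
        self:write(chunk)
        -- Yield until this chunk has been flushed to the client.
        coroutine.yield(turbo.async.task(turbo.web.RequestHandler.flush, self))
    end
    f:close()
    self:finish()
end
```

Chunked encoding trades the client's ability to show a download progress bar (no known total size) for not having to `seek()` the file first.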
:
Closed #231 https://github.com/kernelsauce/turbo/issues/231.
— Reply to this email directly or view it on GitHub https://github.com/kernelsauce/turbo/issues/231#event-414098307.
Mvh / Best regards John Abrahamsen Tlf/Phone: (+47) 941 35 009
This works fine. So
127.0.0.1:8000/download
will start transferring file.zip.
But what about when I don't want the static "download" address and want to start a transfer more dynamically?
Currently I'm doing
But this has a huge RAM consumption when the transferred files are large. Any ideas on how to handle this in a better way?
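One way to combine a dynamic address with flat memory use is a URL pattern capture, which Turbo passes to the handler method as an argument, together with the same chunked streaming loop from earlier in the thread. A sketch under those assumptions (the route, port, and chunk size are illustrative, and the path check is only a reminder, not a complete sanitizer):

```lua
local turbo = require("turbo")

-- Sketch only: the (.*) capture in the route becomes the 'filename'
-- argument, so any file can be requested dynamically, while streaming
-- in fixed-size chunks keeps RAM usage constant regardless of file size.
local DownloadHandler = class("DownloadHandler", turbo.web.RequestHandler)

function DownloadHandler:get(filename)
    -- NOTE: real code must validate 'filename' against path traversal
    -- (e.g. reject anything containing "..").
    local f = assert(io.open(filename, "rb"))
    local size = f:seek("end") -- file size for Content-Length
    f:seek("set", 0)           -- rewind before reading
    self:add_header("Content-Type", "application/octet-stream")
    self:add_header("Content-Disposition",
                    "attachment; filename=" .. filename)
    self:add_header("Content-Length", size)
    while true do
        local chunk = f:read(1024 * 32) -- 32 KB at a time
        if not chunk then break end
        self:write(chunk)
        coroutine.yield(turbo.async.task(turbo.web.RequestHandler.flush, self))
    end
    f:close()
    self:finish()
end

local app = turbo.web.Application({
    {"^/download/(.*)$", DownloadHandler} -- illustrative route
})
app:listen(8000)
turbo.ioloop.instance():start()
```

With this route, `127.0.0.1:8000/download/file.zip` would stream file.zip without ever holding more than one chunk in memory.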