Quite important if we want to save large files. The point of this is that a browser can't hold that much in RAM, so we need a way to pipe the output to the browser's FileSystem API.
There is a browser Streams API on the way, and it would be really awesome if JSZip could use this new API when it comes out.
In the meanwhile we could do something simple, like the way the fetch API handles a stream read (see the "That's so fetch!" article, #streams section), or choose to implement the Streams polyfill if needed.
It would look something like this:
var writer = get_a_filesystem_writer_from_filesystem_api(); // placeholder; see the FileSystem API sketch below
writer.seek(0);

function pump(reader) {
  // Read one chunk, write it to the FileWriter, then recurse until the stream is done.
  return reader.read().then(function (result) {
    if (result.done) return;
    return new Promise(function (resolve, reject) {
      writer.onwriteend = function () {
        resolve(pump(reader));
      };
      writer.onerror = reject;
      // result.value is a chunk (an ArrayBuffer/Uint8Array); FileWriter.write() expects a Blob
      writer.write(new Blob([result.value]));
    });
  });
}

var stream = zip.generate({ type: "readableStream" });
pump(stream.getReader()).then(function () {
  console.log("done");
});
I managed to request a really large file with the Fetch API and pipe it to the FileSystem API without the RAM usage ever going up much.
But then I just changed the last 4 lines with this:
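(The exact lines from that test aren't reproduced here. Roughly, the idea is to source the stream from fetch()'s response.body instead of zip.generate(); the URL below is just an example.)

// Same writer and pump() as above, but the chunks now come from the Fetch API
// instead of zip.generate(). "/really-large-file.bin" is a made-up URL.
fetch("/really-large-file.bin").then(function (response) {
  // response.body is a whatwg ReadableStream
  return pump(response.body.getReader());
}).then(function () {
  console.log("done");
});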