Closed ryanrhee closed 4 years ago
cc @missinglink
On my system, an entry size of 15871 works fine but 15872 fails. This happens to be the point where the buffer gets big enough to exceed the high-water mark, which triggers the ._drain mechanism, which in turn means the callback never fires.
Hi @ryanrhee,
The backpressure mechanism is intended to only allow you to enqueue a maximum of HWM bytes into RAM; prior to that PR you could enqueue as much as you liked.

If your large file were, for example, 100GB (more than your available RAM), then there are two options for handling it:

- attach a consumer (sink) to the stream so bytes are drained as they are produced, or
- enqueue at most HWM bytes and wait for a consumer before continuing

So I believe this is working correctly and the solution is simply to attach a sink to the stream?
```js
// pipe the pack stream somewhere
pack.pipe(process.stdout)
```
Yep, this is backpressure kicking in. As mentioned above, pipe it somewhere, or call .resume() on the stream.
yup, you two are correct. thanks. closing.
Sending this PR of a failing test to demonstrate that adding an entry of sufficient size fails.
This test passes in the previous version but breaks after https://github.com/mafintosh/tar-stream/pull/114 .
Is there something different that needs to be done when the backpressure mechanism is activated and the callback is assigned to the ._drain variable? If so, that isn't discussed in the documentation as far as I can tell.