Closed lewiji closed 8 years ago
You found the right cause of this problem, and your workaround isn't a hack. child_process.exec
will kill its child process when the child writes more data (in bytes) than allowed to stdout or stderr (documented here). And as it happens, unzip
lists every file it extracts on stdout by default, which is exactly what blows past the limit.
But I suspect that manually increasing maxBuffer
would one day fail again on someone else's even larger zip file.
Instead, I'll redirect stdout to /dev/null
when executing unzip
. It's only now that I realize why people append > /dev/null
to some commands.
I fixed this in b268b6b602990eefdf4ca9f54d93ce56bb100bf9. Take a look. The bug should be gone in 1.2.1.
@lewispollard need your help to test 1.2.1 on your large zip.
That's great, I can confirm 1.2.1 fixes the problem! Thanks for your help.
It's difficult to narrow this down to a specific limit, but on my server there is a project with a lot of individual files (2000+). Unfortunately, due to an NDA I can't supply this zip as an example. During the unzip step, an error would appear in the docker logs for the image: the unzip would fail because its output was too large for the stdout buffer, giving 'Error: stdout maxBuffer exceeded.'. The directory for the files would be created but left empty, despite the build output reporting success.
Wanting to view the error caused by the unzip command, I increased the maxBuffer by modifying deployer.js:
exec('unzip ' + artifactPath + ' -d ' + tempDestination, {maxBuffer: 1024 * 5000}, (err, stdout, stderr) => ...
thinking that I could then dig into the actual error. To my surprise, this completely fixed the problem and the files unzipped into the right place. But it feels like a hack (that's obviously an extreme value for an output buffer), and I'm not sure why the unzip was failing with the default maxBuffer setting.