When the called command produces large output and is run in TEE mode, some of the output is lost.
See these 2 files, for example:
long_output.py:

    #!/usr/bin/python
    for i in xrange(4000):
        print i
And problem.py:

    #!/usr/bin/python
    import plumbum
    from plumbum import TEE
    exe = plumbum.local["./long_output.py"]
    ps = exe & TEE
    print "\n\n================================\n\n"
    (retcode, stdout, stderr) = ps
    with open('problem.log', 'w') as f:
        f.write(stdout + '\n')
Every time I run it the outcome is slightly different, but on every run I get fewer than the 4000 lines that long_output.py produces.
I assume that some internal buffer holding stdout simply overflows.
This could be solved by adding an option to write all stdout directly to some file.
It's a very common use case, and it's quite similar to Unix `tee`.
Another possibility would be to dynamically grow the buffer, perhaps up to some (configurable) threshold, beyond which an error, or at least a warning, would be issued.
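As a stopgap, the "write stdout directly to a file" behaviour can be approximated with the standard library instead of plumbum: consume the child's stdout pipe line by line, echoing each line to the terminal and into a log file. Because the pipe is drained continuously, no fixed-size buffer can overflow. This is a minimal sketch in modern Python; `run_tee` and the inline 4000-line command are hypothetical stand-ins for `./long_output.py`, not part of plumbum's API:

```python
import subprocess
import sys

def run_tee(cmd, log_path):
    """Run cmd, streaming its stdout both to our stdout and to log_path.

    Reading the pipe continuously means no fixed-size buffer can overflow.
    """
    with open(log_path, "w") as log:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        for line in proc.stdout:
            text = line.decode()
            sys.stdout.write(text)  # echo to the terminal, like tee
            log.write(text)         # and also into the file
        proc.stdout.close()
        return proc.wait()

# Hypothetical stand-in for ./long_output.py: prints 4000 lines.
rc = run_tee([sys.executable, "-c",
              "for i in range(4000): print(i)"], "problem.log")
```

With this pattern all 4000 lines reach both the terminal and problem.log, which is exactly the guarantee the TEE modifier should provide.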
The problem is (I believe) in the direct usage of pipes - running into the warning given in the Python subprocess docs. I won't have time to look into this for a little while.
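The warning referred to is the deadlock note in the subprocess docs: reading `proc.stdout` and `proc.stderr` one after the other can block forever once an OS pipe buffer fills, and naive workarounds lose data. A sketch of the pattern the docs recommend, `communicate()`, which drains both pipes concurrently (the inline command generating output on both streams is a hypothetical example):

```python
import subprocess
import sys

# A child that writes plenty of output to both stdout and stderr;
# reading the two pipes sequentially with .read() risks deadlock
# once an OS pipe buffer fills up.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for i in range(4000): print(i); print(i, file=sys.stderr)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# communicate() reads both streams concurrently until EOF,
# so no output is lost and the child never blocks on a full pipe.
out, err = proc.communicate()
```

A TEE implementation built on this (or on reader threads/select over the pipes) would avoid the truncation seen above.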