Closed foxlegend closed 7 years ago
First of all, thank you very much for using Arquillian Cube and reporting such a bug. When I implemented this it was more of a feature, and I wasn't really sure whether it would be used or not. Now it seems it makes sense and needs to be fixed.
So absolutely, yes to a PR!!!!
It seems I made a mistake… I was working against an older Arquillian Cube version whose docker-java version had not been updated (to be honest, I was working with the Alpha5… It took me some time to notice it).
With the latest version of Cube, logs are fetched with a callback class (introduced in newer versions of docker-java) that writes directly to the output stream without buffering issues… So the bug I submitted does not exist in the current Cube release…
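For anyone curious how the callback approach avoids the buffering problem: the sketch below is a simplified stand-in, not docker-java's real API (docker-java's actual type is a `ResultCallback` over log frames). The class name `StreamingLogCallback` and its `onNext` signature are illustrative assumptions; the point is that each frame is written through to the target stream immediately, so no frame-sized buffer is ever accumulated.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical, simplified stand-in for a docker-java-style log callback.
// Each log frame is written straight to the target stream, so memory use
// stays constant regardless of how large the container's log is.
public class StreamingLogCallback {
    private final OutputStream target;

    public StreamingLogCallback(OutputStream target) {
        this.target = target;
    }

    // Called once per log frame; `payload` is the frame body.
    public void onNext(byte[] payload) throws IOException {
        target.write(payload);   // write-through, no accumulation
        target.flush();
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        StreamingLogCallback cb = new StreamingLogCallback(out);
        cb.onNext("line 1\n".getBytes());
        cb.onNext("line 2\n".getBytes());
        System.out.print(out);   // prints the two lines in order
    }
}
```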
I'm so sorry, please accept my apologies for that…
Hey @foxlegend, no need for apologies. Thank you so much for the effort of investigating it and coming back with detailed feedback.
Issue Overview
Using the "log" directive in a "beforeStop" section results in an OutOfMemoryError.
Expected Behaviour
Logs should be copied into the specified target directory.
Current Behaviour
An OutOfMemoryError is raised (I'm sorry, I didn't use the details element, as it doesn't preserve indentation, etc.):
Steps To Reproduce
Additional Information
I tried to investigate, and I found that the problem occurs in the following lines of the readDockerRawStream method of the DockerClientExecutor class: DockerClientExecutor.java.
There is no control over the size value extracted from the given headerBuffer: I think a better way would be to use an intermediate fixed-length buffer to copy data from the InputStream to the OutputStream. Another way might be to use IOUtils.copy() from commons-io, but that is a transitive dependency of docker-java…
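The fixed-length-buffer idea above might look something like the sketch below. This is not the actual Cube code: the method name `copyFrame` and the 8 KB chunk size are assumptions for illustration. Instead of allocating a buffer as large as the size field read from the stream header (which an attacker-controlled or simply huge log frame can blow up into an OutOfMemoryError), it loops over a small reusable buffer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class RawStreamCopy {
    // Hypothetical replacement for the vulnerable allocation: copy exactly
    // `size` bytes from the stream using a small fixed-length buffer, so
    // memory use is bounded no matter what the frame header claims.
    static void copyFrame(InputStream in, OutputStream out, long size) throws IOException {
        byte[] chunk = new byte[8192];   // fixed-length intermediate buffer
        long remaining = size;
        while (remaining > 0) {
            int toRead = (int) Math.min(chunk.length, remaining);
            int read = in.read(chunk, 0, toRead);
            if (read < 0) {
                break;                   // stream ended before `size` bytes
            }
            out.write(chunk, 0, read);
            remaining -= read;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[100_000];
        for (int i = 0; i < payload.length; i++) {
            payload[i] = (byte) i;
        }
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copyFrame(new ByteArrayInputStream(payload), sink, payload.length);
        System.out.println(sink.size()); // 100000
    }
}
```

This is essentially what IOUtils.copy() does internally, which is why either approach would fix the unbounded allocation.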
Are you interested in a PR?
Thank you :)