Closed — mstaalesen closed this issue 8 years ago
@mstaalesen Yes, we've reproduced this issue as well. We're working on this, but don't have any ETA as of now.
@mstaalesen We've noticed this issue too. It's especially annoying when running many iterations. It is because of leaks in Node's vm
module: https://github.com/nodejs/node-v0.x-archive/issues/637 and https://github.com/nodejs/node-v0.x-archive/issues/6552
I haven't yet fully investigated the issue with Node v4+; I'll update this thread with the results.
Is there any update on this? The linked Node bugs seem to reference each other and tail off without a solution.
I was able to find a workaround by changing the bin/newman shebang to pass --max-old-space-size=8192 to node.
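For reference, the shebang edit described above looks roughly like this. The node path and the 8192 MB limit are taken from this thread; the collection filename and iteration count below are placeholders, and the CLI flags shown are the v2-era syntax:

```shell
# Workaround sketch: raise V8's old-generation heap limit for the newman CLI.
# bin/newman normally begins with:
#   #!/usr/bin/env node
# Change it to pass the flag directly (the node path may differ on your system):
#   #!/usr/bin/node --max-old-space-size=8192
# An equivalent one-off invocation that avoids editing the file:
node --max-old-space-size=8192 "$(command -v newman)" -c collection.json -n 8845
```

Note that `#!/usr/bin/env node --max-old-space-size=8192` does not work on Linux, because `env` treats everything after the interpreter name as a single argument; that is why the hardcoded-path form is needed.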
Newman v3.x fixes this issue. Give it a try - https://github.com/postmanlabs/newman/tree/feature/v3
will do, thank you!
Still failing with the 3.x release. I got to iteration 75/8845 and then got this GC error:
```
<--- Last few GCs --->

325805 ms: Mark-sweep 1205.2 (1458.0) -> 1205.0 (1458.0) MB, 1163.8 / 0 ms [failed to reserve space in paged or large object space, trying to reduce memory footprint] [GC in old space requested].
327014 ms: Mark-sweep 1205.0 (1458.0) -> 1205.2 (1458.0) MB, 1208.7 / 0 ms [failed to reserve space in paged or large object space, trying to reduce memory footprint] [GC in old space requested].

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x352ffc0b4629
^C
```
Modifying the shebang to pass --max-old-space-size to node still works in 3.x. However, the memory growth rate of 3.x newman looks much worse than 2.x's. With 16 GB of memory, I used to get through ~8,000 iterations in 2.x, while in 3.x I'm not even reaching 750 iterations before it hits 16 GB and starts hard-stopping for GC on every iteration.
I'm going to explore writing my own Node.js script that splits the ~8,000 iterations into smaller chunks and runs them concurrently with Node's cluster module.
I think the issue is primarily due to storing executions and responses. We could add an option to store responses only when explicitly configured to do so. @czardoz
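As a rough illustration of that opt-in retention idea (this is not newman's actual internals; all names here are hypothetical):

```javascript
// Sketch: record run executions, but drop the potentially large response
// bodies unless retention is explicitly enabled.
function makeRunSummary(options = {}) {
  const summary = { executions: [] };
  return {
    record(execution) {
      // Keep the full execution only when the caller opted in.
      const entry = options.retainResponses
        ? execution
        : { ...execution, response: undefined };
      summary.executions.push(entry);
    },
    summary,
  };
}
```

With retention off by default, memory use stays proportional to the small per-request metadata rather than to the accumulated response payloads.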
@mstaalesen @Zambonilli A fix for this issue is under construction; you can follow its progress here.
@mstaalesen @Zambonilli Memory usage should be much lower as of Newman v3.1.1; please check it out and get back to us if your issue persists. Thanks!
Looks great! I ran version 3.1.2 for 3600 iterations and the memory usage was much better. Thank you for your hard work on this.
I'm glad it helped. We have also found ways to further optimise memory usage by making the reporters leaner.
There seems to be a memory leak in newman 1.2.23 when running more than one iteration of a collection. For my collection, it crashes after roughly 20 iterations with the following error:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory Aborted (core dumped)
The collection consists of ~800-900 tests (asserts) across 105 POST/GET requests.
At first I thought it could be related to environment variables being set by the tests, but I am clearing them per folder (12-15 folders, so roughly every 10th request). When I start newman, it allocates roughly 150 MB of memory, but after ~20 iterations it has passed 2 GB, and it just keeps growing until it crashes.
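For reference, the per-folder cleanup mentioned above can be done in a test script with the legacy Postman sandbox call. The `postman` stub and the `env` object below exist only to make this snippet self-contained; inside a real collection the sandbox provides `postman`:

```javascript
// Stand-in for the environment, populated by earlier requests' tests.
const env = { token: 'abc', userId: '42' };

// Stub of the legacy sandbox object, for illustration only.
const postman = {
  clearEnvironmentVariables() {
    for (const key of Object.keys(env)) delete env[key];
  },
};

// Placed in the tests of a folder's last request, this wipes every
// environment variable before the next folder runs:
postman.clearEnvironmentVariables();
```

As the crash reported above shows, though, clearing environment variables alone does not stop the growth, which points at something newman itself retains per iteration.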
It seems like newman is storing something (the server responses perhaps?) and never dropping it. Have you seen similar behavior to this?