@stwist-timeless Thanks for the detailed report. It does seem like a memory leak somewhere. Your use case is definitely one of the more stressful ones we've seen.
Repeating a request should not have any performance side effects - in fact, that's exactly the kind of use case it was built for. Storing a stringified JSON object (as long as it is not extremely huge) should also not affect performance. In your case, a 100-character JSON string is definitely not huge.
We're actually working on a redesign of the runner, which should address these issues. You can download the Canary release for Mac here. I'd also recommend joining our Slack community here so you can provide feedback before the runner moves to our stable builds :)
Do let us know if your issues still occur in the new runner.
@madebysid Thanks for getting back to me, and for pointing me towards the new runner in Canary. Firstly - I love the new runner! Definitely a huge improvement on the previous version. I have a few areas of feedback regarding it (just some suggestions, mostly), which I will list shortly.
Unfortunately, the new runner didn't fix my issue. It does appear to work better than the previous runner: it's faster to execute requests, and better with memory management. However, memory still climbed overall with each request that was made. After about 20-30 minutes of running tests, usage had climbed to 1.5GB (which is well below the maximum free memory of my machine), and then Postman crashed to desktop. I noticed that requests originally took < 250ms each (faster than in the old runner), but by the time it got close to crashing, Postman was only making a request every 3-4s (the server was still responding within its normal 50-100ms).
Does the runner keep the HTTP response for each request in memory? I wonder if that is what is causing issues. Most requests return a JSON body (ranging in size from a few KB to 500KB per response). If the runner is keeping all these in memory, perhaps this is causing it to struggle over time?
Or perhaps it has to do with variables being declared in the post-request testing? I'm quite liberal with my use of local variables within the test, under the (perhaps incorrect) assumption that they would be freed from memory after the test script completes (for that request). Perhaps this isn't the case?
With regards to the new runner, I have some feedback (plus a potential bug) - please let me know if you would prefer me to submit a new issue and/or submit the feedback elsewhere (e.g. on the Slack channel).
I really like the inclusion of a 'percentage complete' dial; it's really useful to know how far through a test run you are. However, it looks like the non-linear repeated tests confuse its prediction. The percentage increased over time, but it reached 99% (it even showed 100% for a second, then dropped back down to 99%) way before the end of the run. My guess is that it takes the total number of requests in the collection and then counts each request as it is made; if it also counts every repeat of the non-linear requests (which it appeared to, from how the percentage increased during these requests), it will reach the total long before testing is actually complete. Perhaps it should count the non-linear requests only the first time it executes them?
Also, I like how the results are presented, with the tests for each request nested underneath it. I would really like to see this extended further, so that each request is nested under the folder that contains it (in Postman) - and then, once we have the ability to have nested folders in Postman (something I am eagerly anticipating), to have those nested folders show up in the runner results too. This would be a huge improvement for me: I could quickly see which folders passed/failed, how many tests passed/failed, perhaps even how many requests passed/failed within a folder. It would also help me identify exactly which request failed. I have many requests with the same name, where the folder they sit in dictates exactly what the request is testing. I can sometimes work this out from the name of the request plus the endpoint it is hitting, but it would be a lot easier if I could see the folder it was nested under.
Thanks again for your response. Please let me know if there's anything I can try on my end to help figure out what's causing my issues with the runner.
@stwist-timeless Thanks for the feedback, I really appreciate you taking time to do this!
The new runner does store responses in memory if they are less than 300kB in size, for the latest 5 runs. Looking at your use case, though, we might have to tweak this behaviour so that responses are stored far more conservatively. Your variables are also garbage collected over time, so that should not be an issue either.
As for the percentage dial, we have no way of knowing how many requests will actually run before they do run, so what I do is increase it incrementally, with the total request count changing over time in the case of a setNextRequest. Perhaps I could look into making this prediction a little better.
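For anyone picturing the mismatch, here is a rough sketch of the estimation problem (illustrative only, not the runner's actual code):

```js
// Illustrative only - not the runner's code. The runner counts every executed
// request, including setNextRequest repeats, against a total it can only
// revise as the loops unfold, so the estimate saturates near 100% early.
function estimateProgress(executedCount, knownTotal) {
  // knownTotal starts at the collection's static request count and is bumped
  // whenever setNextRequest re-queues a request; executedCount can briefly
  // catch up to it while a long loop is still running.
  return Math.min(100, Math.round((executedCount / knownTotal) * 100));
}
```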
Our community's canary channel would definitely be better for feedback, as I'd be able to respond to you faster there :)
I'll keep you updated on the status of the issue here.
The new Runner is out now with our latest release (4.9) and has a dropdown that lets you select how many responses to log, and of which types. This should help reduce the memory usage.
Thanks for all the feedback. Closing this.
Is there a corresponding option in newman to choose what responses to store?
@anuragbhalla Newman does not store any responses. If you're trying to do this, you might want to take a look at Newman Reporters
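If you need the response bodies, newman's programmatic API emits a `request` event for every executed request, so you can persist responses yourself. A minimal sketch, with placeholder file paths:

```js
const fs = require('fs');
const newman = require('newman');

fs.mkdirSync('responses', { recursive: true });

// Run the collection and write each response body to disk ourselves,
// since newman does not retain responses. './collection.json' is a
// placeholder path to an exported collection.
newman.run({
  collection: require('./collection.json'),
  reporters: 'cli'
}).on('request', (err, args) => {
  if (err || !args.response) { return; }
  // args.response.stream is a Buffer containing the raw response body.
  const name = args.item.name.replace(/[^\w-]+/g, '_');
  fs.appendFileSync(`responses/${name}.txt`, args.response.stream.toString() + '\n');
});
```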
I've also had problems with memory usage in Postman when running a collection with a large data set. I have a very simple API with one input, but a data set consisting of more than 300,000 entries. When running this, Postman's memory usage creeps up until it eventually crashes. I've tried splitting the data set into separate files of 50,000 entries each, but these crash as well.
But, I HAVE FOUND A SOLUTION: running the collection from the Postman CLI tool seems to use almost no memory or CPU. When running from the GUI, one collection run consumes 100% of one CPU thread and memory usage keeps creeping up; when running 7 simultaneous collection runs from the Postman CLI, CPU and memory usage is negligible. So it seems the issue is with the GUI, not the underlying API call processing.
So my conclusion is: if you are able to run from the CLI, try that. But you of course lose the benefits of the GUI.
Regards, Ulrik
Hi,
I'm not sure if this is a bug vs. something I am doing wrong.
The setup:
I am using Postman + Collection Runner to test an API I am developing. I have one large collection that makes up my entire test suite. It currently consists of 1241 requests, sorted into about 200 folders. Each request runs anywhere from 2 to 10 tests.
A few of my requests use non-linear execution to loop a request anywhere up to 800 times, each time requesting a different URI. I'm using these tests to iterate over an entire database of items, and test each one to ensure it can be retrieved and meets the required formatting etc.
In addition, I am storing data from requests in environment variables for use in later requests. Often these are simple strings, but sometimes I am storing large JSON objects. For example, I am looping over 800 items in our database (one request, executed many times). After each request, I store a small amount of data into an array held in an environment variable (stringified JSON). On each request I get the environment variable, JSON.parse it, update the array, and then JSON.stringify it back into the environment variable. Each request adds at most around 100 characters to the JSON string.
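For reference, my test scripts follow roughly this pattern (variable and request names here are made up for the example; written with the pm.* API for clarity):

```js
// Illustrative sketch of my accumulate-and-loop pattern, not my exact scripts.
const summary = JSON.parse(pm.environment.get('itemSummary') || '[]');

// Append a small record (~100 characters) for the item just fetched.
const item = pm.response.json();
summary.push({ id: item.id, status: pm.response.code });
pm.environment.set('itemSummary', JSON.stringify(summary));

// Queue the same request again with the next item's URI until none remain.
const remaining = JSON.parse(pm.environment.get('remainingIds') || '[]');
if (remaining.length > 0) {
  pm.environment.set('nextId', remaining.shift());
  pm.environment.set('remainingIds', JSON.stringify(remaining));
  postman.setNextRequest('Get item by ID'); // hypothetical request name
}
```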
It should be noted that not all requests return a 2XX status code. Many requests are designed to get a 3XX and 4XX response from the server, testing to make sure such a response is returned (e.g. unauthorized).
The problem:
I can run each request individually, via Postman, fine. I can run each folder individually, via Collection Runner, fine. However, if I run the entire collection in Collection Runner, the following happens:

* At first, requests execute at approximately 1 request per second (this is typical, due to the response time of my local server, the time it takes to process the JavaScript tests, etc.).
* At first, Postman is using maybe 100MB of RAM.
* Over time, the gap between requests grows. The response time doesn't increase (I'm watching the logs from my server - the server itself consistently shows a response time of around 100ms), but the time between requests (again, I can see them hitting the server) gets longer and longer.
* Over time, Postman's memory usage climbs. It goes up and down during the testing, but in general it creeps upwards; it never seems to release as much memory as it consumes. Eventually it is using GBs of memory. If left to try to run the entire collection, I've seen it reach 16GB of RAM usage.
* Eventually Postman just flat-out crashes back to desktop, usually once my machine runs out of physical RAM and starts page swapping. I have not been able to complete an entire run of the collection.
The net result is that I cannot run the collection in its entirety. It will run for a few hours, eat up a huge amount of RAM, and then crash to desktop.
My question:
Firstly, I feel like maybe I am doing something wrong? Is storing large stringified JSON objects to environment variables my downfall in this scenario? Or perhaps repeating a request 800 times?
If this kind of usage is acceptable, is it possible that there's a memory problem in Postman under this load? I would expect memory usage to go up a little as the environment variables grow larger, but certainly not to the point of crashing. If I run each folder in turn (rather than the entire collection), I end up with the same environment variables (which persist when restarting Postman) - so the environment variables themselves don't take up that much RAM. It's only a little bit of text, after all, not GBs of the stuff. Perhaps Postman isn't freeing up memory after each request that it could be?
The only thing I can think of trying is to see if I can run the same collection of requests via newman, but I haven't had time yet to get newman set up and running.
Does anyone have any ideas?
Thanks! Steve
Version/App Information: