Closed: ghost closed this issue 2 years ago
I will have to look at your code tomorrow, but if you run it without parallelism (or with Parallel = 1) it will probably peg out a core, depending on your website.
If your page is a simple static HTML page then it will probably only use 25-50% of a core. For a SPA, there is a ton of JS being downloaded/parsed/compiled, CSS layouts calculated, API calls made, all the JSON parsed, and so on. All of this is very intensive. Then couple that with the 'user' interacting with the page within milliseconds (transitioning pages, clicking, expanding, etc.), and one or more cores will be pegged.
Take all of that and multiply by 6 and your CPU will be maxed.
There are three things you can do to help determine the sweet spot.
1) Try Parallel = 1, then 3, then 6, and see roughly how many cores each level of parallelism works out to.
2) Do a similar setup, but have it go to the canopy test page (http://lefthandedgoat.github.io/canopy/testpages/) and run some basic tests (load page -> click button -> validate) in a loop, to see how that behaves at 1, 3, and 6 parallelism.
3) Manually open the page you have under test and watch the CPU: do things like Ctrl+R for a hard refresh, go from a simple page to the page with the most going on, and watch the CPU spike. That will help you understand how intensive the page under test is. Trying it on an old low-powered laptop will also help you get a feel for how resource-intensive it is.
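Step 2 above can be sketched as a tiny canopy script (a hedged sketch: depending on your canopy version the open is `open canopy` or `open canopy.classic`, and the `#button` / `#button_clicked` selectors are the ones used in canopy's own docs for the test page):

```fsharp
// Minimal load -> click -> validate loop against the public canopy test page.
// Run this at Parallel = 1 / 3 / 6 and watch CPU usage in Task Manager.
open canopy.classic

start chrome
url "http://lefthandedgoat.github.io/canopy/testpages/"

for _ in 1 .. 20 do
    url "http://lefthandedgoat.github.io/canopy/testpages/"
    "#welcome" == "Welcome"                // basic load validation
    click "#button"                        // click the demo button
    "#button_clicked" == "button clicked"  // validate the result

quit()
```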
A couple of other things you can do that are specific to canopy: https://lefthandedgoat.github.io/canopy//Docs/configuration.html
Set this to true if you don't have any iframes that you need to interact with (it defaults to false):
optimizeBySkippingIFrameCheck <- false
If you do not use all of the default finders, you can create your own list so canopy won't attempt (and fail) the useless ones. The default is:
configuredFinders <- finders.defaultFinders
(https://github.com/lefthandedgoat/canopy/blob/master/src/canopy/finders.fs#L95-L107)
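Put together, the two settings above are one-liners at suite startup (a sketch: both live in `canopy.configuration`, and the `configuredFinders` line restates the default, which you would swap for your own trimmed-down function based on finders.fs, linked above):

```fsharp
open canopy
open canopy.configuration

// no iframes on the pages under test, so skip the iframe check entirely
optimizeBySkippingIFrameCheck <- true

// default shown; replace with your own function that only tries the
// finder strategies your suite actually uses (see finders.fs)
configuredFinders <- finders.defaultFinders
```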
Hope this helps!
Thanks @lefthandedgoat - that's really helpful info! I don't think we have iframes, so I've added optimizeBySkippingIFrameCheck as it sounds like a sensible thing to do.
The website under test is multiple pages, but there is quite a lot going on, so maybe 6 at once is pushing it too far. I just tried Parallel = 3 and CPU usage constantly jumps anywhere from 30% to 100%, but in a run of 60 tests, all instances eventually closed down, so it feels like an improvement. The tests will be running on VMs via Azure Pipelines anyway, not locally, so running fewer in parallel and tests taking a bit longer won't be an issue.
If we could keep this issue open for now that'd be great; once I've managed to do more testing I'll report back.
Thanks again, hope you have a great Holiday season!
I have a much older CPU: an i7-4790K, 4 physical + 4 hyperthreaded cores @ 4 GHz base.
Here are some results (screenshots attached): the tests you wrote at x1, x3, x6; and a heavier Kendo UI page at x1, x2, x3, x4, x6.
I think Chrome is very good at using a lot of cores. If you have 8, 12, or 24 cores, then you will get some gains by launching a few more browsers. If you are testing a page that has a lot of slower API calls, I think you will win with parallelism too.
Ultimately you will just have to test it out on your real world code, especially in your pipeline and try a few different settings to see what gives the best results.
My fork with the kendo tests: https://github.com/lefthandedgoat/canopyTemplate
Sorry for the delay - that's amazing @lefthandedgoat; thank you so much. I'll indeed have a play around with it and try to find the 'sweet spot' based on the hardware I'm using, but this is so useful. Happy New Year!
Description
I am running my tests in parallel using canopy.parallel.functions. I have an issue with one of my canopy test suites, which has a fairly large number of tests (110), many of which take over a minute to run. During test execution, system resources start getting hogged, and it gets progressively worse to the point where the machine is unusable and I have to run a script to kill all instances of chrome.exe and chromedriver.exe to alleviate it. Initially, each instance of Chrome closes once its test is complete, but as resources get used up they stack up and I end up with lots of open instances. Task Manager shows that during test execution, CPU is at 100% for 'System interrupts' and memory is at 70% (creeping up over time), mostly for Chrome. I have set parallel execution to 6 (one per core of my processor) with NUnit's LevelOfParallelism attribute.
In my TearDown function I am calling quit browser, so I'm not sure what could be causing this apparent memory leak. Is there a more aggressive function to call than quit?
The only possible thing I can think of is that I do not pass the browser in to all of my functions; but as far as I am aware, those are only functions that do not interact with the browser, e.g. creating a directory, trimming strings, that sort of thing. Either way, surely once the test is done the resources should be freed up.
Any pointers would be greatly appreciated - thank you.
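For reference, the shape of the setup described above can be sketched like this (hedged: the fixture and member names are illustrative, the parallel-functions API should be checked against your canopy version, and note canopy spells the module `parallell` with a double 'l'):

```fsharp
open NUnit.Framework
open canopy.parallell.functions

// usually lives in AssemblyInfo.fs: cap NUnit at 6 concurrent fixtures
[<assembly: LevelOfParallelism(6)>]
do ()

[<TestFixture>]
[<Parallelizable(ParallelScope.Fixtures)>]
type SmokeTests() =
    let mutable browser : OpenQA.Selenium.IWebDriver = null

    [<SetUp>]
    member _.Setup() =
        // each fixture gets its own browser instance
        browser <- start canopy.types.BrowserStartMode.Chrome

    [<Test>]
    member _.LoadsTestPage() =
        url "http://lefthandedgoat.github.io/canopy/testpages/" browser

    [<TearDown>]
    member _.Teardown() =
        // quit only this fixture's browser, as in the description above
        quit browser
```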
GitHub repo
I've put together a basic repo using the canopy test page so that you can see how I have structured the framework. Note that this example actually seems to run correctly, e.g. only 6 instances are ever open simultaneously and each test nicely closes down once finished. It would be good to see if anyone can find fault in the structure; otherwise perhaps it's because I'm doing something funky in some of my real-world tests: https://github.com/jamescodes85/canopyTemplate
System