SeleniumHQ / docker-selenium

Provides a simple way to run Selenium Grid with Chrome, Firefox, and Edge using Docker, making it easier to perform browser automation
http://www.selenium.dev/docker-selenium/

Investigate RAM considerations in the Nodes #15

Closed mtscout6 closed 4 years ago

mtscout6 commented 9 years ago

@c0nstructor brought up a valid concern about RAM consumption when running a node that drives only a single browser session at a time. (See issue #14)

mtscout6 commented 9 years ago

@psftw thoughts?

c0nstructer commented 9 years ago

@mtscout6 I spawn a hub and 4 nodes, and it bloats up to 1 GB of RAM. My solution was to spawn 1 node per parallel run.

psftw commented 9 years ago

From another comment I just made:

In the context of Docker, I agree that the single-browser-per-node model makes the most sense, but users are always free to customize the configuration and run whatever works for them.

To be clear, I'm not the official litmus tester of Docker Best Practices, but my take is that both ways are OK. I think we should recommend the single-browser-per-node model, but also mention its drawbacks (resource consumption). Seeing some numbers from a good test case could be enlightening.
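For illustration, the single-browser-per-node model would be pinned in the node's JSON config. A hypothetical sketch (the exact field layout varies between Selenium 2.x and 3.x, and the hub hostname and ports here are placeholders):

```json
{
  "capabilities": [
    {
      "browserName": "chrome",
      "maxInstances": 1,
      "seleniumProtocol": "WebDriver"
    }
  ],
  "maxSession": 1,
  "hub": "http://hub:4444",
  "port": 5555
}
```

With `maxSession: 1`, the hub never routes a second concurrent session to the node, regardless of how many sessions the hub itself could handle.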

psftw commented 9 years ago

FWIW @mtscout6, I'm more concerned about the xvfb-run wrapper than what the Selenium server orchestrates on the backend.

c0nstructer commented 9 years ago

@psftw Is it possible to make one image of Xvfb, and have node-chrome/firefox use that one Xvfb container instead of instantiating it every time? As I understand it, Xvfb runs like a server: it manages the video buffer, and every application that uses a GUI goes through Xvfb. There's no need for multiple instantiations.

mtscout6 commented 9 years ago

I have created a stress test that runs multiple nodes against one hub with many tests. The test logs the memory.stat file every second for each node. Tomorrow I plan to compare memory usage with one browser instance per node vs. many. Do either of you see anything wrong with the way I'm gathering those memory usage stats?
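For reference, memory.stat under the cgroup v1 memory controller (on the host, typically at `/sys/fs/cgroup/memory/docker/<container-id>/memory.stat`) is a flat file of `key value` pairs with values in bytes. A tiny sketch of pulling the `rss` counter out of a sampled line and converting it to MiB; the sample value here is made up:

```shell
# Illustrative memory.stat line; a real sample would be read from the
# container's cgroup directory on the host.
sample="rss 188743680"

# Extract the rss counter and convert bytes to MiB.
echo "$sample" | awk '$1 == "rss" { printf "%d\n", $2 / (1024 * 1024) }'
# → 180
```

Logging that once per second per container, as described above, gives a time series that can be averaged per test run.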

mtscout6 commented 9 years ago

Docker Selenium Memory Usage Test Results

| Test | Hub Count | Node Count | Sessions Per Node | Test Count | Average Node Mem | Average Node Swap | Test Duration |
|---|---|---|---|---|---|---|---|
| stress-chrome-1 | 1 | 10 | 1 | 200 | 179.56 MB | 0 MB | 00:04:09 |
| stress-firefox-1 | 1 | 10 | 1 | 200 | 229.97 MB | 0 MB | 00:05:10 |
| stress-chrome-2 | 1 | 10 | 2 | 250 | 207.10 MB | 0 MB | 00:03:48 |
| stress-firefox-2 | 1 | 10 | 2 | 250 | 280.38 MB | 0 MB | 00:04:32 |
| stress-chrome-2.5 | 1 | 5 | 2 | 200 | 187.46 MB | 0 MB | 00:09:20 |
| stress-firefox-2.5 | 1 | 5 | 2 | 200 | 219.87 MB | 0 MB | 00:04:06 |
| stress-chrome-5 | 1 | 3 | 5 | 200 | 323.77 MB | 0 MB | 00:02:57 |
| stress-firefox-5 | 1 | 3 | 5 | 200 | 316.01 MB | 0 MB | 00:04:36 |

The metrics collected reflect the memory usage of the named container.

Full logs can be found here

If we bumped the allowable sessions up to 2, we'd only get a win of 30-100 MB for Chrome and 50 MB for Firefox. I don't know if that's really enough of a gain to justify increasing the session count. Thoughts?

P.S. I do know that the metrics tests I used are incredibly simple, and I don't know how it would come out with a larger test.

c0nstructer commented 9 years ago

Hey, sorry for not responding for so long. I didn't get a chance to test it like you did, but when you spawn 1 node, it works like a charm. I tried to spawn 5 sessions on 1 node, so it might have something to do with memory. I am now spawning 1 hub and 5 nodes per hub, hoping that will solve my memory issues. Also, there is an error if you spawn more nodes on one hub, even if you write 20 sessions (not nodes) in your hub config file. I hope to resolve this by spawning multiple hubs with one node per hub.

ghost commented 9 years ago

We used to run 4 nodes per VM, and our issue ended up being the CPU maxing out, not memory. I have since dialed down to a VM with 2 CPUs running 1 node, but I increased to 2 browser sessions per node. We get about 80% max CPU utilization. I start a node with JAVA_OPTS=-Xmx4G since the VM has 6 GB of RAM; this seems a nice box for building grids. My standard grid is a hub on 1 VM and 20 nodes on 20 VMs. Works for us.
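A hypothetical invocation along those lines, assuming the docker-selenium images of that era (the image tag, hub address, and env var names are placeholders; check your image's documentation for the exact hub-linking variables):

```shell
# Cap the node JVM's heap at 4 GB on a 6 GB VM, leaving headroom for the
# browser processes themselves, which live outside the JVM heap.
docker run -d \
  -e JAVA_OPTS=-Xmx4G \
  -e HUB_HOST=hub -e HUB_PORT=4444 \
  selenium/node-chrome:latest
```

Note that `-Xmx` only bounds the Java process; Chrome/Firefox memory is separate, which is why the VM needs the extra 2 GB of headroom.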

Julioevm commented 5 years ago

I've been finding Selenium using more and more RAM inside the node containers ever since I moved from the 3.11 containers to the 3.14.x ones. I'm currently using selenium/node-firefox:3.141.59-radium with the --privileged and --shm-size 2g parameters. The nodes start at around 150 MB, but after several tests have run through them, the Selenium RAM usage keeps growing. I could previously run 15 nodes at once on a Google Cloud machine with 15 GB of RAM, but now I've had to lower it to 10 nodes and raise the RAM to 20 GB to prevent the machines from running out. I re-create the containers after every test, but some tests still have hundreds of executions, and the RAM usage, I believe, is too high. Could this be a memory leak? The driver sessions quit at the end of each execution. [Screenshots attached: the Selenium processes on a machine running several node containers, and the size of those processes after a while.]
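The run command described above would look roughly like this (detached mode and the container name are added for illustration and are not from the comment):

```shell
# --shm-size 2g enlarges /dev/shm beyond Docker's 64 MB default, which
# Firefox and Chrome need to avoid shared-memory crashes during rendering.
docker run -d --name firefox-node-1 \
  --privileged --shm-size 2g \
  selenium/node-firefox:3.141.59-radium
```

Re-creating such containers between test batches, as described, resets any per-process memory growth at the cost of node startup time.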

diemol commented 5 years ago

@Julioevm is there a way that you can provide a scenario to reproduce that memory usage?

Julioevm commented 5 years ago

> @Julioevm is there a way that you can provide a scenario to reproduce that memory usage?

I'll try to set up a scenario to reproduce this.

diemol commented 4 years ago

I am going to close this as it has become stale.

If someone finds issues with this, please open a new issue with enough information.

lock[bot] commented 4 years ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.