Closed by nmilovanovic1985 2 years ago
@nmilovanovic1985 any traffic dump could help to diagnose this.
Hi,
I have compared the traffic between Chrome 101 and 102 using tcpdump, and if my observation is correct there is a difference in the outgoing calls that Chrome makes: a much larger number of calls targeting 104.22.17.242.
```
11:14:21.129796 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.129916 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.129969 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.130094 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.130148 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.130197 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.132078 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132127 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132127 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132177 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132177 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132177 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
11:14:21.132177 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
```
I also found something about a new Chrome security mechanism that can affect Chrome when it is used from a secure/private network. I am not sure whether this is the source of the problem, and whether there is a way to suppress these outgoing calls.
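To quantify which destination dominates in such a dump, a small pipeline over the tcpdump text output can count packets per host. This is a sketch assuming the dump format shown above; the sample lines and the `/tmp/dump.txt` path are illustrative:

```shell
# Count packets per destination IP in a tcpdump text dump (format as above).
# Sample data mirrors the dump shown in this thread.
cat > /tmp/dump.txt <<'EOF'
11:14:21.129796 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.129916 IP 10.x.x.x.51705 > 104.22.17.242.https: UDP, length 45
11:14:21.132078 IP 104.22.17.242.https > 10.x.x.x.51705: UDP, length 1200
EOF
# Field 5 is the destination ("host.port:"); strip the trailing port and count.
awk '{ sub(/\.[^.:]+:$/, "", $5); print $5 }' /tmp/dump.txt | sort | uniq -c | sort -rn
```

For the sample above, the top line of the output is the suspect host with a count of 2.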
What is very strange is that I couldn't find any similar issue reported, and my team and I have been struggling with this for some time.
I also tried with the "InsecurePrivateNetworkRequestsAllowed" policy enabled, but it didn't have any effect.
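For context, Chrome on Linux reads managed policies from JSON files under `/etc/opt/chrome/policies/managed/`. This is roughly how we applied the policy (a minimal sketch; it writes under `/tmp/demo` so it is safe to run anywhere, while the real path has no `/tmp/demo` prefix and the `network.json` filename is arbitrary):

```shell
# Lay out a managed policy file the way Chrome on Linux expects it.
# Written under /tmp/demo here for illustration; Chrome reads the real
# path /etc/opt/chrome/policies/managed/ without the prefix.
mkdir -p /tmp/demo/etc/opt/chrome/policies/managed
cat > /tmp/demo/etc/opt/chrome/policies/managed/network.json <<'EOF'
{
  "InsecurePrivateNetworkRequestsAllowed": true
}
EOF
cat /tmp/demo/etc/opt/chrome/policies/managed/network.json
```

Whether the policy was actually picked up can be checked on the chrome://policy page inside the browser.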
https://bugs.chromium.org/p/chromium/issues/detail?id=1329852&q=internet&can=2
Also found this issue and added a net-export attachment in the comment.
We have the same issue: on Chrome images >101.0, network traffic reaches over 200 Mbps. Any input on that?
We have no issue with Chrome 101, only with 102 and newer versions, and we are still waiting on our network team to determine what is consuming our entire network when we run our tests... very frustrating... :(
The latest finding that we have is that when all containers are running, download traffic on our Docker nodes is very high.
A couple of things that I want to add regarding this finding:
- Is there a possibility that something on container start triggers a background download of some missing layers?
- We will try to find the source of the download usage later today, so I will post an update if we find something new.
@nmilovanovic1985 we just install official browser packages to our images. A tcpdump capture of the traffic could help to diagnose.
@vania-pooh Here is a link to download the tcpdump file; it was too big to upload here.
Thanks in advance
So the latest response on this in the Chromium issue from Google is as follows: "are you folks starting a bunch of tests with a fresh --user-data-dir every time or something? That thing may not work so well if you start fresh every time, though I am not sure what to suggest"
@Dor-bl chromedriver itself probably does this.
@vania-pooh And this was the second comment from the Google maintainer: "Hmm, so I can't find an off switch, but something like `--component-updater=initial-delay=86400` might help, or maybe `--component-updater=url-source=https://some-syntactically-valid-but-not-resolving-url/`"
Same result when I used this switch... :(
@vania-pooh @Dor-bl Finally, I've found a solution that works for us: disabling Chrome component updates.
In `/static/chrome/Dockerfile` I added:

```
RUN mkdir -p /etc/opt/chrome/policies/managed/ && \
    chown selenium:nogroup /etc/opt/chrome/policies/managed/ && \
    echo '{ "ComponentUpdatesEnabled": false }' > /etc/opt/chrome/policies/managed/component_update.json
```
With this, the component-update policy is disabled and the download issue is gone.
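One pitfall worth noting: Chrome silently ignores a policy file that is not valid JSON, and broken shell quoting in an `echo` can easily produce one. A quick sanity check after writing the file (the `/tmp` path is illustrative; check the real file under `/etc/opt/chrome/policies/managed/`):

```shell
# Chrome ignores malformed policy JSON, so validate the file after writing it.
echo '{ "ComponentUpdatesEnabled": false }' > /tmp/component_update.json
# json.tool exits non-zero (and prints an error) if the file is not valid JSON.
python3 -m json.tool /tmp/component_update.json
```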
@nmilovanovic1985 we need to add this to our images.
Will rebuild Chrome 105.0 image soon.
Should be available in browsers/chrome:105.0 and other images with the same Chrome version.
Setup that we are using:
- On one server we have GGR.
- On two separate servers, each server runs 4 Selenoid Docker nodes that can each run up to 6 Docker containers, i.e. 48 Chrome test containers in total.
- Selenium tests are executed via Jenkins.
- We create Chrome images locally using aerokube/images.
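The container count in the setup above works out as:

```shell
# 2 servers x 4 Selenoid nodes per server x 6 containers per node = 48
echo $(( 2 * 4 * 6 ))   # → 48
```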
After we created an image for Chrome 102.0.5005.61 we found an issue with internet consumption: our available download bandwidth drops from 180 Mb/s to 0 Mb/s in just a couple of minutes when all 48 containers are running. We have the same issue with all newer Chrome versions that came after 102.0.5005.61.
At the moment we are still running our tests on Chrome 101.0.4951.64; with this version we have no issues and the network impact is minimal with all 48 containers running.
We have pulled the latest versions of Selenoid, GGR, and Selenoid UI, and upgraded to image 7.3.0. We also pulled both the selenoid/chrome:101.0 and selenoid/chrome:102.0 images from Docker Hub without building them locally and reproduced the same behavior: Chrome 101 works with no issue, while 102 kills the network.
This problem blocks the Chrome upgrade for us. If you have any idea what the problem could be, that would be great, and if any more info is needed from our side I will be more than willing to share it.
Thanks