consideRatio opened this issue 2 months ago
Based on https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359227825, I think this may not be an issue with CHP so much as the software running in the user servers leading to a flood of connections being initiated via the UI.
@felder this is a follow-up to https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359416947. I inspected two active deployments, with 222 and 146 currently active users respectively.
From inspection, it seems this makes use of jupyter_server 2.12.1 and jupyterlab 4.0.9.
This is from a CHP pod whose hub currently has 222 user pods running the image quay.io/2i2c/utoronto-image:2525722ac1d5, where users may be accessing /tree or /lab; it's not clear how UI usage is distributed among those.
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8081 | wc -l
80
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8888 | wc -l
1416
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | wc -l
1609
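For reference, a rough way to break those totals down by destination - to see whether established connections concentrate on a single hub:8081 pair or spread across many user pods - might be something like this (column positions assume the netstat output format used above):

# count ESTABLISHED connections per foreign address (column 5), most common first
netstat -natp | awk '$6 == "ESTABLISHED" {print $5}' | sort | uniq -c | sort -rn | head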
This is from a CHP pod whose hub currently has 146 user pods running the image quay.io/2i2c/utoronto-r-image:5e7aea3c30ff, where users are accessing /rstudio.
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8081 | wc -l
5
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8888 | wc -l
1164
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED| wc -l
1250
From inspection, it seems this deployment uses jupyter-server 1.24.0 together with RStudio in the frontend.
This doesn't rule out CHP; to do that you'd need to compare with another proxy such as Traefik. For example, if CHP isn't closing connections as fast as the browser, this could lead to too many ports in use.
Do the existing CHP tests cover HTTP persistent connections? https://en.m.wikipedia.org/wiki/HTTP_persistent_connection
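Not an answer on test coverage, but one way to observe keep-alive behaviour against a running CHP might be something like the following, where the proxy address and the /test route are assumptions and only the client<->CHP side is exercised:

# two requests in one curl invocation; "Re-using existing connection" in the
# verbose output means the client<->CHP connection was kept alive
curl -sv -o /dev/null -o /dev/null \
  http://localhost:8000/test/api/status http://localhost:8000/test/api/status \
  2>&1 | grep -iE 'connected to|re-us'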
One thing I'm noticing as I investigate is that user servers running lab (as opposed to rsession-proxy or the like) interact with the hub pod a lot more often. Anytime I interact with the file browser, launcher, etc., last_activity for the hub pod route in CHP updates. This is not the case if /rstudio is designated as the default URL.
Additionally, the ESTABLISHED connection count to hubip:8081 with a single user pod running lab (as opposed to rstudio) increments pretty steadily as I do things like kill the pod, kill the kernel, refresh the browser, etc.
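A crude way to watch that count over time from inside the proxy container could be something like:

# refresh the ESTABLISHED count towards port 8081 every 5 seconds
watch -n 5 "netstat -nat | grep ESTABLISHED | grep ':8081' | wc -l"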
> This doesn't rule out CHP; to do that you'd need to compare with another proxy such as Traefik. For example, if CHP isn't closing connections as fast as the browser, this could lead to too many ports in use.
I believe this might be happening... if a user closes their laptop, or opens their notebook in a new browser (which happens more often than you'd imagine), we see a lot of spam (hundreds of 503s being reported) in the proxy logs:
21:08:06.483 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.491 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.514 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.533 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.536 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.561 [ConfigProxy] info: Removing route /user/<hub user>
21:08:06.561 [ConfigProxy] info: 204 DELETE /api/routes/user/<hub user>
21:08:15.521 [ConfigProxy] info: Adding route /user/<hub user> -> http://10.28.26.176:8888
21:08:15.521 [ConfigProxy] info: Route added /user/<hub user> -> http://10.28.26.176:8888
21:08:15.521 [ConfigProxy] info: 201 POST /api/routes/user/<hub user>
21:08:18.845 [ConfigProxy] info: 200 GET /api/routes
Hmmm, so we have a spam of 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888, where something (jupyterlab in the browser?) tries to access a user server, but the proxying fails with connection refused - perhaps because the server is shutting down or similar.
After that, jupyterhub asks CHP to delete the route.
After that, I expect the thing that got 503s won't get 503 responses any more, because the proxy won't try to proxy to that route; instead it will do something else - presumably fall back to the hub pod as the default route - which then gets spammed.
@shaneknapp I guess we could see such redirects with debug logging or similar - or would redirect responses from CHP already show up in the current logs, and we just aren't seeing them?
I think /api/events/subscribe is associated with websockets - it's an endpoint added in jupyter_server 2.0.0a2. Is something in jupyterlab's browser-side code retrying excessively against it when it fails?
From the logs I see one failed request roughly every 10ms, five times in a row, which I guess means there is essentially no delay between re-attempts.
21:08:06.483
21:08:06.491
21:08:06.514
21:08:06.533
21:08:06.536
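For what it's worth, a small sketch to turn those timestamps into the actual gaps between attempts:

# print the gap in ms between consecutive timestamps above
printf '%s\n' 21:08:06.483 21:08:06.491 21:08:06.514 21:08:06.533 21:08:06.536 |
  awk -F'[:.]' '{t = (($1*60 + $2)*60 + $3)*1000 + $4; if (NR > 1) print t - prev " ms"; prev = t}'

which gives 8, 23, 19 and 3 ms - i.e. effectively back-to-back retries.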
@minrk I recall that you submitted a PR somewhere a while back about excessive connections or retries. Was that related to this endpoint?
So when running lab, when I do things like kill my pod or open another connection from another tab or browser, I can usually get CHP to emit 503 messages similar to:
23:48:41.600 [ConfigProxy] error: 503 GET /user/felder/terminals/websocket/1 connect ETIMEDOUT 10.28.35.109:8888
23:48:49.793 [ConfigProxy] error: 503 GET /user/felder/api/events/subscribe connect ETIMEDOUT 10.28.35.109:8888
...
00:01:16.903 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.905 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.907 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.974 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf connect ECONNREFUSED 10.28.8.3:8888
This does make sense when I'm killing my user pod, since the server is no longer there at that IP.
However, when this happens I see a correlated increase in the number of established connections from chp->hub:8081. Those connections seem to persist.
Note that even if I delete the route to the hub pod in CHP, the connections still persist.
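If the proxy container has iproute2 available, something like this might show which process owns those lingering sockets and whether any keepalive timer is running on them:

# process, timer and state info for established connections towards port 8081
ss -tnop state established '( dport = :8081 )'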
What version of jupyterlab and jupyter server / notebook is used?
The original issue was https://github.com/jupyterlab/jupyterlab/issues/3929
I think JupyterLab is supposed to stop checking the API when it realizes the server is gone. But 503 means JupyterHub thinks the server is there when it's not, and 503s should be retried after a delay (I wouldn't be surprised if jupyterlab still retries a bit too fast). When the server is actually stopped (i.e. the Hub notices via poll and updates the proxy to remove the route), these 503s should become 424s, at which point JupyterLab may slow down or pause requests, as it should.
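One way to check which of those status codes clients are actually getting at a given moment might be a probe like the following, where the hub URL, username, and token variable are placeholders:

# while a stale route still points at a dead server, CHP should answer 503;
# once the Hub has removed the route, the Hub answers the API request itself (expect 424)
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: token $JUPYTERHUB_API_TOKEN" \
  https://hub.example.org/user/someuser/api/status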
I don't expect the singleuser server is making "hairpin" connections to the Hub via CHP, unless things like hub_connect_ip are customized. The default behavior is for all Jupyter Server -> Hub communication to go over the internal network.
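If you want to rule that out on the live deployments, a quick check from inside a user pod could be something like this (the pod name follows the usual z2jh jupyter-<username> pattern and is an assumption here):

# the singleuser server reaches the Hub at JUPYTERHUB_API_URL; it should point at the
# hub service directly (port 8081), not at the public proxy address
kubectl exec jupyter-someuser -- sh -c 'echo $JUPYTERHUB_API_URL'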
I have a strong suspicion that if this tends to happen in JupyterLab and not other UIs, it is related to JupyterLab's reconnect behavior, the use of websockets (less common in other UIs), or both. My hunch is something like this:
@shaneknapp @felder do you have any indication that CHP starts seeing 503 errors before problems start, i.e. that it might be a cause and not merely a symptom? Short-term 503 errors are 'normal' behavior when a server shuts down prematurely, e.g. due to internal culler behavior, so if that triggers a cascade of too many bound-to-fail requests that don't get cleaned up fast enough, that seems a plausible scenario, at least.
@consideRatio jupyterlab 4.0.11 and 4.2.5, notebook 7.0.7 and 7.2.2
@minrk It's possible, honestly we're just trying to wrap our heads around the ephemeral port issue and chp so all possibilities are on the table.
> What version of jupyterlab and jupyter server / notebook is used?
data8
data100
datahub
We actually bumped data100 to the most recent versions of these packages yesterday, "just to see" if it helped. However, we allocated 16G of RAM per user for an assignment as of this morning, and expect fewer kernel crashes and therefore fewer orphaned (or excess?) ephemeral ports.
Can we test this without JupyterHub? Run JupyterLab, manually start CHP, create a CHP route to JupyterLab, and access JupyterLab via CHP. Based on the above, if you open another tab, or Ctrl-C and restart JupyterLab, the number of ports in use should increase significantly.
> @shaneknapp @felder do you have any indication that CHP starts seeing 503 errors before problems start, i.e. that it might be a cause and not merely a symptom? Short-term 503 errors are 'normal' behavior when a server shuts down prematurely, e.g. due to internal culler behavior, so if that triggers a cascade of too many bound-to-fail requests that don't get cleaned up fast enough, that seems a plausible scenario, at least.
Well, it usually takes a few hours for the "problems" to start as the ports accumulate. During this time (aka all day) we see plenty of 503s. The rate at which they happen isn't really something I'd call trackable, since they depend on how users are interacting with the system (closing a laptop, reopening it later, new browser tabs, etc.) - and of course, the more users, the more 503s. And as @felder said, we're still trying to unpack what's really happening here and wrap our heads around the 503s and their causation/correlation relationship to CHP running out of ephemeral ports.
Welp, this just happened on a smaller hub with about 75 users logged in.
The outage started at 3:30pm and lasted 15 minutes. Users got a blank page and 'service inaccessible'.
You can see the CPU peg at 100% during this time before it eventually recovered. The proxy's RAM usage doesn't get anywhere near the maximum we've allocated (3G), but there are some interesting ups and downs in that graph.
This is a more complex hub deployment, with two sidecar containers alongside each user container: one with mongodb and the other with postgres installed.
This is the simplest CHP/JupyterLab setup I can come up with:
1. Run CHP on default ports (8000 and 8001), no auth, log all requests:
   configurable-http-proxy --log-level debug
2. Create a route /test pointing to http://localhost:8888:
   curl -XPOST http://localhost:8001/api/routes/test -d '{"target":"http://127.0.0.1:8888"}'
3. Start JupyterLab under /test:
   jupyter-lab --no-browser --ServerApp.base_url=/test
4. Open http://localhost:8000/test/ in a browser.
5. Show IPv4 connections involving ports 8000, 8001, 8888:
   ss -4n | grep -E '127.0.0.1:(8888|8000|8001)'
I've tried reloading JupyterLab in my browser, and killing/restarting JupyterLab. When JupyterLab is killed (Ctrl-C) with the browser still open, CHP doesn't keep any sockets open.
There's a burst of connections whilst it loads:
ss -4n | grep -E '127.0.0.1:(8888|8000|8001)'
tcp ESTAB 0 0 127.0.0.1:34776 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41964
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41978
tcp ESTAB 0 0 127.0.0.1:34704 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41976
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41994
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41988
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:55962
tcp ESTAB 0 0 127.0.0.1:34720 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:41978 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:41988 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41984
tcp ESTAB 0 0 127.0.0.1:41994 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:34724 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:34756 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:55962 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:41984 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:34762 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:34800 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:34740 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:41964 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:41976 127.0.0.1:8888
but it stabilises again:
tcp ESTAB 0 0 127.0.0.1:34776 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41964
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41978
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41976
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41994
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41988
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:55962
tcp ESTAB 0 0 127.0.0.1:41978 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:41988 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:8888 127.0.0.1:41984
tcp ESTAB 0 0 127.0.0.1:41994 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:55962 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:41984 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:34800 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:41964 127.0.0.1:8888
tcp ESTAB 0 0 127.0.0.1:45264 127.0.0.1:8000
tcp ESTAB 0 0 127.0.0.1:41976 127.0.0.1:8888
I haven't seen any continual rise in the number of sockets if I repeat the reload/kill/restart cycle.
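In case it helps push the experiment further, a rough (untested) loop to repeat the kill/restart part and log socket counts might look like this; it assumes the browser tab stays open so JupyterLab keeps trying to reconnect:

# repeat the restart cycle and record how many sockets involve the three ports;
# a steady upward trend across iterations would suggest a leak
for i in $(seq 1 20); do
  pkill -f 'jupyter-lab --no-browser' || true
  sleep 2
  jupyter-lab --no-browser --ServerApp.base_url=/test >/tmp/lab.log 2>&1 &
  sleep 10
  echo "iteration $i: $(ss -4n | grep -cE '127.0.0.1:(8888|8000|8001)') sockets"
done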
@manics Just wanted to note that the connections we're concerned about are the ones going to hubip:8081; I'm not sure that's represented in your test. The issue is not connections between CHP and user pods but between CHP and the hub pod. The user pod destinations vary enough that there's no concern about ephemeral ports there (as of yet). It's the hub pod - a single destination/port pair - where we see the issue.
@felder I haven't tested that, since CHP shouldn't be proxying pods to hub:8081; the pods should connect directly to hub:8081.
External Hub API requests will go through CHP though; this includes requests made through the Hub admin UI.
Can you turn on debug logging in CHP, filter the logs for connections to :8081, and show us the URLs that are being requested? This should narrow down the component that's responsible.
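In a z2jh-style deployment that could look roughly like the following, where the deployment and container names (proxy, chp) are assumptions based on a typical install:

# with CHP running with --log-level=debug, keep only proxy log lines mentioning
# the hub's API port
kubectl logs deploy/proxy -c chp --since=1h | grep '8081'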
@manics perhaps this comment describes what we are seeing better? https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359416947
I’ll see what debug logging reveals.
From https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359217928 onwards there is context on how a CHP pod can end up running out of ephemeral ports, with a mitigation strategy in https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2362097477.