consideRatio opened 7 hours ago
Based on https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359227825, I think this may not be an issue with CHP so much as the software running in the user servers leading to a flood of connections being initiated via the UI.
@felder this is a follow-up to https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359416947. I inspected two active deployments, with 222 and 146 currently active users respectively.
From inspection, it seems this deployment uses jupyter_server 2.12.1 and jupyterlab 4.0.9.
This is from a CHP pod for a hub currently running 222 user pods on the image quay.io/2i2c/utoronto-image:2525722ac1d5, where users may be accessing /tree or /lab; it's not clear how UI usage is distributed between them.
```
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8081 | wc -l
80
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8888 | wc -l
1416
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | wc -l
1609
```
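As an aside, a one-liner like this (my own diagnostic sketch, not from the report above) can show whether those proxy-to-server connections are spread evenly across user pods or concentrated on a few:

```sh
# Group established connections to user servers (port 8888) by
# destination address; netstat's 5th column is the foreign address.
netstat -nat | grep ESTABLISHED | awk '$5 ~ /:8888$/ {print $5}' |
  sort | uniq -c | sort -rn | head
```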
This is from a CHP pod for a hub currently running 146 user pods on the image quay.io/2i2c/utoronto-r-image:5e7aea3c30ff, where users are accessing /rstudio.
```
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8081 | wc -l
5
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | grep 8888 | wc -l
1164
/srv/configurable-http-proxy $ netstat -natp | grep ESTABLISHED | wc -l
1250
```
From inspection, it seems this one uses jupyter-server 1.24.0, with RStudio on the frontend.
This doesn't rule out CHP; to do that you'd need to compare it with another proxy like Traefik. For example, if CHP isn't closing connections as fast as the browser, this could lead to too many ports in use.
Do the existing CHP tests cover HTTP persistent connections? https://en.m.wikipedia.org/wiki/HTTP_persistent_connection
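As a rough manual check (not an existing test; it assumes CHP's default public port 8000 and a running user server behind a hypothetical /user/example route), curl can show whether the client-facing connection is kept alive, while netstat on the proxy shows whether the upstream side is reused:

```sh
# Hypothetical route; substitute a real user-server path.
URL="http://localhost:8000/user/example/api/status"
# Two requests in one curl invocation share a connection if keep-alive
# is honored; curl -v then logs "Re-using existing connection".
curl -sv -o /dev/null -o /dev/null "$URL" "$URL" 2>&1 | grep -i 're-us\|connect'
```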
One thing I'm noticing as I investigate is that user servers running lab (as opposed to rsession-proxy or the like) interact with the hub pod a lot more often. Any time I interact with the file browser, launcher, etc., the last_activity for the route in CHP updates. This is not the case if /rstudio is designated as the default URL.

Additionally, the ESTABLISHED connection count to hubip:8081 with a single user pod running lab (as opposed to rstudio) increments pretty steadily as I do things like kill the pod, kill the kernel, refresh the browser, etc.
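For reference, a polling loop like this makes both signals visible from inside the CHP pod (a sketch assuming CHP's default API port 8001, token auth via CONFIGPROXY_AUTH_TOKEN, and no jq in the image):

```sh
# Dump last_activity per route from the CHP REST API, then count
# established connections to the hub API (8081) and user servers (8888).
while true; do
  date
  curl -s -H "Authorization: token $CONFIGPROXY_AUTH_TOKEN" \
    http://localhost:8001/api/routes | grep -o '"last_activity":[^,}]*'
  echo "hub:8081  $(netstat -nat | grep ESTABLISHED | grep -c 8081)"
  echo "user:8888 $(netstat -nat | grep ESTABLISHED | grep -c 8888)"
  sleep 5
done
```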
> This doesn't rule out CHP; to do that you'd need to compare it with another proxy like Traefik. For example, if CHP isn't closing connections as fast as the browser, this could lead to too many ports in use.
I believe this might be happening... if a user closes their laptop, or opens their notebook in a new browser (which happens more often than you'd imagine), we see a lot of spam in the proxy logs (hundreds of 503s being reported):
```
21:08:06.483 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.491 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.514 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.533 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.536 [ConfigProxy] error: 503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888
21:08:06.561 [ConfigProxy] info: Removing route /user/<hub user>
21:08:06.561 [ConfigProxy] info: 204 DELETE /api/routes/user/<hub user>
21:08:15.521 [ConfigProxy] info: Adding route /user/<hub user> -> http://10.28.26.176:8888
21:08:15.521 [ConfigProxy] info: Route added /user/<hub user> -> http://10.28.26.176:8888
21:08:15.521 [ConfigProxy] info: 201 POST /api/routes/user/<hub user>
21:08:18.845 [ConfigProxy] info: 200 GET /api/routes
```
Hmmm, so we have a spam of `503 GET /user/<hub user>/api/events/subscribe connect ECONNREFUSED 10.28.21.53:8888`, where something (jupyterlab in the browser?) tries to access a user server, but the proxying fails with connection refused, perhaps because the server is shutting down or similar.
After that, jupyterhub asks CHP to delete the route.
After that, I expect the client that got 503s won't get them any more, because the proxy pod no longer tries to proxy to the deleted route; instead it will do something else, maybe redirect to the hub pod as the default route, which then gets spammed.
@shaneknapp I guess we could see such redirects with debug logging or similar. Or does CHP already log redirect responses and we just aren't seeing them?
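One way to check (a sketch, run inside the CHP pod against its public port, with a hypothetical path standing in for the deleted route): request the path again after the DELETE and look at the status line. A redirect or hub-rendered page would confirm fallback to the default target rather than further 503s:

```sh
# Inspect the response for a path whose route was just removed.
curl -si http://localhost:8000/user/example/api/events/subscribe | head -n 5
```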
I think `/api/events/subscribe` is associated with websockets; the endpoint was added in jupyter_server 2.0.0a2. Is something in jupyterlab's browser-side code retrying excessively against it when it fails?
From the logs I see one failed request every ~10 ms, five times in a row, which I guess means there is no delay between retry attempts.
```
21:08:06.483
21:08:06.491
21:08:06.514
21:08:06.533
21:08:06.536
```
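The gaps can be computed directly from the log timestamps; a small awk sketch, assuming the proxy log has been saved to a file (e.g. via `kubectl logs > chp.log`):

```sh
# Print millisecond gaps between consecutive 503s on the subscribe
# endpoint; $1 is the HH:MM:SS.mmm timestamp in the CHP log lines.
grep '503 GET .*/api/events/subscribe' chp.log |
  awk '{ split($1, t, /[:.]/)
         ms = ((t[1]*60 + t[2])*60 + t[3])*1000 + t[4]
         if (prev) print ms - prev, "ms"
         prev = ms }'
```

For the five lines above that yields gaps of 8, 23, 19, and 3 ms.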
@minrk I recall you submitted a PR somewhere a while back about excessive connections or retries. Was it related to this endpoint?
So when running lab, if I do things like kill my pod or start another connection from another tab or browser, I can usually get CHP to emit 503 messages similar to:
```
23:48:41.600 [ConfigProxy] error: 503 GET /user/felder/terminals/websocket/1 connect ETIMEDOUT 10.28.35.109:8888
23:48:49.793 [ConfigProxy] error: 503 GET /user/felder/api/events/subscribe connect ETIMEDOUT 10.28.35.109:8888
...
00:01:16.903 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.905 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.907 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf/channels connect ECONNREFUSED 10.28.8.3:8888
00:01:16.974 [ConfigProxy] error: 503 GET /user/felder/api/kernels/d9472c13-5a55-47cf-a569-ed981f709bbf connect ECONNREFUSED 10.28.8.3:8888
```
This does make sense when I'm killing my user pod, since the server is no longer there at that IP.
However, when this happens I see a correlated increase in the number of established connections from CHP to hub:8081, and those connections seem to persist.

Noting that even if I delete the route to the hub pod in CHP, the connections still persist.
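If `ss` is available in the pod, its timer column can show whether anything is ever going to close those sockets (a diagnostic sketch; standard iproute2 flags):

```sh
# -t TCP, -a all, -n numeric, -o timers, -p owning process. ESTABLISHED
# sockets to :8081 with no timer suggest neither end intends to close.
ss -tanop | grep ':8081'
```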
https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2359217928 and onwards gives context on how a CHP pod can end up running out of ephemeral ports, with a mitigation strategy in https://github.com/jupyterhub/configurable-http-proxy/issues/388#issuecomment-2362097477.