Closed: kevin-niland closed this issue 3 months ago
Difficult to say where the issue is, given how many components are involved. It doesn't seem to be TigerVNC at least, since it reports a clean disconnection.
I would suggest running tcpdump between the relevant components and looking at the order of events. That should tell you which component initiates the shutdown.
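A sketch of how that capture could look, based on the ports in the commands below (the interface, pcap path, and 60-second window are assumptions, not verified against this setup) — run it inside the desktop pod, then the broker pod, and compare which side sends the first FIN/RST:

```shell
# The same BPF filter for both commands, kept in one place:
FILTER='tcp port 6080 or tcp port 5910'
echo "capture filter: $FILTER"

# Capture both legs (broker<->websockify on 6080, websockify<->Xvnc on 5910):
#   tcpdump -i any -nn "$FILTER" -w /tmp/vnc.pcap
#
# After a disconnect, list only teardown packets to see who closed first:
#   tcpdump -nn -r /tmp/vnc.pcap 'tcp[tcpflags] & (tcp-fin|tcp-rst) != 0'
```

Whichever host's address appears as the source of the first FIN or RST is the component initiating the shutdown.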
I have a few components in place that allow me to access a remote desktop at https:///path/to/vnc.html. This includes a Kubernetes Service, an Ingress, a broker written in Go that reverse-proxies to websockify, and my pod, which starts Xvnc and websockify:
/usr/bin/Xvnc :10 -auth $HOME/.Xauthority -fp /usr/share/fonts/misc,/usr/share/fonts/75dpi,/usr/share/fonts/100dpi,/usr/share/fonts/Type1 -listen tcp -pn -rfbauth $HOME/.vnc/passwd -rfbport 5910 2>&1 &
websockify --verbose -D --cert=/rd.crt --key=/rd.key 6080 localhost:5910
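One low-cost experiment, if some proxy along the path is dropping the websocket as idle: websockify's `--heartbeat` option sends a websocket ping every N seconds, so intermediaries never see the connection as idle. A sketch based on the command above (the 30-second interval is an arbitrary choice):

```shell
websockify --verbose -D --cert=/rd.crt --key=/rd.key --heartbeat=30 6080 localhost:5910
```

If the disconnects stop with the heartbeat enabled, that points at an idle timeout somewhere between the browser and websockify rather than at VNC itself.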
An HTTPD sidecar serves /usr/share/novnc/, which would normally be specified when running websockify.
The initial request goes to the Ingress, which handles the websockify traffic (the noVNC websocket connection) and forwards it to the Kubernetes Service. A broker pod then proxies the HTTP/websocket connection to the actual pod, where the desktop can be viewed.
However, I've noticed that the desktop disconnects after around a minute. I've tried specifying a timeout in the noVNC and websockify commands above, but it makes no difference. Here are the logs from the pod:
Could something in the Ingress or the reverse proxy be causing this disconnect? When I test through Remmina, the desktop does not disconnect.
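For what it's worth, a roughly one-minute window matches the default idle timeouts of common reverse proxies: ingress-nginx, for example, closes proxied connections after 60 seconds without traffic, and an idle VNC websocket counts as no traffic. If this cluster uses ingress-nginx, raising those timeouts on the Ingress is a cheap test — the annotation names below assume ingress-nginx (other controllers use different knobs), and the resource name is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vnc-ingress   # hypothetical name
  annotations:
    # ingress-nginx drops proxied connections after 60s of no traffic
    # by default; an idle VNC websocket can hit that.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```

The Go broker in between would need the same check: any read/write deadlines or `http.Server` idle timeouts it sets apply to the proxied websocket as well.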