steffen-kiess closed this pull request 2 years ago.
I don't understand the purpose of this. It would only be useful if we also added support for Unix domain socket connections to the TurboVNC Viewer, and then only if connecting from the same machine (which is a very uncommon use case.)
(I also note that TigerVNC only includes this feature in their server and not their viewer, so my more specific question is: whose viewer would this feature work with, i.e. how could I test it?)
I'm planning to also add support to the TurboVNC viewer, in a way that will automatically use `ssh` + `socat` to connect to remote hosts. (The advantage over TCP would be that you could start the server with `-securitytypes none` and rely on the access permissions of the socket for security. All of the authorization and encryption would then be done by SSH.) If I do that, should I add it to this pull request, or should I create a separate pull request?
Interesting. Now that I understand the purpose, I was able to make it work by manipulating the `VNC_TUNNEL_CMD` environment variable on the client:
/opt/TurboVNC/bin/vncserver -rfbunixpath /tmp/turbovnc.{user}.sock -securitytypes none
VNC_TUNNEL_CMD='/usr/bin/ssh -f -L localhost:%L:/tmp/turbovnc.{user}.sock %H (echo %R; sleep 20)' /opt/TurboVNC/bin/vncviewer -tunnel -extssh {host}:1
(NOTE: The display number isn't actually used in this case, so it can be any integer.)
How would you propose to integrate this into the viewer in a more user-friendly way? My thought was to use the existing `{host}::{port}` syntax but allow a Unix domain socket path to be specified instead of a port, with the understanding that that would only work if SSH tunneling was enabled with an external SSH client. That would change the viewer command line to:
/opt/TurboVNC/bin/vncviewer -tunnel -extssh {host}::/tmp/turbovnc.{user}.sock
Then it would be a simple matter of modifying Tunnel.java so that it detects whether the port refers to a Unix domain socket or a TCP port and sets the SSH command string accordingly. But I'm open to other ideas.
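As an illustration of that detection, here is a minimal sketch in Python (the actual implementation would live in Tunnel.java; `parse_display` is a hypothetical name, and I'm assuming a socket path can be recognized by its leading `/`):

```python
def parse_display(display):
    """Split a {host}::{port} display string into host and target.

    If the part after '::' begins with '/', treat it as a Unix domain
    socket path; otherwise treat it as a TCP port number.
    Illustrative sketch only -- not the actual Tunnel.java logic.
    """
    host, sep, target = display.partition("::")
    if not sep:
        raise ValueError("expected {host}::{port} syntax")
    if target.startswith("/"):
        return host, ("unix", target)
    return host, ("tcp", int(target))
```

For example, `parse_display("host::/tmp/s.sock")` yields a Unix socket target, while `parse_display("host::5901")` yields a TCP one.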
One other thing I noticed is that the server doesn't remove the Unix domain socket when you kill the associated TurboVNC session. That needs to be fixed. If the system library can't be instructed to remove the socket automatically, then probably the vncserver script needs to detect the `-rfbunixpath` argument and store the pathname of the specified socket in an auxiliary file under ~/.vnc (for instance, ~/.vnc/{host}:{display}.sockpath.)
Once we agree as to the viewer modifications, then those should be added to this PR.
Also, please add your attribution to the copyright headers of any modified files that have an existing copyright header.
I already started writing an implementation, see https://github.com/steffen-kiess/turbovnc/commit/df95f09dcdb3b5eb5710939115035afb36e1c00b (not yet ready for merging).
How would you propose to integrate this into the viewer in a more user-friendly way? My thought was to use the existing `{host}::{port}` syntax but allow a Unix domain socket path to be specified instead of a port, with the understanding that that would only work if SSH tunneling was enabled with an external SSH client. That would change the viewer command line to:
I've been using a `host:/path/to/socket` syntax, but obviously `host::/path/to/socket` would also work.
Then it would be a simple matter of modifying Tunnel.java so that it detects whether the port refers to a Unix domain socket or a TCP port and sets the SSH command string accordingly. But I'm open to other ideas.
In my implementation I'm calling `ssh host "socat stdio unix-connect:..."` instead of setting up a tunnel. The disadvantage of the tunnel is that it exposes the connection as a TCP socket, at least to other users on the same host. If there is some authentication afterwards, this is not much of a problem, but the big advantage of Unix domain sockets is that you can disable authentication, and then anyone on the local host could connect to the server.
Another alternative on Linux would be to use a local Unix domain socket instead of a TCP port, but that would not work on Windows, and I think the Windows client <-> Linux server configuration is rather common.
`socat` is needed because `ssh -W` does not work with Unix domain sockets.
Also, I'm currently replacing %d, %i, and %u with the home directory, numeric user ID, and user name on the remote host, so you can e.g. use `hostname:/run/user/%i/vnc.socket` to connect to a socket in the /run/user folder.
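A minimal sketch of that substitution, assuming the home directory, UID, and user name have already been queried from the remote host (the function name is made up for illustration):

```python
def expand_remote_placeholders(path, home, uid, user):
    """Expand %d (home directory), %i (numeric UID), and %u (user name)
    in a socket path, using values obtained from the *remote* host.
    Hypothetical helper illustrating the substitution described above.
    """
    return (path.replace("%d", home)
                .replace("%i", str(uid))
                .replace("%u", user))
```

So `/run/user/%i/vnc.socket` would become e.g. `/run/user/1000/vnc.socket` for a remote UID of 1000.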
One other thing I noticed is that the server doesn't remove the Unix domain socket when you kill the associated TurboVNC session. That needs to be fixed. If the system library can't be instructed to remove the socket automatically, then probably the vncserver script needs to detect the `-rfbunixpath` argument and store the pathname of the specified socket in an auxiliary file under ~/.vnc (for instance, ~/.vnc/{host}:{display}.sockpath.)
I don't think it's possible to remove the socket automatically. I can add code that will remove the socket in `ddxGiveUp()` (which I think should be called when the server quits). Should I also add code to the vncserver script to clean up the socket if Xvnc crashed or was killed using SIGKILL?
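If the vncserver script does take on that cleanup, one plausible heuristic is to treat the socket as stale when nothing is accepting connections on it any more. A Python sketch of that idea (the real script is Perl; `cleanup_stale_rfb_socket` is a hypothetical name):

```python
import contextlib
import os
import socket

def cleanup_stale_rfb_socket(sockpath):
    """Remove a leftover RFB Unix domain socket if no server is
    listening on it any more (e.g. after Xvnc crashed or was SIGKILLed).
    Returns True if a stale socket was removed, False otherwise.
    Sketch only of what the vncserver script could do.
    """
    if not os.path.exists(sockpath):
        return False
    with contextlib.closing(socket.socket(socket.AF_UNIX)) as s:
        try:
            s.connect(sockpath)
            return False          # a live server still owns the socket
        except OSError:
            os.unlink(sockpath)   # stale: nothing is accepting connections
            return True
```

The connect probe distinguishes a crashed session (connection refused, so the file is removed) from a live one (connection succeeds, so the file is left alone).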
Once we agree as to the viewer modifications, then those should be added to this PR.
Ok.
Also, please add your attribution to the copyright headers of any modified files that have an existing copyright header.
Ok.
In my implementation I'm calling `ssh host "socat stdio unix-connect:..."` instead of setting up a tunnel. The disadvantage with the tunnel is that this will expose the connection as a TCP socket, at least to other users on the same host. If there is some authentication afterwards, this is not that much of a problem, but the big advantage of unix domain sockets is that you can disable authentication, and then anyone on the local host could connect to the server.

Another alternative on Linux would be to use a local unix domain socket instead of a TCP port, but this would not work on Windows, and I think the configuration Windows Client <-> Linux Server is rather common.
As you pointed out, that's why VNC authentication exists. :) The TurboVNC Session Manager, as an example, works around the problem by using SSH tunneling along with one-time password authentication. It uses the built-in SSH client in the TurboVNC Viewer to remotely start a TurboVNC session and generate a new OTP for the session. The OTP is communicated to the client through SSH-tunneled stdio, then the TurboVNC Viewer enables port forwarding on the SSH session and sends the OTP back to the host through the SSH-tunneled RFB protocol. This works around the issue of exposing the client end of the SSH tunnel (not to mention the server-side RFB port), and it does so without requiring the user to enter any credentials (which seems to be the only reason why one would want to disable VNC authentication.)
Playing devil's advocate, disabling VNC authentication and using a TCP port as the client end of an SSH tunnel would only be risky if the client were a multi-user machine in which other users could log in via SSH while someone else was logged in locally. That would seemingly exclude Windows clients and almost all Mac clients (and Mac clients support Unix domain sockets anyhow.)
For the sake of simplicity, let's refer to the two approaches as follows:

Approach 1: Use `socat` in order to forward the RFB protocol from the remote Unix domain socket through SSH-tunneled stdio. This approach requires both `socat` and SSH, which makes the feature somewhat inflexible. I like symmetric features, and it would be more symmetric for the viewer to optionally connect to a Unix domain socket rather than a TCP port, on systems that support Unix domain sockets. Symmetric features are easier to document and more intuitive for users.

Approach 2: Use SSH port forwarding in order to forward the RFB protocol from the remote Unix domain socket to a local TCP port or a local Unix domain socket (Un*x/Mac clients only.) It means that the TurboVNC Viewer could be used without SSH or `socat` to connect to a TurboVNC session listening on a Unix domain socket on the same machine (potentially useful with containers.)
In TurboVNC 3.0, the TurboVNC Session Manager will be the preferred workflow, and that workflow solves the same problem that the proposed Unix domain socket workflow solves. Thus, I would prefer to minimize the disruption introduced by the latter, since it will inevitably be an alternative workflow and not a common use case. It's easier to make a case for Unix domain sockets if they can be used to solve other problems (simplifying container-based TurboVNC workflows, for instance) rather than just eliminating security issues introduced by non-standard TurboVNC usage (SSH tunneling without VNC authentication.)
Honestly, the only thing at the moment that would steer me toward Approach 1 is the potential for using it with JSch. Otherwise, Approach 2 appeals to me much more, since it extends the existing capabilities of the viewer in a more symmetric (== intuitive, easy to document) way. I wonder aloud whether it is possible to modify JSch in order to use Unix domain sockets on one or both ends of the tunnel.
If any of my assumptions are incorrect, then please correct them. I am still open to both approaches. Approach 1 just seems like it would be harder to support.
I don't think it's possible to remove the socket automatically. I can add code which will remove the socket in `ddxGiveUp()` (which I think should be called when the server quits). Should I also add code to the vncserver script to clean up the socket if Xvnc crashed or was killed using SIGKILL?
Probably yes. vncserver does that for the X11 Unix domain socket, so I think it should also do that for the RFB Unix domain socket.
It means that the TurboVNC Viewer could be used without SSH or `socat` to connect to a TurboVNC session listening on a Unix domain socket on the same machine (potentially useful with containers.)
Connecting to a local Unix domain socket without `socat` (or SSH) would require some native glue (similar to unix/vncviewer/usocket.c, or just reusing that), and this glue (or `com.jcraft.jsch.agentproxy.usocket.USocketFactory`) will not fit directly into the current code, because the code is based on `java.com.turbovnc.network.Socket` but `com.jcraft.jsch.agentproxy.usocket.USocketFactory` uses `java.net.Socket`, so I think there still will be some disruption to the code for this use case.
In TurboVNC 3.0, the TurboVNC Session Manager will be the preferred workflow, and that workflow solves the same problem that the proposed Unix domain socket workflow solves. Thus, I would prefer to minimize the disruption introduced by the latter, since it will inevitably be an alternative workflow and not a common use case. It's easier to make a case for Unix domain sockets if they can be used to solve other problems (simplifying container-based TurboVNC workflows, for instance) rather than just eliminating security issues introduced by non-standard TurboVNC usage (SSH tunneling without VNC authentication.)
One nice thing about Approach 1 is that it does not rely on the server being TurboVNC; e.g. qemu/libvirt can expose a virtual machine as a VNC Unix domain socket without a password. For example, I was able to connect to a virtual machine on another host using:
build/bin/vncviewer somehost:/var/lib/libvirt/qemu/domain-17-win10/vnc.sock
(Approach 2 again has the risk of exposing the socket, and if the server is not TurboVNC, it will not implement the OTP-based workflow of the TurboVNC Session Manager. In most cases the client will probably be used by only one person at a time or can use a Unix domain socket, but I think I would still prefer Approach 1.)
Honestly, the only thing at the moment that would steer me toward Approach 1 is the potential for using it with JSch. Otherwise, Approach 2 appeals to me much more, since it extends the existing capabilities of the viewer in a more symmetric (== intuitive, easy to document) way. I wonder aloud whether it is possible to modify JSch in order to use Unix domain sockets on one or both ends of the tunnel.
I think Approach 1 should be relatively straightforward with JSch (and I can try to implement that).
Regarding the problem that Approach 1 will require `socat` to be installed: if the server directory is known (which it already has to be in order to use the Session Manager mode), an alternative would be to provide a `socat`-like binary with the server (called e.g. vncunixconnect). This binary would always be available (because it would be introduced in the same TurboVNC version as the Unix domain socket support). Obviously this binary would not fully reimplement `socat`, only connect stdio to the socket that is passed as an argument. This binary could also be used to connect to local sockets (the advantage over using glue code to create a Unix socket would be that this shares more code with the SSH-related code).
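A rough sketch of what such a vncunixconnect could do, in Python for brevity (the real helper would presumably be a small native binary; the stdio arguments here must be raw, unbuffered streams with a file descriptor):

```python
import selectors
import socket

def vncunixconnect(sockpath, stdin, stdout):
    """Connect a stdio pair to the Unix domain socket at sockpath and
    relay bytes in both directions until either side hits EOF.
    Hypothetical sketch of the proposed vncunixconnect helper.
    """
    sock = socket.socket(socket.AF_UNIX)
    sock.connect(sockpath)
    sel = selectors.DefaultSelector()
    sel.register(stdin, selectors.EVENT_READ)
    sel.register(sock, selectors.EVENT_READ)
    try:
        while True:
            for key, _ in sel.select():
                if key.fileobj is sock:
                    data = sock.recv(65536)
                    if not data:          # server closed the connection
                        return
                    stdout.write(data)
                    stdout.flush()
                else:
                    data = stdin.read(65536)
                    if not data:          # EOF on stdin
                        return
                    sock.sendall(data)
    finally:
        sel.close()
        sock.close()
```

This is essentially `socat stdio unix-connect:{socket}` reduced to the one address pair the feature needs.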
All in all, I'd strongly prefer Approach 1 (mostly because of the security issues with the TCP socket, and also because it avoids race conditions, e.g. another process might use the port between `findFreeTcpPort()` and SSH binding to the port). However, if you will not accept an implementation based on Approach 1, I will attempt an implementation of Approach 2.
Connecting to a local Unix domain socket without `socat` (or SSH) would require some native glue (similar to unix/vncviewer/usocket.c, or just reusing that), and this glue (or `com.jcraft.jsch.agentproxy.usocket.USocketFactory`) will not fit directly into the current code because the code is based on `java.com.turbovnc.network.Socket`, but `com.jcraft.jsch.agentproxy.usocket.USocketFactory` uses `java.net.Socket`, so I think there still will be some disruption to the code for this use case.
Yes, I had forgotten about the lack of Unix domain socket support in Java. That would seemingly tilt the scales in favor of Approach 1.
In TurboVNC 3.0, the TurboVNC Session Manager will be the preferred workflow ...
One nice thing about Approach 1 is that it does not rely on the server being TurboVNC, e.g. qemu/libvirt can expose a virtual machine as a VNC Unix domain socket without a password. For example, I was able to connect to a virtual machine on another host using:
build/bin/vncviewer somehost:/var/lib/libvirt/qemu/domain-17-win10/vnc.sock
That would be true of Approach 2 as well.
(Approach 2 again has the risk of exposing the socket, and if the server is not TurboVNC it will not implement the OTP-based workflow of the TurboVNC Session Manager. In most cases the client probably will be used only by one person at a time or can use Unix domain socket, but I think I still would prefer Approach 1.)
At the end of the day, my ability to develop TurboVNC hinges on my ability to monetize my work, and that necessitates treating TurboVNC as a self-contained enterprise product. My only revenue streams are support and funded development, so my business model requires driving as many enterprise users as possible to TurboVNC. That means that I have to compete with corporate closed-source or mixed-source solutions such as RealVNC or ThinLinc or NX in terms of usability and performance. In order to do that, I have to focus on features that give me the best "bang for the buck." I want TurboVNC to be a "Swiss Army Knife", but I also can't spend too much unpaid labor implementing features that are orthogonal to the recommended workflow.
Unix domain socket support is nice to have from a technical point of view, but from a business point of view, it addresses a problem that TurboVNC 3.0 already addresses in a different way. Thus, the feature really needs to be something that I can integrate with minimal effort. Let's be honest about the fact that this feature's primary purpose is to address security issues introduced by disabling RFB authentication. That's a pretty weak argument, and introducing a feature just because it benefits someone else's VNC solution is an even weaker argument. What I'm looking for is a compelling case for why this benefits TurboVNC specifically: what issue does it address that isn't already addressed in some other way?
To be clear, I have no problem integrating the server changes. I'm referring mainly to the viewer changes, which will require a non-trivial amount of testing and documentation.
Regarding the problem that Approach 1 will require socat to be installed, if the server directory is known (which it already has to be in order to use the Session Manager mode) an alternative would be to provide a socat-like binary with the server (called e.g. vncunixconnect). This binary would always be available (because it would be introduced in the same TurboVNC version as the Unix domain socket support). Obviously this binary would not fully reimplement socat, only connecting stdio to the socket which is passed as an argument. This binary could also be used to connect to local sockets (the advantage over using glue code to create a Unix socket would be that this shares more code with the SSH-related code).
You argued that Unix domain socket support is better than the TurboVNC Session Manager because it's potentially more compatible with other VNC servers, but now you're arguing in favor of a TurboVNC-specific SSH command-line interface. That's exactly what the TurboVNC Session Manager is. :)
All in all I'd strongly prefer approach 1 (mostly because of the security issues with the TCP socket, and also because it avoids race conditions like that another process might use the port between `findFreeTcpPort()` and SSH binding to the port). However if you will not accept an implementation based on approach 1, I will attempt to do an implementation of approach 2.
If such a race condition were a significant risk, then it would be a significant risk with the existing TCP-based SSH tunneling solution as well. If the existing solution were racy, then my time would be better spent fixing that bug than working on this enhancement. In other words, if this argument were true, then it would not work in your favor. However, historically `findFreeTcpPort()` has not proven problematic.
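For context, a free-port probe of this kind typically binds port 0 and lets the kernel choose; the race window, such as it is, lies between closing the probe socket and SSH binding the port. A Python sketch (TurboVNC's actual `findFreeTcpPort()` is Java and may differ in detail):

```python
import socket

def find_free_tcp_port():
    """Ask the kernel for an unused TCP port by binding port 0.

    The port is only reserved while the probe socket is open; between
    close() and ssh binding it, another process could in principle
    grab it -- the (historically benign) race discussed above.
    """
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```

Because the probe socket never accepts a connection, the port can be re-bound immediately after the function returns, which is exactly what the subsequent `ssh -L %L:...` invocation relies on.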
All specious arguments aside, I will accept an implementation of Approach 1 if you can address the following:

- Implement the feature in terms of the `VNC_TUNNEL_CMD` and `VNC_VIA_CMD` environment variables. Ideally, the default would use `socat` when a Unix domain socket is specified, but users could alter it to use the built-in Unix domain socket forwarding feature in OpenSSH 6.7 or later. %L would be optional if using a Unix domain socket, and %R would refer to the socket path on the TurboVNC host. I think that %G and %H can have the same meaning.
- Document the new behavior in the `vncviewer -?` output as well as in the User's Guide (specifically, the section describing the behavior of `VNC_VIA_CMD` and `VNC_TUNNEL_CMD`.)

That will minimize the amount of effort required on my part, although I will still have to do a lot of code review, cleanup, and testing.
So if either `VNC_TUNNEL_CMD` or `VNC_VIA_CMD` is set, it should not use JSch and not use `socat` but rely on `VNC_TUNNEL_CMD` or `VNC_VIA_CMD` to set up the tunnel (to a local TCP port, and then everything is treated the same as a non-Unix connection, basically Approach 2)? Or should I use a local Unix socket for forwarding? (This would then require either `socat` locally or some native glue code.)
Good question. The current default for those variables is:
VNC_TUNNEL_CMD = {ssh} -f -L %L:localhost:%R %H sleep 20
VNC_VIA_CMD = {ssh} -f -L %L:%H:%R %G sleep 20
where

- %% = a literal "%"
- %G = gateway host name or IP address
- %H = remote VNC host name or IP address (if using the `-via` option, then this is specified from the point of view of the gateway)
- %L = local TCP port number
- %R = remote TCP port number
The SSH user name, if specified, is prepended to %H in `VNC_TUNNEL_CMD` and %G in `VNC_VIA_CMD`.
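The substitution rules above can be sketched as follows (illustrative Python only; the viewer's actual parser in Tunnel.java may differ):

```python
def expand_tunnel_command(template, subst):
    """Expand the %-placeholders described above in a VNC_TUNNEL_CMD /
    VNC_VIA_CMD template.  '%%' yields a literal '%'; unknown
    placeholders are passed through unchanged.  Sketch only.
    """
    out, i = [], 0
    while i < len(template):
        ch = template[i]
        if ch == "%" and i + 1 < len(template):
            nxt = template[i + 1]
            if nxt == "%":
                out.append("%")          # %% -> literal %
            elif nxt in subst:
                out.append(str(subst[nxt]))
            else:
                out.append(ch + nxt)     # leave unknown placeholders alone
            i += 2
        else:
            out.append(ch)
            i += 1
    return "".join(out)
```

For the default `VNC_TUNNEL_CMD`, expanding %L, %R, and %H would produce the familiar `ssh -f -L {local}:localhost:{remote} {host} sleep 20` command line.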
If the user specifies a Unix domain socket as the display name, then we know that the default should be changed to:
VNC_TUNNEL_CMD = {ssh} -- %H exec socat stdio unix-connect:%R
VNC_VIA_CMD = {ssh} -J %G -- %H exec socat stdio unix-connect:%R
where %R now refers to the Unix domain socket path on the TurboVNC host. (Correct me if I got any of that wrong.)
If the user specifies a Unix domain socket as the display name, then I think it would make sense to assume that TCP should be used if %L is specified in `VNC_TUNNEL_CMD`/`VNC_VIA_CMD` and stdio should be used if %L is not specified.
When using JSch, only the default `VNC_TUNNEL_CMD`/`VNC_VIA_CMD` behavior is implemented, so it is consistent if only the `socat` approach is supported with JSch.
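That convention ("%L present implies TCP forwarding, %L absent implies stdio") could be sketched like this (hypothetical function and return values; the default templates are the ones discussed in this thread):

```python
# Defaults as discussed above (user name prepending and {ssh} expansion omitted)
DEFAULT_TCP_TUNNEL = "ssh -f -L %L:localhost:%R %H sleep 20"
DEFAULT_UNIX_TUNNEL = "ssh -- %H exec socat stdio unix-connect:%R"

def choose_tunnel_mode(target_is_unix_socket, user_template=None):
    """Pick the forwarding mode: for a Unix domain socket target, a
    user-supplied template that mentions %L implies TCP port
    forwarding; otherwise stdio via socat.  Sketch only.
    """
    template = user_template or (
        DEFAULT_UNIX_TUNNEL if target_is_unix_socket else DEFAULT_TCP_TUNNEL)
    if not target_is_unix_socket:
        return "tcp", template
    return ("tcp" if "%L" in template else "stdio"), template
```

With no user override, a socket target selects stdio/`socat`; setting e.g. `VNC_TUNNEL_CMD='ssh -ax -f -L %L:%R %H sleep 20'` switches the same target back to TCP forwarding.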
Other notes:

- I would prefer to keep things as similar as possible to the existing tunneling feature. I am open to the argument that `-ax` should also be specified in the existing tunneling feature, but otherwise, I would prefer that it remain unspecified.
- I don't think that it's correct for the SSH tunneling/Unix domain socket implementation to behave differently for localhost. The `-via` and `-tunnel` options should always enable SSH. However, if neither `-via` nor `-tunnel` is specified, then the viewer should detect whether a Unix domain socket is specified as the display name, and it should use stdio without SSH tunneling in that case. Unless I am mistaken, that will allow users to connect to a local Unix domain socket by doing something like `socat stdio unix-connect:{socket} | /opt/TurboVNC/bin/vncviewer localhost:/foo`. Actually, it may make more sense to enable stdio communication when a special server name, such as `-`, is specified. That would be consistent with the piping syntax of other Unix programs.
- I also note that Java 16 includes Unix domain socket support, but the LTS release containing that feature (Java 17) won't be available until September. Thus, TurboVNC 3.0 will likely ship with Java 11.
Ok, I've added the implementation of the support in the viewer to the PR.
Btw, it seems you've edited my comment instead of replying to it :-)
I would prefer to keep things as similar as possible to the existing tunneling feature. I am open to the argument that `-ax` should also be specified in the existing tunneling feature, but otherwise, I would prefer that it remain unspecified.
I think having `-ax` is good because neither agent nor X11 forwarding is needed for `sleep 20` or `socat`, and it might cause (rather minor) problems such as this warning: "Warning: No xauth data; using fake authentication data for X11 forwarding." I've added `-ax` to all SSH default command lines, but I can also remove it again if you would prefer that.
If the user specifies a Unix domain socket as the display name, then we know that the default should be changed to:

VNC_TUNNEL_CMD = {ssh} -- %H exec socat stdio unix-connect:%R
VNC_VIA_CMD = {ssh} -J %G -- %H exec socat stdio unix-connect:%R

where %R now refers to the Unix domain socket path on the TurboVNC host. (Correct me if I got any of that wrong.)

If the user specifies a Unix domain socket as the display name, then I think it would make sense to assume that TCP should be used if %L is specified in `VNC_TUNNEL_CMD`/`VNC_VIA_CMD` and stdio should be used if %L is not specified.
I've implemented it that way (with `-ax` in the defaults).

The non-socat support can be enabled using:
VNC_TUNNEL_CMD='ssh -ax -f -L %L:%R %H sleep 20'
VNC_VIA_CMD='ssh -ax -J %G -f -L %L:%R %H sleep 20'
I don't think that it's correct for the SSH tunneling/Unix domain socket implementation to behave differently for localhost. The `-via` and `-tunnel` options should always enable SSH. However, if neither `-via` nor `-tunnel` is specified, then the viewer should detect whether a Unix domain socket is specified as the display name, and it should use stdio without SSH tunneling in that case.
The way I implemented it now is that `-via` or `-tunnel` always uses SSH. However, if the hostname is not localhost or empty and `-via` is not set, `-tunnel` is implied. This is needed because otherwise there would be no way to connect to a Unix domain socket on a remote host (unlike TCP). If neither `-via` nor `-tunnel` is given and the hostname is localhost, `socat` will be started locally. (This probably can be replaced by directly using Unix domain sockets in Java 16/17.)
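The decision logic described in this comment, as a compact sketch (hypothetical function and return values, for illustration only):

```python
def connection_method(host, unix_socket, via=False, tunnel=False):
    """Decision logic as described above: -via/-tunnel always use SSH;
    a remote host with a Unix domain socket target implies -tunnel;
    a localhost socket target uses a local socat (or, eventually,
    native Java 16+ Unix domain sockets).  Sketch only.
    """
    if via or tunnel:
        return "ssh"
    if unix_socket and host not in ("", "localhost"):
        return "ssh"          # -tunnel is implied for remote sockets
    if unix_socket:
        return "local-socat"  # localhost socket, no SSH needed
    return "direct-tcp"       # plain TCP connection, existing behavior
```

Note that the "implied -tunnel" branch is what makes `vncviewer somehost:/path/to/socket` work without extra flags.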
Unless I am mistaken, that will allow users to connect to a local Unix domain socket by doing something like `socat stdio unix-connect:{socket} | /opt/TurboVNC/bin/vncviewer localhost:/foo`. Actually, it may make more sense to enable stdio communication when a special server name, such as `-`, is specified. That would be consistent with the piping syntax of other Unix programs.
You want to allow calling vncviewer with an already open connection at stdin/stdout? Note that the socat command would have to be `socat unix-connect:{socket} exec:"/opt/TurboVNC/bin/vncviewer -"`, because you need a pipe in both directions. Implementing something like this would probably require some changes, e.g. currently vncviewer sometimes prints log messages to stdout (when I start it, I get the message "libjawt.so path: /usr/lib/jvm/java-11-openjdk-amd64/lib" on stdout). Also, things like reconnecting would not be possible (because vncviewer cannot get a new connection).
I will be traveling for the next two weeks, so I'll look at this when I get a chance. Sorry for editing your comment instead of replying. Oops.
@steffen-kiess Oops. My apologies. I didn't realize that this PR would automatically close when I shuffled branches for the 3.0 release. Can you please rebase this against main, which is now the TurboVNC 3.0 code base? I didn't have time to integrate it for 3.0 but will try my best for 3.1.
Add the `-rfbunixpath` and `-rfbunixmode` options, which enable listening on Unix domain sockets.

The options are similar to the same options in TigerVNC; however, specifying `-rfbunixpath` always disables listening on TCP, and `-rfbunixmode` is always parsed as an octal number.