Open inducer opened 12 years ago
You're right that you can't use a ProxyCommand, but the reason is even simpler. Mosh doesn't try to resolve your hostnames; it lets SSH do that. You can have a host alias in your SSH configuration and that will work fine. However, mosh itself uses the ProxyCommand feature (because it needs to know the IP address it's connecting to, in case there are multiple A records).

If you can find a way to start up the mosh-server on the remote end, and you can give the mosh-client an IP address to connect to, that's really all you need.
My use case is far more complex than inducer's, but if mosh supported ProxyCommand, it would make mosh a drop-in solution.
Here's a snippet of my ssh config:

```
host home
    ProxyCommand connect -H proxy:3128 %h 443

host home-desktop
    ProxyCommand ssh -q home nc %h %p -w 30 2> /dev/null
```
As you can see, I'm using ssh over a https proxy to form a tunnel to my home gateway, and then using that tunnel to connect to my home desktop machine.
The https proxy server that we have at my day job is unbelievably unreliable, disconnecting several times an hour, but it's the only way to punch out of the building.
I use autossh to keep a pseudo-persistent connection open, but still have to deal with my existing session being interrupted.
If mosh could be set up to pass its data through the ssh session used to start it (or a new one created with the same parameters), then mosh would be a viable option for me when at work.
Thanks
+1 on this, I'd greatly appreciate ProxyCommand support.
My setup is simpler though; support for something like this:

```
Host foo

Host bar
    ProxyCommand ssh foo nc %h %p
```

would be enough for my needs, and it would make mosh a drop-in solution as well.
+1
If you need ProxyCommand to connect to a host that can nevertheless be reached from the internet via UDP, you could use a script like this. It starts mosh-server on the remote machine (using SSH and the ProxyCommand from .ssh/config), parses the output, and then connects to the machine directly using mosh-client.
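The script referenced above isn't reproduced here, but the approach it describes can be sketched roughly as follows. This is a minimal sketch: the function name, arguments, and error handling are my own, and the only hard fact relied on is that `mosh-server new` prints a `MOSH CONNECT <port> <key>` line on startup.

```sh
#!/bin/sh
# Minimal sketch: start mosh-server over plain ssh (which honours any
# ProxyCommand in ~/.ssh/config), parse the "MOSH CONNECT <port> <key>"
# line it prints, then point mosh-client straight at a UDP-reachable IP.
# Usage: mosh_via_ssh <ssh-host-alias> <direct-udp-ip>
mosh_via_ssh() {
    host=$1 ip=$2
    out=$(ssh "$host" -- mosh-server new) || return 1
    port=$(printf '%s\n' "$out" | awk '/^MOSH CONNECT/ { print $3; exit }')
    key=$(printf '%s\n' "$out" | awk '/^MOSH CONNECT/ { print $4; exit }')
    [ -n "$port" ] && [ -n "$key" ] || return 1
    MOSH_KEY=$key mosh-client "$ip" "$port"
}
```

The point is that the ssh leg gets full ssh_config treatment (ProxyCommand included), while the UDP leg is aimed at an address you supply yourself.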
I guess you could use mosh on the hop if that's the only place that has access to farside:

```
host hop

host farside
    ProxyCommand mosh -- hop -W farsideaddr:22
```

(-W is sometimes a better option than netcat.)
Instead of using ProxyCommand to determine the address, mosh could use the LD_PRELOAD environment variable to overwrite/proxy the connect system call (to the SSH server) and use that resolution result to determine/resume the specific connection to hop.
@grooverdan: mosh cannot be used as a ProxyCommand. That wouldn’t make any sense. The job of a ProxyCommand is passing on the raw, encrypted SSH stream; mosh doesn’t work that way.
You can, however, run `mosh hop ssh farside`, `ssh hop mosh farside`, or even `mosh hop mosh farside`, to use mosh for one or both legs of the connection. This obviously requires that you trust hop with the credentials you use to access farside.
I make quite heavy use of bastions, where the destination (farside) is not guaranteed to have mosh installed.
`mosh hop ssh farside` seems to provide the benefits of mosh, but in a generic environment. I wondered if anyone has integrated this into their ssh config to make the process simpler.
+1 for support of something like this, if it's possible. I ssh into a bunch of VMs (farside) inaccessible from the Internet via a login node (hop) which is, but my situation is the opposite of @Daviey's. I have root on the VMs and so I can install mosh there, but I can't install it on the hop. Is there any way to get the benefits of mosh in this situation?
+1 for this, just figured out I can't use mosh from work to my server due to mosh ignoring ProxyCommand from my .ssh/config.
I came up with a potential solution that avoids ProxyCommand, but has its own downsides. There is a LocalCommand directive that causes ssh to execute a command on the localhost when the connection is established, with the same substitution rules (e.g. %h => remote hostname) as ProxyCommand. Of course, any existing LocalCommand directive will be clobbered, but I think that one is less likely to be used.
The downside is that LocalCommand also requires the PermitLocalCommand=true option to be set, and that one enables local command execution via the !command escape sequence in ssh(1). So this has security implications: can an exploit on the remote host use mosh-server to run arbitrary commands on the connecting host? I did a quick search but couldn't find any history on the LocalCommand and PermitLocalCommand options; it smells like something that was originally permitted by default but locked down to avoid "surprise" shell-execution vectors.
Anyway, my changes to master are https://github.com/keithw/mosh/compare/master...akalin:avoid-proxy-command , and my changes to the stable version (which I tested and works swimmingly) are https://github.com/keithw/mosh/compare/mosh-stable...akalin:avoid-proxy-command-stable .
Oh, and I guess another upside to LocalCommand is that it avoids the need for the "fake proxy" used in ProxyCommand -- one fewer process and one fewer source of buffering to worry about!
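Concretely, the LocalCommand mechanism amounts to ssh_config lines of roughly this shape. The echo here is only a placeholder; the real command injected by the branches linked above is in those diffs:

```
Host myhost
    PermitLocalCommand yes
    # %h expands to the (config-resolved) remote hostname
    LocalCommand echo "connected to %h"
```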
Another possible solution that avoids both LocalCommand and ProxyCommand, but is more brittle: run ssh with -v, and parse the verbose output to look for the line:

```
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
```
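A sketch of what that scraping could look like, assuming OpenSSH's `debug1: Connecting to host [addr] port N.` message format (and BatchMode to avoid hanging at a password prompt); the helper name is my own:

```sh
# parse_connect_line reads `ssh -v` output on stdin and prints the first
# address ssh reports connecting to.
parse_connect_line() {
    sed -n 's/^debug1: Connecting to .* \[\([^]]*\)\] port [0-9]*\.$/\1/p' | head -n 1
}
# usage: ip=$(ssh -v -o BatchMode=yes myhost true 2>&1 | parse_connect_line)
```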
@akalin: Your `LocalCommand` proposal will cause the hostname to be resolved to an IP address separately by ssh and by mosh. They could get different results, either in the case of a DNS round-robin setup, or if there are IPv6 addresses involved.

Your `-v` proposal doesn't solve the problem: if a user passes their own `ProxyCommand`, then ssh does not print such a debug message.
For some users (such as @akalin) this is a worthwhile tradeoff, obviously. We might want to support both methods.
Once OpenSSH has their library API available, using that and then examining the socket it creates may be a cleaner solution to all of this. That partly depends on how elegant that API turns out to be and how much code is needed to make it all go. There are various ssh libraries already, of course, but they vary in support for ssh_config and its options, and in quality of implementation.
@andersk ah, you're right on both counts. But yeah, as @cgull said, maybe it should be an option, until a cleaner solution can be found.
I have a vague idea of getting the pid of the ssh process and using lsof/netstat to find the ip address, but that adds extra dependencies and is more complicated.
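The lsof variant of that idea might look like this. The `-F n` output format (one `n`-prefixed line per connection) is an assumption worth checking against your platform's lsof, and the helper name is my own:

```sh
# peer_from_lsof reads `lsof -a -p <pid> -i TCP -F n` output on stdin and
# prints the peer address of the first connection it finds.
peer_from_lsof() {
    sed -n 's/^n.*->\(.*\):[0-9][0-9]*$/\1/p' | head -n 1
}
# usage: peer=$(lsof -a -p "$ssh_pid" -i TCP -F n 2>/dev/null | peer_from_lsof)
```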
Another thought: if you resolve a hostname and you get a single A or AAAA record with a long TTL, and no intermediate CNAME, you could heuristically decide that LocalCommand is safe, and you might be right most of the time.
We rely on ssh to map the provided hostname to an actual hostname (which may be changed by ssh_config), so we don't get to make that choice until after we have already decided to use ProxyCommand or LocalCommand.
I was looking at this issue because I have the same needs. Indeed it's ssh that makes the translation from the name specified on the command line to the real hostname/IP, but there is still a risk that the fake proxy will resolve the name to something different from what ssh resolved.
@ekacnet, no, there’s no such risk: when using a `ProxyCommand`, ssh only performs the alias translations specified by ssh_config and does not do any DNS resolution itself.
@andersk I know that ssh_config is not doing the DNS resolution, but ssh is doing a DNS resolution from myhost.mydomain.com to IP w.x.y.z. And the ProxyCommand is also doing a resolution.

The thing I missed is that when using a ProxyCommand, the ssh command reads everything from the stdout of the ProxyCommand.
@ekacnet I understand your claim, and I’m telling you that it’s incorrect. ssh does not do a DNS resolution from myhost.mydomain.com to w.x.y.z when using a `ProxyCommand`.
In our corp environment we use an ssh ProxyCommand handler to handle certain security handshake aspects. Would be great if I could just drop that into mosh as well.
Please try Mosh master if you can; I've recently added an experimental change that will help some cases (though if firewalls or NAT are involved, probably not).
Thanks @cgull . Can you tell us what the experimental change is, and how it works?
The change is to send a snippet of shell commands to the server that echoes the `SSH_CONNECTION` variable back to the client, plus some code in the Mosh perl script to interpret that info. However, I've decided that doing that automatically/always is a bit risky and difficult to support, and I'm working on moving it to an option, along with some other new features. Coming soon to a Github repo near you.
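For reference, `SSH_CONNECTION` holds four space-separated fields (client IP, client port, server IP, server port), so interpreting it boils down to picking the third field. The helper name below is my own:

```sh
# server_ip_from_ssh_connection extracts the server address from an
# SSH_CONNECTION value ("client_ip client_port server_ip server_port").
server_ip_from_ssh_connection() {
    # unquoted $1 is intentional: split the value into fields
    set -- $1
    echo "$3"
}
# usage: ip=$(server_ip_from_ssh_connection "$(ssh myhost 'echo "$SSH_CONNECTION"')")
```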
I'm using Corkscrew to CNTLM to the web. It's messed up, but that's the nature of corporate workspaces.
How about providing some new command-line behaviour:

* `--ip=ADDR`: IP address (v4 or v6) supplied; don't use `ProxyCommand`.
* `--host=FOO`: Resolve FOO to determine the remote IP; don't use `ProxyCommand`.
* `--resolve`: Resolve the host to determine the remote IP; don't use `ProxyCommand`.
* Otherwise: the existing `ProxyCommand`-based implementation.

All resolving should be done via the getaddrinfo syscall, which honours /etc/hosts. I'd like it if `--resolve` eventually became the default; perhaps a `--no-resolve` option might be introduced that preserves the current behaviour.
I just added an `--experimental-remote-ip` option to Mosh master, which adds two new ways for Mosh to handle address resolution without using `ProxyCommand`: resolving the hostname locally, or asking the remote server for its address over ssh. They'll each handle some use cases, but all three methods have limitations, and I'm sure they won't help some people. Additionally, both of these new methods make it more possible to use non-OpenSSH clients (`dbclient` works but complains about options it doesn't support). Please try this and let us know how it works or doesn't for you.
Adding options to interrogate the remote host to determine its IP is good, but in reality what you are trying to achieve is NAT traversal. Have a read of https://en.wikipedia.org/wiki/NAT_traversal and consider what can be done using ICE/STUN/etc. to establish a connection. Mosh should not be inventing new NAT traversal techniques.
Actually, no. We're quite well aware of the issues with server-side NAT traversal, and these changes aren't intended to help with that. They're intended to help with the difficulty of obtaining the remote address reliably from the ssh connection, so that ssh and Mosh traffic go to the same server.
I'm working on a `moshd` to help with issues with port usage on servers, and this shows some promise of being extensible to help with server-side NAT problems.
Personally, I'm not a big believer in NAT; I believe VPNs or IPv6 are generally better solutions. But there's certainly many potential use cases for Mosh that don't involve either.
If you run a full ICE RFC 5245 session, or even better ICE-Trickle https://tools.ietf.org/html/draft-ietf-ice-trickle-01, you'd be able to reliably enumerate all local and remote ip addresses, as well as respond to changes in them.
Server-side NAT traversal is tracked as #48, not here.
This bug pretty much makes Mosh useless for me and my corporate colleagues who are trying to connect out from behind a proxy.
@craftyguy Do you have an unusual setup that requires proxied SSH connections but allows unproxied UDP connections? If so, you’ve found the right issue, and the new `--experimental-remote-ip=local` or `--experimental-remote-ip=remote` options in Mosh 1.2.6 may work for you.
If not, then this issue is not what makes Mosh useless for you. This is not a catch-all issue for connecting from behind arbitrary proxy servers. Mosh requires UDP, and there’s no getting around that. (There’s some discussion about TCP support over at #13, but it’s unlikely to be implemented any time soon.)
I use assh, which adds ProxyCommand to every connection, and I got it fully working with the following steps:

* Create /usr/local/bin/mosh_fallback:

```bash
#!/usr/bin/env bash
mosh --experimental-remote-ip=remote "$@"
status=$?
if [ $status -eq 5 ] || [ $status -eq 127 ] || [ $status -eq 10 ]; then
    ssh "$@"
fi
```

* Alias ssh to it: `alias ssh "/usr/local/bin/mosh_fallback"`
After that, it worked flawlessly on all my use cases, automatically falling back to ssh when mosh isn't installed on the remote.
I can use mosh trying to connect to my server; however, it says:

```
/usr/bin/mosh: Using remote IP address 172.25.23.30 from $SSH_CONNECTION for hostname quilava-proxy
mosh did not make a successful connection to 172.25.23.30:60001.
Please verify that UDP port 60001 is not firewalled and can reach the server.

(By default, mosh uses a UDP port between 60000 and 61000. The -p option
selects a specific UDP port number.)

[mosh is exiting.]
```

172.25.23.30 is my internal IP address. Here is my ssh_config:

```
Host some-server-behind-proxy
    Hostname 172.25.23.30
    ProxyCommand nc -x localhost:1080 %h %p
```

Could it be a problem with my `nc` command?
I'm getting similar issues when I set RemoteCommand in .ssh/config. Is this the same issue, or should it be tracked elsewhere?
@smaslennikov It’s similar but even more fundamental. Mosh uses the remote command to launch the remote `mosh-server`. (It uses the command argument instead of the `RemoteCommand` option, but SSH doesn’t allow you to use both, so SSH errors with “Cannot execute command-line and remote command.”)

If you want to specify the command to launch inside the Mosh session, simply provide it after `--` on the Mosh command line: `mosh HOSTNAME -- screen -dr`.
@andersk - I have hacked up your --experimental-remote-ip option for my own use case; please see the attached patch (sorry, I'm a dinosaur who doesn't understand git): mosh.diff.txt

Basically it allows setting a hostname/IP which is only used as the target of the UDP packets. This way I can use `ProxyJump` in `.ssh/config`, and then run mosh like this:

```
mosh --bind-server=<target LAN IP address>|any --experimental-remote-ip=gateway:<firewall external hostname> -p <port range opened on firewall for target host> <entry in .ssh/config with ProxyJump option>
```

I guess it's a bodge, but it works for me. Setting up port ranges etc. would be tedious where there is a multitude of hosts on the LAN, but I only have one or two hosts I'm likely to want to jump to like this.
looks like you figured it out just fine. If you apply this diff on a fork, you can submit it as a PR to the repo: https://help.github.com/en/github/getting-started-with-github/fork-a-repo
What happened to this?
EDIT: found it ... switch is called --experimental-remote-ip={proxy|local|remote}
Can be closed I guess? Is it still experimental?
It seems that despite the above patch file, seamlessly switching from SSH to mosh by just saying `mosh some-server`, where some-server has a `ProxyCommand` in ~/.ssh/config, still doesn't work. Are there any plans for fixing that? Is it even conceptually possible to ever have it work in a transparent fashion? My guess is there's a lot of people who can only, or only want to, access a lot of hosts via `ProxyCommand`.
The only thing that stops `--experimental-remote-ip=local` from working with ProxyCommands in ~/.ssh/config is the line `$userhost = "$user$ip";` in /usr/bin/mosh, around line 340. Comment that line out and ssh will be called with the original command-line argument. The mosh command will still resolve the host to an IP address locally, so you have to ensure that wherever your ssh starts `mosh-server`, the UDP packets from `mosh-client` get there too, e.g. via NAT at the firewall. Any chance to have the remote-ip 'local' option without the change to $userhost?
The reason Mosh uses "$user$ip" there is to ensure that both ssh and mosh talk to the same address, in the case where the hostname has multiple A/AAAA records or round-robin config or dynamically changing results. So, your proposed fix breaks a significant use case.
...but Mosh could certainly add another variation on `--experimental-remote-ip` to handle this particular use case.
`mosh` could also use `ssh -G` to automatically detect an already-existing `ProxyCommand` and change its strategy. That starts to get a bit complex, though, and could be surprising and confusing. This idea also applies to the `RemoteCommand` issue discussed here and in #1175.
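A sketch of the detection half of that idea: `ssh -G` prints the fully resolved configuration for a host with lowercase keywords, so a wrapper could check for a `proxycommand` line before deciding how to launch. The helper name is my own:

```sh
# proxycommand_from_config reads `ssh -G <host>` output on stdin and prints
# the ProxyCommand, if any.
proxycommand_from_config() {
    awk '$1 == "proxycommand" { $1 = ""; sub(/^ /, ""); print; exit }'
}
# usage: proxy=$(ssh -G myhost | proxycommand_from_config)
```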
I know I'm late, but I'd like to share a small script I made recently to use the ssh config ProxyCommand with mosh. It relies on socat to relay mosh data via UDP: your computer <-> relay <-> mosh server.

The script is here: https://github.com/oliv5/profile/blob/master/bin/profile/mosh-proxy.sh
Requirements:
What it does:
Cons:
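The linked script's details aren't reproduced above, but the heart of a socat UDP relay is a single forwarder process on the relay host. The port and the hostname `farside` below are placeholder assumptions; the helper just builds the command string:

```sh
# relay_cmd prints the socat invocation that forwards UDP arriving on
# port $1 at the relay to host $2 on the same port.
relay_cmd() {
    printf 'socat UDP4-LISTEN:%s,fork UDP4:%s:%s\n' "$1" "$2" "$1"
}
# e.g. on the relay host, run: $(relay_cmd 60001 farside)
```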
That's pretty cool and definitely worth a shot on a private server. Thank you for sharing!
The NAT/firewall-busting problems of issue #48 notwithstanding, mosh also has issues with the "ProxyCommand" style of connecting to machines behind NATs. For specificity, here's an example from my .ssh/config. This is convenient, because it lets me type `ssh mauler`, and ssh will connect to linax1.cims.nyu.edu, and the 'outer' ssh run will then run over the tunnel thus set up.

I understand this is hard for mosh to imitate, given UDP connections and whatnot, so let's assume that there's a hole poked into the firewall at port 60000 UDP on linax1.cims.nyu.edu that lets me get to mauler. The main problem then is that mosh tries to resolve `mauler` locally to find the host to connect to, which of course doesn't succeed. I'd argue that it could potentially be smarter about finding the public IP of the target host: when it starts, it is executing code on both ends of the connection, after all. Failing that, I'd like to have an option to specify which host is actually meant.