ronf / asyncssh

AsyncSSH is a Python package which provides an asynchronous client and server implementation of the SSHv2 protocol on top of the Python asyncio framework.
Eclipse Public License 2.0

how to configure asyncssh to update the known-hosts automatically #132

Closed slieberth closed 6 years ago

slieberth commented 6 years ago

Hello Ron, I encounter a problem when I try to dockerize my provisioning tool, which uses asyncssh as the underlying connection service. This is most likely a problem with the known_hosts file, as the ssh environment inside Docker is created from scratch over and over ... and not all servers are in the known_hosts file.

What is not working from the beginning: docker(172.17.0.2) asyncssh --> host(10.10.20.150) ---> host(10.10.20.157:2211) ---> LXC(10.0.3.30:30). I get the error message "Connection lost: Disconnect Error: No trusted server host keys available":

    2018-03-17 08:41:59,842 : INFO : Opening SSH connection to 10.10.20.157, port 2211
    2018-03-17 08:41:59,844 : INFO : [conn=1] Connection lost: Disconnect Error: No trusted server host keys available

After creating the known_hosts entry with the regular ssh client: docker(172.17.0.2) ssh --> host(10.10.20.150) ---> host(10.10.20.157:2211) ---> LXC(10.0.3.30:30)

    debug3: hostkeys_foreach: reading file "/root/.ssh/known_hosts"
    The authenticity of host '[10.10.20.157]:2211 ([10.10.20.157]:2211)' can't be established.
    ECDSA key fingerprint is SHA256:oBaUoGYTans4VS3D4wPkH49GtU9FTEyOwwCYg3fyLhE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '[10.10.20.157]:2211' (ECDSA) to the list of known hosts.

After this step, asyncssh connections are also working: docker(172.17.0.2) asyncssh --> host(10.10.20.150) ---> host(10.10.20.157:2211) ---> LXC(10.0.3.30:30)

    2018-03-17 09:18:20,327 : INFO : Opening SSH connection to 10.10.20.157, port 2211
    2018-03-17 09:18:20,330 : INFO : [conn=0] Connection to 10.10.20.157, port 2211 succeeded
    2018-03-17 09:18:20,330 : INFO : [conn=0] Local address: 172.17.0.2, port 55780
    2018-03-17 09:18:20,333 : DEBUG : [conn=0] Requesting key exchange
    2018-03-17 09:18:20,335 : DEBUG : [conn=0] Beginning key exchange
    2018-03-17 09:18:20,341 : DEBUG : [conn=0] Completed key exchange
    2018-03-17 09:18:20,343 : INFO : [conn=0] Beginning auth for user ubuntu
    2018-03-17 09:18:20,356 : DEBUG : [conn=0] Trying public key auth with ssh-rsa-cert-v01@openssh.com key
    2018-03-17 09:18:20,358 : DEBUG : [conn=0] Trying public key auth with ssh-rsa key
    2018-03-17 09:18:20,359 : DEBUG : [conn=0] Trying password auth
    2018-03-17 09:18:20,367 : INFO : [conn=0] Auth for user ubuntu succeeded
    2018-03-17 09:18:20,367 : DEBUG : [conn=0, chan=0] Set write buffer limits: low-water=16384, high-water=65536
    2018-03-17 09:18:20,367 : INFO : [conn=0, chan=0] Requesting new SSH session
    2018-03-17 09:18:20,374 : DEBUG : [conn=0, chan=0] Terminal type: Dumb
    2018-03-17 09:18:20,374 : DEBUG : [conn=0, chan=0] Terminal size: 200x24
    2018-03-17 09:18:20,375 : INFO : [conn=0, chan=0] Interactive shell requested
    2018-03-17 09:18:20,388 : DEBUG : set prompts to ['ubuntu@RtBrick1:\S*\$ ']

I would appreciate some advice on how I can configure asyncssh to update the known_hosts file automatically.

many thanks in advance and thanks for providing asyncssh, it is great!!!

Greetings from Berlin, Stefan

ronf commented 6 years ago

Hi Stefan,

If I understand right, 10.10.20.157:2211 in this example was an existing SSH server, and you want a newly-created Docker container at 172.17.0.2 to be able to connect to it. I'm not quite sure how 10.10.20.150 or 10.0.3.30 fit in here, but that may not matter.

Does the host which the Docker container is running on already have a known_hosts entry for 10.10.20.157:2211? If so, you should be able to simply add that known_hosts file from the host into the Docker container when it is created. If you know only a few specific hosts are needed, you could create a custom known_hosts file with just those hosts in it and add that file into /root/.ssh/known_hosts (or whatever user's home directory you need this in).

You can do this by mounting the known_hosts file on the host as a volume when docker is started up, or you could use an ADD command to copy the file into the container when the container is created. You could also use "docker cp" after the container is created to copy it from the host.

All of these assume that the host keys you need are for established hosts and that you already have them on the machine creating the container. If you don't already have the key, it gets a lot harder, as automatically trusting the key you get back as you are doing in this example when you run the regular ssh client is a potential security hole -- it opens you up to man-in-the-middle attacks if you don't validate the fingerprint somehow before saying "yes" to adding the key. This is part of why AsyncSSH doesn't provide this functionality today. It gives you several different ways to provide a list of known hosts, but it requires you to vet them yourself before it will use them.

slieberth commented 6 years ago

Hi Ron, thanks for the detailed answer, much appreciated.

I found a way to add/provision the required keys to the known_hosts file in the Docker container via a specific runbook. After that, other runbooks using asyncssh can be used. In aioRunbook syntax the step looks like:

- record:
    name: "local-shell add keys for 10.10.20.157 to known-hosts"
    method: local-shell
    commands: 
      - ssh-keyscan -t rsa,dsa 10.10.20.157 2>&1 >> /root/.ssh/known_hosts
      - ssh-keyscan -t rsa,dsa -p 2211 10.10.20.157 2>&1 >> /root/.ssh/known_hosts 

I have tested this successfully ...

thanks again, and have a nice weekend Stefan

ronf commented 6 years ago

While ssh-keyscan will work, that has the weakness I mention above of potentially opening you up to man-in-the-middle attacks. If you tightly control the network, this may be something you can decide isn't a concern, but best practice would be to know the public key of 10.10.20.157:2211 in advance and populate only THAT key (and any other public keys you need) explicitly from a known good copy stored in advance on the system creating the Docker image. That way, there's no chance for an attacker to insert their own key in response to your running ssh-keyscan.

Also, I don't know if this set of commands would ever run more than once on a given Docker image, but if it did you'd end up with multiple copies of the host keys in the resulting known_hosts file.
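As a minimal sketch of that pre-vetted approach on the machine building the image (assuming the server's public key was collected out-of-band earlier and stored as server_10.10.20.157.pub; the file names, host, and port here are only illustrative):

    import asyncssh

    # Load the pre-vetted public key stored on the build machine
    key = asyncssh.read_public_key('server_10.10.20.157.pub')

    # Build a known_hosts entry for the non-standard port and append it to
    # the file that will be copied into the container
    entry = b'[10.10.20.157]:2211 ' + key.export_public_key('openssh').strip() + b'\n'

    with open('known_hosts', 'ab') as f:
        f.write(entry)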

slieberth commented 6 years ago

Hi Ron, I completely understand your concerns on the public internet, but ;-)

I have slightly amended the script so that the first ssh-keyscan builds the known_hosts file from scratch.

Currently the ssh-keyscan workaround works well in my VPN/lab environment. I think for those private environments a specific knob in asyncssh which allows the modification of the known_hosts file would be valuable. But as written, the workaround works for me, so this is only a minor request from my side ... :-)

Greetings from Berlin, and have a nice weekend. Stefan

ronf commented 6 years ago

Yeah - I can see how this might not be a concern in your current environment. However, one of the real benefits with Docker is that it's very easy to move around where containers are run, often without even having to change the network numbering. So, what starts out as a completely local network today might not be one tomorrow. I tend to like building things from the beginning assuming the network is not trusted, or that the only trust needed is on the host system building the container and not the network used by the container after it is started.

That said, asyncssh does already provide everything you need to very easily create new key pairs or use existing public keys as known_hosts entries. There are multiple ways to do this, depending on how you want to access the keys.

If you need to create a new key for use as a host key for a server hosted by AsyncSSH, you can do so using asyncssh.generate_private_key(), specifying the key type and optional parameters specific to that key type. You can then call write_public_key() or append_public_key() on that key to write the corresponding public key out to a file, and that file can be directly used as a known_hosts file. You can even include a comment string in what's written out by specifying a 'comment' keyword argument on the generate_private_key() call or by calling set_comment() on the key before writing it out.
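A minimal sketch of that flow (the key type, comment, and file names below are just examples):

    import asyncssh

    # Generate a new host key, attaching a comment that is carried along
    # when the public key is written out
    key = asyncssh.generate_private_key('ssh-rsa', comment='container host key')

    # Private half for the SSH server, public half for clients to trust
    key.write_private_key('ssh_host_rsa_key')
    key.write_public_key('ssh_host_rsa_key.pub')

    # Or append the public key to an existing file instead
    key.append_public_key('collected_host_keys')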

If you need to add some known hosts options in front of the key, you can use export_public_key() to generate the key as a byte string, and then write the options along with the exported key data to the known_hosts file you are creating.
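For example, a sketch of marking a (hypothetical) CA key as trusted for a whole domain; the marker pattern and file names are illustrative:

    import asyncssh

    # A previously vetted CA public key
    ca_key = asyncssh.read_public_key('trusted_ca.pub')

    # Prepend a known_hosts marker and host pattern to the exported key data
    line = b'@cert-authority *.example.com ' + ca_key.export_public_key('openssh').strip() + b'\n'

    with open('known_hosts', 'ab') as f:
        f.write(line)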

If you are using AsyncSSH as a client and already have a public key file generated by another tool, there's no need to append it to something like .ssh/known_hosts. You can just provide the file name directly in the "known_hosts" keyword argument when creating your client, or provide a byte string containing the known host data if you have some other way to get the data and don't want to have to put it in a file.
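A minimal client-side sketch, assuming a pre-built known_hosts file is available at /etc/my_known_hosts inside the container (the path, host, port, and username are illustrative):

    import asyncio
    import asyncssh

    async def run():
        # Validate the server against the pre-vetted file instead of ~/.ssh/known_hosts
        async with asyncssh.connect('10.10.20.157', port=2211, username='ubuntu',
                                    known_hosts='/etc/my_known_hosts') as conn:
            result = await conn.run('hostname', check=True)
            print(result.stdout, end='')

    asyncio.get_event_loop().run_until_complete(run())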

You can also provide your own matching function or an already-matched collection of results of allowed and revoked keys and certificates. Take a look at http://asyncssh.readthedocs.io/en/latest/api.html#specifying-known-hosts for more details on how that works.

ronf commented 6 years ago

Hi Stefan,

I realized in re-reading this today that I didn't really address your point about dynamically creating containers with new keys. If this is all happening from a common host, you might be able to run ssh-keygen on the host itself and then inject the private keys into the new container when creating it, or alternately let the container do the ssh-keygen but then copy just the public key out to the host with "docker cp" once the container is created, making it available as known_hosts information to provide to other containers.

You can also use the generate_private_key() and write_public_key() functions I mention above on either the host or within new containers instead of using ssh-keygen, if that's more convenient. You don't even need to set up an async event loop to use these functions, as all the calls are synchronous.

slieberth commented 6 years ago

Hi Ron, sorry for the late reply, unfortunately I was too busy over the last days.

You are right that we should do things right from the beginning, and persisting fingerprints when creating containers is the right approach.

I will also follow your advice and test the generate_private_key() and write_public_key() options from asyncssh over the next days. Once this is done, I will provide feedback.

Thanks again for the advice. Greetings from Berlin, Stefan

slieberth commented 6 years ago

Hi Ron,

I thought about the use cases and came to the conclusion that in my program a knob to adhere to known_hosts (or not) would be the best approach. Either the user cares about known hosts (when running test cases over the internet) or not (running in VPNs ...). For the latter it is not actually required to add the keys to known_hosts ...

The solution looks like this:

        try:
            if self.stepDict["sshKnownHostsCheck"]:
                # Default behaviour: validate the server against ~/.ssh/known_hosts
                self._conn = await asyncio.wait_for(asyncssh.connect(self.hostname,
                                    port=self.port,
                                    username=self.username,
                                    password=self.password), timeout=self.timeout)
            else:
                # known_hosts=None disables host key validation entirely
                self._conn = await asyncio.wait_for(asyncssh.connect(self.hostname,
                                    port=self.port,
                                    username=self.username,
                                    known_hosts=None,
                                    password=self.password), timeout=self.timeout)
        except Exception as exc:
            logging.error("aioSshConnect.connect to {} failed: {}".format(self.hostname, exc))

Tests were successful :-) Thanks for asyncssh, and greetings from Berlin. Stefan

ronf commented 6 years ago

As you noted, AsyncSSH does give you the option to completely disable the known hosts check, but it opens you up to the same security risk that ssh-keyscan does. There's basically nothing preventing a man-in-the-middle. That may be fine if you think the user will always set this option correctly based on whether the network is secure or not, but there's always a risk that this will get set and forgotten and the network may change later, so I still see it as a potential risk.

Given the various ways to securely get keys in and out of a container when creating it, I'd still lean toward that and always leaving known_hosts checking enabled, even on networks where you may not currently need it.

ronf commented 6 years ago

Closing this for now, since it looks like the question is answered. Feel free to open a new issue if you have additional questions or concerns.