Closed ORESoftware closed 2 years ago
Does yes | ob deploy init ... work?
@3noch it doesn't work, unfortunately... I think that's because it's designed so that that technique doesn't work
we tried:
yes | ob deploy init
yes yes | ob deploy init
right yeah that would be insecure
Actually that's probably good. That's very insecure.
There is no way to avoid this check, but that's intentional. Perhaps I should ask: Why do you want this?
ob deploy checks against a list of known host public keys stored in the configuration directory, rather than the one on the machine that happens to be doing a particular deployment. This is because, in the event that you need to switch from one deploy machine / bastion host to another, we want to be absolutely sure that you're still connecting to the machines you think you are, even if that deploy machine / bastion host has never connected to them before. We don't want to create a workflow that encourages people to accept host keys without checking them, since that could result in leaking production secrets to anyone who manages to MITM you, e.g. via DNS spoofing or cache poisoning. (Note that an active attack is a circumstance where you may need to quickly switch bastion hosts, for example because the attacker has taken one down or you have taken it down in case it was compromised, and it's also a circumstance where you might need to deploy to production, for example to fix an exploit or rotate keys.)
In order to prepare for this, ob deploy init asks you to manually confirm the host keys and then stores them in the configuration directory; that way, you shouldn't need to confirm them when you run ob deploy, just once on ob deploy init.
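The manual confirmation described above amounts to comparing the host key's fingerprint against a value obtained out of band (e.g. from the server's console). As a rough sketch using stock OpenSSH tools, where the hostname is a placeholder and not taken from the thread:

```shell
#!/bin/sh
# Sketch: out-of-band host key verification with stock OpenSSH tools.
# "$server" is a hypothetical deploy target, not a value from this issue.
server="example.com"

# Fetch the host key the server currently advertises. By itself this
# proves nothing -- an attacker in the middle could answer here too.
ssh-keyscan -t ed25519 "$server" > /tmp/candidate_key 2>/dev/null

# Print the fingerprint of the fetched key. Compare this value by hand
# against the fingerprint you obtained through a trusted channel before
# letting any tool record the key as "known".
ssh-keygen -lf /tmp/candidate_key
```

Only after the fingerprints match should the key be treated as verified; this is the check that piping yes into the prompt would silently skip.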
I think a good approach here, where you already have the host keys, is to pre-populate the backend_known_hosts file in the ob deploy init directory with the correct host keys. I'm not sure whether ob deploy init currently works if that file's already in place, but we should make sure it does. I've added https://github.com/obsidiansystems/obelisk/issues/514 to track that.
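If a verified record already exists in ~/.ssh/known_hosts, the pre-population step could look roughly like the following. The directory path is a placeholder, and whether ob deploy init tolerates a pre-existing backend_known_hosts file is exactly the open question tracked in #514:

```shell
#!/bin/sh
# Sketch: copy an already-verified record from ~/.ssh/known_hosts into
# a deploy directory's backend_known_hosts file. Both "$server" and
# "$deploydir" are hypothetical placeholders, not values from the thread.
server="example.com"
deploydir="./my-deploy"

mkdir -p "$deploydir"
# ssh-keygen -F prints the matching known_hosts entries; the leading
# "# Host ..." comment lines it emits are stripped with grep.
ssh-keygen -F "$server" | grep -v '^#' > "$deploydir/backend_known_hosts"
```

This keeps the trust decision where it was already made (the manual verification that put the key into ~/.ssh/known_hosts) rather than prompting again.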
I would urge you (or anyone else reading this) in the strongest terms not to construct any production workflow that does not involve strongly verifying the identities of deployment target machines.
@ryantrinkle we can't pre-populate the backend_known_hosts file in the ob deploy init directory because that directory does not exist yet. The problem I'm referring to in the OP exists because we get this prompt when we run the ob deploy init command. We are telling it which server to push to (using --hostname "$server"); please take a second look at the OP.
OK, I think I did miss some of the nuance in your first post; sorry about that! Given that you already have a verified key in ~/.ssh/known_hosts, I do think that ob deploy init could piggy-back off of that, on the theory that if someone has managed to poison ~/.ssh/known_hosts, you're probably already compromised enough that ob deploy init can't do anything about it.
So, here's what I think could work:
- If the host is already in ~/.ssh/known_hosts, we use that record
- Hosts not in ~/.ssh/known_hosts require manual validation, like the current behavior
Sounds good to me. For people who have dynamic environments where they need ob deploy init dirs on demand, this will be helpful for automation.
Yes, this seems like a nice improvement without loss of security.
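The proposed fallback logic can be sketched with ssh-keygen's lookup flag, which searches ~/.ssh/known_hosts by default; the hostname here is a placeholder, and the branch bodies only describe what ob deploy init would do:

```shell
#!/bin/sh
# Sketch of the proposed behavior: reuse a locally verified host key
# when one exists, otherwise fall back to manual confirmation.
# "$server" is a hypothetical target, not a value from this issue.
server="example.com"

# ssh-keygen -F searches ~/.ssh/known_hosts for the given host and
# exits 0 if (and only if) a matching record is found.
if ssh-keygen -F "$server" > /dev/null; then
  echo "host key for $server already verified locally; reuse that record"
else
  echo "no local record for $server; manual validation still required"
fi
```

Automation could then run the init non-interactively only along the first branch, keeping the manual-verification guarantee intact for unknown hosts.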
@ryantrinkle it's not only that known_hosts has the host in it; it's also that we are explicitly including the host, e.g. ob deploy init --hostname "$server"
I believe this to be resolved by #916 . Please re-open the issue if it still persists.
I have this command:
what happens is we get a prompt:
since the host is already known (it's in the .ssh/known_hosts file), is there a way to update ob deploy init to run ssh so that it doesn't prompt for a host check?
we tried doing this:
but it doesn't seem to override whatever ssh commands are run by ob deploy init.
Perhaps ob deploy init can accept a known-hosts argument: