aluzzardi closed this 7 years ago
/cc @docker/fiesta-cucaracha-maintainers (especially @diogomonica) @ehazlett
Agreed with @ehazlett on the preference for having it on the manager. I'm concerned, though, about how the SSH key gets to the client in the first place, even with a forwarding solution.
All the uncertainty around how authentication to the manager happens plays into this further. If the user authenticates to the manager using SSH, we could simply put that public key on all the agents and proxy a connection. If the user is connecting to the "engine" using TLS or a local unix socket, then I'm not exactly sure how we would implement this.
We may want to look for a middle ground here. The solution will be much more robust and scalable if the managers coordinate authentication and port location, and the SSH connection then goes directly to the box. Some applications may require fast, scalable SSH access to containers.
If the box is not reachable on the network, then so be it; that is the design of the network. We can design a companion plugin that proxies/forwards SSH connections, but in general this should be offloaded from the managers. They will already be handling quite a few connections and a lot of cluster coordination, so responsiveness would be poor, especially when trying to debug problematic tasks on a heavily loaded cluster.
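The companion plugin suggested above amounts to little more than a TCP splice in front of each node's sshd. A minimal sketch in Go, under that assumption: the `proxy` and `roundTrip` names, the addresses, and the echo server standing in for sshd are all hypothetical, not SwarmKit code.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// proxy splices a client connection onto targetAddr, copying bytes in both
// directions until one side closes. This sketches the companion forwarding
// plugin; the name and signature are illustrative, not a SwarmKit API.
func proxy(client net.Conn, targetAddr string) {
	backend, err := net.Dial("tcp", targetAddr)
	if err != nil {
		client.Close()
		return
	}
	done := make(chan struct{}, 2)
	cp := func(dst, src net.Conn) {
		io.Copy(dst, src)
		done <- struct{}{}
	}
	go cp(backend, client) // client -> node
	go cp(client, backend) // node -> client
	<-done                 // one direction finished; close both to unblock the other
	client.Close()
	backend.Close()
	<-done
}

// roundTrip wires an in-process stand-in for a node's sshd (a plain echo
// server) behind the proxy, pushes msg through it, and returns what comes back.
func roundTrip(msg string) string {
	backend, _ := net.Listen("tcp", "127.0.0.1:0")
	defer backend.Close()
	go func() {
		c, _ := backend.Accept()
		io.Copy(c, c) // echo, standing in for sshd
	}()

	front, _ := net.Listen("tcp", "127.0.0.1:0")
	defer front.Close()
	go func() {
		c, _ := front.Accept()
		proxy(c, backend.Addr().String())
	}()

	conn, _ := net.Dial("tcp", front.Addr().String())
	defer conn.Close()
	conn.Write([]byte(msg))
	buf := make([]byte, len(msg))
	io.ReadFull(conn, buf)
	return string(buf)
}

func main() {
	fmt.Println(roundTrip("hello"))
}
```

Because the splice never inspects the stream, the managers stay out of the data path: they would only hand the plugin a target address, which is the offloading argued for above.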
Let's take a peek at a proposed workflow for an invocation of `swarmctl ssh <task id/name>`:
`swarmctl ssh` then executes the client's local `ssh` command, if found, or falls back to the Go ssh library, using the parameters provided by the request. The primary functionality here is the introduction of an SSH redirect, which cannot be done with `ssh` alone (as far as I can tell, with good reason). Building this hook lets us be quite flexible in the future. We get a number of follow-on benefits:
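The local-client-or-fallback step of that workflow could be sketched as follows. `resolveSSH` is a hypothetical helper, and the "Go ssh library" fallback is assumed to be something like `golang.org/x/crypto/ssh` (not imported here); only the selection logic is shown.

```go
package main

import (
	"fmt"
	"os/exec"
)

// resolveSSH decides how swarmctl ssh would run: prefer the client's local
// ssh binary if one is on PATH, otherwise signal that the built-in Go SSH
// client should be used. The name and return values are a sketch, not a
// real swarmctl API.
func resolveSSH(binary string) (path string, useGoFallback bool) {
	p, err := exec.LookPath(binary)
	if err != nil {
		return "", true // no local client found; use the Go ssh library
	}
	return p, false
}

func main() {
	if path, fallback := resolveSSH("ssh"); fallback {
		fmt.Println("no local ssh found; using Go ssh library")
	} else {
		fmt.Println("exec'ing local client:", path)
	}
}
```

Delegating to the local `ssh` when present keeps the user's existing config, agents, and known-hosts handling; the library fallback only matters on clients without an SSH binary.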
Closing in favor of duplicate #1896.
Attaching doesn't feel like a cluster management operation. Additionally, since agents connect to managers rather than the other way around, it wouldn't work seamlessly.
What if instead we provided an ssh service out of band from the cluster API to allow attach (debugging) operations?
It could either be `ssh`ing into the node's IP at our SSH port