These commands are currently relying on the local context extension for information on the cluster nodes. This won't work in the future when nodes may be added or removed from the cluster after deployment.
We need to query the Kubernetes API server to gather this information (kind of like we did for the old Docker Swarm implementation).
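Something along these lines should work (just a rough sketch using the official Kubernetes C# client, not actual neon-cli code; the exact call shape depends on the KubernetesClient package version):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using k8s;
using k8s.Models;

public static class NodeQueryExample
{
    // Lists cluster node names and internal IP addresses straight from the
    // Kubernetes API server instead of relying on the local context extension.
    public static async Task ListNodesAsync()
    {
        var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();

        using var client = new Kubernetes(config);

        var nodes = await client.CoreV1.ListNodeAsync();

        foreach (var node in nodes.Items)
        {
            var address = node.Status.Addresses?
                .FirstOrDefault(a => a.Type == "InternalIP")?.Address;

            Console.WriteLine($"{node.Metadata.Name}: {address}");
        }
    }
}
```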
I'm going to go ahead and remove these commands. Here's the logic:
These commands assume that the user is on the same network as the cluster nodes. In the old neonHIVE days, we integrated OpenVPN into the cluster and had code to automatically connect client workstations to the VPN for SSH/SCP as well as for other operations like cluster setup and workload management.
We dropped OpenVPN when we switched to Kubernetes and neonKUBE a couple years ago and we really don't want to encourage folks to SSH directly into nodes or be VPNed into the cluster network by default. I can see bringing OpenVPN back in the future as an optional feature, but I suspect that will be more about connecting clusters and/or on-premise networks rather than being an end-user feature.
The plan for end-user Kubernetes access is to tunnel Kubernetes commands and dashboards through standard cluster ingress.
The old neonHIVE neon-cli used to have commands allowing folks to easily SSH/SCP cluster nodes via PuTTY and WinSCP. We removed these for neonKUBE because we removed the integrated cluster OpenVPN.
We've extended IHostingManager to support adding/removing SSH NAT rules on demand. These rules provide public SSH access (with optional source address white/black listing), so we should bring these commands back (see the sketch after these commands):

neon ssh <node-name>
neon scp <node-name>
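For context, here's roughly the kind of surface area these commands would rely on. The member and type names below are hypothetical, purely for illustration; the actual IHostingManager additions may look different:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical sketch of the SSH NAT rule support; not the actual
// IHostingManager API.
public interface ISshNatSupport
{
    // Adds a public SSH NAT rule for the named node, optionally restricting
    // the source addresses allowed to connect.
    Task EnableSshAsync(string nodeName, IEnumerable<string> allowedSources = null);

    // Removes the public SSH NAT rule for the named node.
    Task DisableSshAsync(string nodeName);

    // Returns the names of nodes that currently have public SSH NAT rules.
    Task<IReadOnlyList<string>> ListSshEnabledNodesAsync();
}
```

The key design point is that the rules are added and removed on demand rather than left open, so nodes are only publicly reachable over SSH while the user has explicitly connected.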
neonKUBE nodes have SSH configured with a very strong password (the same one for all nodes) and need to be configured with an SSH server certificate as well (also the same across nodes). We then need to add these commands to manage the NAT rules (a rough sketch of what connect might do follows the list):
neon cluster connect
neon cluster disconnect
neon cluster status
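To make the intent concrete, a [neon cluster connect] handler could look something like this (again just a sketch built on the hypothetical interface above, not the actual implementation):

```csharp
using System.Threading.Tasks;
using k8s;

public static class ClusterConnectExample
{
    // Sketch of what [neon cluster connect] might do: enumerate the nodes
    // via the Kubernetes API and add a public SSH NAT rule for each one.
    // [ISshNatSupport] is the hypothetical interface sketched earlier.
    public static async Task ConnectAsync(ISshNatSupport hostingManager, string[] allowedSources)
    {
        var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();

        using var client = new Kubernetes(config);

        var nodes = await client.CoreV1.ListNodeAsync();

        foreach (var node in nodes.Items)
        {
            await hostingManager.EnableSshAsync(node.Metadata.Name, allowedSources);
        }
    }
}
```

[neon cluster disconnect] would do the reverse (remove the rules) and [neon cluster status] would just report which nodes currently have rules in place.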