Closed: Jeroen0494 closed this issue 4 years ago
@Jeroen0494
Thank you for posting the issue.
From the message you provided, I suspect the node name you specified wasn't resolved inside the sshjump pod (a container created from the corbinu/ssh-server image).
Can you try again without --cleanup-jump and let me know what happens?
kubectl ssh-jump --skip-agent acc-jri-ot-k8s003
In addition, can you upgrade ssh-jump to the latest version and try with the node's internal IP address?
I just released a new version, v0.3.2, which allows you to SSH in using the node's IP address.
# upgrade ssh-jump
kubectl krew upgrade ssh-jump
# get node IP addresses. Simply executing "kubectl ssh-jump" prints the list of destination nodes along with the command usage. Or you can get the node internal IPs like this:
kubectl get no -o custom-columns=Hostname:.metadata.name,Internal-IP:'{.status.addresses[?(@.type=="InternalIP")].address}'
# then, try with the node IP address
kubectl ssh-jump --skip-agent --cleanup-jump <node-IP-address>
Hi,
Thanks, I'll try next week when I'm back from holiday.
Jeroen
Hi,
I updated sshjump to the latest version:
$ kubectl krew update
Updated the local copy of plugin index.
New plugins available:
* roll
Upgrades available for installed plugins:
* ssh-jump v0.3.1 -> v0.3.2
$ kubectl krew upgrade
Updated the local copy of plugin index.
Upgrading plugin: access-matrix
Skipping plugin access-matrix, it is already on the newest version
Upgrading plugin: df-pv
Skipping plugin df-pv, it is already on the newest version
Upgrading plugin: krew
Skipping plugin krew, it is already on the newest version
Upgrading plugin: pod-dive
Skipping plugin pod-dive, it is already on the newest version
Upgrading plugin: ssh-jump
Upgraded plugin: ssh-jump
WARNING: You installed plugin "ssh-jump" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
Then I tried jumping to the host by hostname:
$ kubectl ssh-jump --skip-agent acc-jri-ot-k8s003
using: sshuser=j.rijken
using: identity=/home/jeroen/.ssh/id_rsa_saltaccers
using: pubkey=/home/jeroen/.ssh/id_rsa_saltaccers.pub
using: port=22
Creating SSH jump host (Pod)...
pod/sshjump created
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
Handling connection for 2222
nc: getaddrinfo: Name or service not known
ssh_exchange_identification: Connection closed by remote host
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
sshjump 1/1 Running 0 11s
$ kubectl exec -ti sshjump -- /bin/bash
root@sshjump:/# cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.acc-jri-ot-k8s.privatehybridcloud.eu svc.acc-jri-ot-k8s.privatehybridcloud.eu acc-jri-ot-k8s.privatehybridcloud.eu
options ndots:5
root@sshjump:/# dig acc-jri-ot-k8s003
bash: dig: command not found
root@sshjump:/# ping acc-jri-ot-k8s003
ping: unknown host acc-jri-ot-k8s003
root@sshjump:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.233.121.14 sshjump
root@sshjump:/# exit
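Since dig isn't installed in the sshjump image, resolution can also be checked with getent, which ships with glibc and exercises the same NSS lookup path that nc and ssh use. A minimal sketch (the check_resolution helper is hypothetical, not part of the plugin):

```shell
# getent is part of glibc, so it is present even in minimal images that
# lack dig; it goes through the same NSS resolver path as nc and ssh.
check_resolution() {
  if getent hosts "$1" >/dev/null; then
    echo "resolved: $1"
  else
    echo "unresolved: $1"
  fi
}
check_resolution localhost           # from /etc/hosts, expected to resolve
check_resolution acc-jri-ot-k8s003   # the failing node name
```

A name that only fails via getent but works via dig against a specific server would point at an NSS configuration problem rather than the DNS server itself.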
And again with an internal IP, that works!
$ kubectl ssh-jump --skip-agent 172.18.113.165
using: sshuser=j.rijken
using: identity=/home/jeroen/.ssh/id_rsa_saltaccers
using: pubkey=/home/jeroen/.ssh/id_rsa_saltaccers.pub
using: port=22
Forwarding from 127.0.0.1:2222 -> 22
Forwarding from [::1]:2222 -> 22
Handling connection for 2222
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-91-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Thu May 28 08:09:26 CEST 2020
System load: 1.21
Usage of /: 43.5% of 19.49GB
Memory usage: 71%
Swap usage: 0%
Processes: 264
Users logged in: 0
IP address for ens160: 172.18.113.165
IP address for docker0: 172.17.0.1
IP address for kube-ipvs0: 10.233.0.1
IP address for tunl0: 10.233.102.0
IP address for nodelocaldns: 169.254.25.10
[...]
So you're correct that DNS doesn't seem to work inside the pod.
Some DNS debugging based on https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ :
$ kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/dnsutils configured
$ kubectl exec -ti dnsutils -- nslookup kubernetes.default
Server: 169.254.25.10
Address: 169.254.25.10#53
Name: kubernetes.default.svc.acc-jri-ot-k8s.privatehybridcloud.eu
Address: 10.233.0.1
$ kubectl exec -ti dnsutils -- cat /etc/resolv.conf
nameserver 169.254.25.10
search default.svc.acc-jri-ot-k8s.privatehybridcloud.eu svc.acc-jri-ot-k8s.privatehybridcloud.eu acc-jri-ot-k8s.privatehybridcloud.eu
options ndots:5
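The resolv.conf above also suggests the failure mode: with options ndots:5, a bare name like acc-jri-ot-k8s003 is expanded against each search domain before being tried literally, and cluster DNS has no node records under any of them. A rough illustration of the candidate names the resolver attempts (expand_search is a hypothetical helper, using the search line shown above):

```shell
# Illustrative only: list the FQDNs glibc tries for a bare name (fewer dots
# than ndots:5), given a resolv.conf search list. Each candidate is queried
# against the nameserver; here none of them has a record for the node.
expand_search() {
  name="$1"; shift
  for dom in "$@"; do
    echo "$name.$dom"
  done
  echo "$name."   # the literal name is tried last
}
expand_search acc-jri-ot-k8s003 \
  default.svc.acc-jri-ot-k8s.privatehybridcloud.eu \
  svc.acc-jri-ot-k8s.privatehybridcloud.eu \
  acc-jri-ot-k8s.privatehybridcloud.eu
```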
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
dnsutils 1/1 Running 0 43s
sshjump 1/1 Running 0 8m58s
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-76798d84dd-4zkw8 1/1 Running 0 27d
coredns-76798d84dd-n5fv2 1/1 Running 0 27d
$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
.:53
[INFO] plugin/reload: Running configuration MD5 = da2db14ea70fc229c755c60b769d8c4e
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
.:53
[INFO] plugin/reload: Running configuration MD5 = da2db14ea70fc229c755c60b769d8c4e
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
$ kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 27d
dashboard-metrics-scraper ClusterIP 10.233.63.132 <none> 8000/TCP 27d
kubernetes-dashboard ClusterIP 10.233.28.29 <none> 443/TCP 27d
metrics-server ClusterIP 10.233.11.114 <none> 443/TCP 27d
@Jeroen0494 Very sorry for the late response. The local DNS configuration issue is out of scope for this plugin, so please use the IP address instead of the name to SSH into your node. Anyway, thank you so much for posting the issue.
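As a workaround, the name-to-IP lookup can be wrapped so a node name is still usable from the command line. A sketch under that assumption (node_ip and ssh_jump_by_name are hypothetical helpers, not part of the plugin, and untested against a live cluster):

```shell
# Hypothetical wrapper: resolve a node name to its InternalIP via the
# Kubernetes API, then hand the IP to ssh-jump so pod-side DNS is bypassed.
node_ip() {
  kubectl get node "$1" -o \
    jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
}
ssh_jump_by_name() {
  ip="$(node_ip "$1")" || return 1
  if [ -z "$ip" ]; then
    echo "no InternalIP found for node $1" >&2
    return 1
  fi
  kubectl ssh-jump --skip-agent "$ip"
}
# usage: ssh_jump_by_name acc-jri-ot-k8s003
```

This keeps name resolution on the workstation side (via the API server) instead of relying on DNS inside the sshjump pod.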
Hi,
For Kubernetes on Ubuntu 18.04 hosts I get the following error:
Cluster info:
Workstation info:
Adding the hostnames to my /etc/hosts file has no effect.
Let me know if you need more info.
Regards, Jeroen Rijken