Networking is really not my area, but it really would be great to be able to run a container I'm working on in my development environment in the network context of the k8s cluster.
I could move this to a proposal on the main kubernetes repo as well. Just a random idea and there isn't really a chat/mailing list for this repo. :)
Yeah, working in the context of my Google Container Engine cluster would be fantastic too. I'd definitely support that.
+1 👯
+1. I'd say we should make cluster-local services also accessible from the host; essentially hide the fact that a VM is running for the end user. It is, after all, a local cluster.
Where I work, we're working to make pod IPs routable, as something like Project Calico affords. One consequence of this is that we have removed NodePort bits from our Service definitions, and I'd rather not have to reintroduce those bits because I don't want such differences between "development" Service descriptions and "production" Service descriptions.
For this single-node k8s cluster manifested by minikube, is there a way to make the Pod IPs accessible from the developer workstation?
for Option 2: "GoVPN is simple free software virtual private network daemon, aimed to be reviewable, secure, DPI/censorship-resistant, written on Go."
I've futzed around with a solution for doing this right now, using VirtualBox's host-only networking + adding a static route on my host machine. Here's what I had to do:
I have written it up using VBoxManage instead of the GUI at https://github.com/yuvipanda/jupyterhub-kubernetes-spawner/blob/master/SETUP.md, which is the project I'm working on :)
Haven't tested on OS X yet.
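A minimal sketch of the static-route part on macOS, assuming the pod network is the default docker bridge range 172.17.0.0/16 (the write-up linked above covers the VirtualBox host-only setup itself):

```sh
# Assumes the minikube VM is reachable from the host, e.g. via a VirtualBox
# host-only adapter, and that pods live in the default 172.17.0.0/16 range.
MINIKUBE_IP=$(minikube ip)

# macOS: route the pod CIDR through the VM
sudo route -n add -net 172.17.0.0/16 "$MINIKUBE_IP"

# Linux equivalent:
# sudo ip route add 172.17.0.0/16 via "$MINIKUBE_IP"
```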
Just stumbled on this issue and wanted to mention that I made my own solution last year. It covers some of the use cases described in the ticket.
it really would be great to be able to run a container I'm working on in my development environment in the network context of the k8s cluster.
This was the original pretext - fast local development within a kubernetes context. There's also an optional feature to route Service traffic back to VPN clients. I'm also starting to use this as a PtP link between the kubernetes platform and some legacy applications that I can't (yet?) move. So far so good.
I did base it on openvpn as there's broad platform support and community knowledge on the subject (easier to adapt to specific needs). Take a look: https://github.com/pieterlange/kube-openvpn
Since minikube is meant to run in a local environment, on a single VM, I like the approach suggested by @yuvipanda (static local route) much better than the VPN idea for the following reasons:
This is only acceptable in a local environment, which is the main purpose of minikube anyway.
And yes it does work on macOS as well. cf. http://stackoverflow.com/a/42658974/4716370
+1
Could this be solved by running flanneld on both the developer host and the minikube VM? I am trying to validate this option. Has anybody tried it?
Flannel would give you a route to your pod network, but service IPs are a separate range (--service-cluster-ip-range) and there is no iptables on macOS, so I think kube-proxy is not an option.

Not built-in, but Telepresence will let you get VPN-like access to your minikube (or any Kubernetes cluster).
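A minimal sketch, assuming the v1 Telepresence CLI (double-check the flags against telepresence --help for your version; the service name below is just an example):

```sh
# Open a local shell whose network and DNS traffic is proxied into the cluster
telepresence --run-shell

# Inside that shell, cluster service names resolve as if you were in-cluster
curl http://my-nginx.default.svc.cluster.local
```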
This is more related to #384 and #950, however that was closed, and some people here might find this handy.
https://gist.github.com/elsonrodriguez/add59648d097314d2aac9b3c8931278b
Basically I've made a one-liner to add the ClusterIP as a route on OSX, and also made a small custom controller to enable LoadBalancer support for minikube (crudely).
If there's any interest I can polish up the controller/docs.
tl;dr
#etcd behavior changed
#sudo route -n add -net $(minikube ssh -- sudo docker run -i --rm --net=host quay.io/coreos/etcd:v3.2 etcdctl get /registry/ranges/serviceips | jq -r '.range') $(minikube ip)
sudo route -n add -net $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") $(minikube ip)
kubectl run nginx --image=nginx --replicas=1
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
nginx_external_ip=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl $nginx_external_ip
cc @yuvipanda @tony-kerz @waprin @r2d4 @whitecolor
EDIT: updated to remove etcd from the determination of the service IP range.
I tried Telepresence. It does the trick actually. But somehow it made running my unit test suite much slower (tests run from MacOS and connect to resources in Minikube via clusterIP services). I suspect there's some slowdown when talking over VPN to PostgreSQL running inside the minikube (which uses VirtualBox driver).
I didn't investigate further and switched to the route add/dnsmasq method from http://stackoverflow.com/a/42658974/4716370. It seems to work so far and the test suite is fast again. But now I also occasionally hit https://github.com/kubernetes/minikube/issues/1710; not yet sure if there's a correlation though.
I'm just putting my own setup here in case someone finds it useful. This will make your minikube pod and service IPs routable from your host. If you only want service IPs, you can edit this accordingly.
Environment
macOS (that's why the script uses sudo route -n; on Linux you would use sudo ip route in the script).
Steps
1) Stop the minikube VM in case it's started
$ minikube stop
2) Go to the VirtualBox GUI (steps 2 and 3 are needed because of #1710). For Adapter 1 and Adapter 2, select Advanced and change the adapter type to something other than Intel. I have PCnet-FAST III (Am79C973).
3) Open the minikube config file, which should be here: ~/.minikube/machines/minikube/config.json, and change the values for the fields NatNicType and HostOnlyNicType to match the ones that you set in VirtualBox in the previous step. In this case, I have Am79C973.
4) Put this script somewhere, name it setup_minikube_routing.sh, and make it executable:
#!/bin/bash
set -e
MINIKUBEIP=$(minikube ip)
echo "Minikube ip is $MINIKUBEIP"
# Route the pod and service CIDRs via the VM; "|| true" keeps set -e from aborting if a route is absent.
sudo route -n delete 172.17.0.0/16 || true
sudo route -n delete 10.0.0.0/24 || true
sudo route -n add 172.17.0.0/16 $MINIKUBEIP
sudo route -n add 10.0.0.0/24 $MINIKUBEIP
# Send *.cluster.local DNS queries to the cluster DNS service (kube-dns at 10.0.0.10)
sudo mkdir -p /etc/resolver
cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.0.0.10
domain cluster.local
search_order 1
EOF
5) Stop Docker on your machine, as the IP ranges that you're adding routes for might overlap with the Docker ones.
6) Start minikube and wait for it to start
`$ minikube start`
7) Run the script to set up the routes
`$ ./setup_minikube_routing.sh`
**Test if everything works**
1) Create a `my-nginx.yaml` file for an nginx deployment and a service
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
2) Submit that to k8s
$ kubectl create -f my-nginx.yaml
3) Wait for everything to be running and:
curl http://my-nginx.default.svc.cluster.local:80
where default is the name of the namespace you deployed nginx in.
$ kubectl get po -o wide -l app=nginx
NAME READY STATUS RESTARTS AGE IP NODE
my-nginx-2302942331-h9h85 1/1 Running 0 1h 172.17.0.6 minikube
$ ping 172.17.0.6
PING 172.17.0.6 (172.17.0.6): 56 data bytes
64 bytes from 172.17.0.6: icmp_seq=0 ttl=63 time=0.284 ms
64 bytes from 172.17.0.6: icmp_seq=1 ttl=63 time=0.399 ms
--- 172.17.0.6 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.284/0.342/0.399/0.057 ms
Caveats and improvements
@ursuad have you managed to make it work with v0.24.1? I'm trying to make it work and so far it looks like 10.96.0.0/12 should be used instead of 10.0.0.0/24, and the new DNS server IP is 10.96.0.10.
Answering my own question, here's a variation of the above solution, adapted for Minikube v0.24.1 (tested on macOS 10.13.2 with the kubeadm bootstrapper and VirtualBox driver). Note that I don't route the 172.17.0.0/16 range of IPs because I didn't need it.
#!/usr/bin/env bash
# Configures the host network so that it looks like MacOS is inside the minikube cluster:
# it will have access to cluster IPs and DNS (see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/).
#
# NOTE: some applications (dig, nslookup, host, etc) will not query anything
# but the FIRST resolver and therefore will not respond with a useful answer for minikube host names.
#
# Inspired by:
# https://github.com/kubernetes/minikube/issues/38#issuecomment-339015592
#
# To test that script worked:
# * Make sure minikube dashboard works:
# minikube dashboard
# * Find out cluster IP of dashboard:
# kubectl get service kubernetes-dashboard --namespace=kube-system -o yaml | grep clusterIP
# * Assert that some HTML can be fetched via this IP:
# curl 10.xxx.xxx.xxx
# * Assert that DNS resolution works (you may have to wait for MacOS to apply the new settings or reboot before it works):
# curl kubernetes-dashboard.kube-system.svc.cluster.local
# OR
# dns-sd -G v4 kubernetes-dashboard.kube-system.svc.cluster.local
set -o errexit
set -o pipefail
set -o nounset
set -o xtrace
cd "${BASH_SOURCE%/*}"
readonly MINIKUBE_IP=$(minikube ip)
readonly MINIKUBE_IP_RANGE="10.96.0.0/12"
# Add access to internal cluster network.
sudo route -n delete "${MINIKUBE_IP_RANGE}"
sudo route -n add "${MINIKUBE_IP_RANGE}" "${MINIKUBE_IP}"
# Add cluster's DNS server.
sudo mkdir -p /etc/resolver
cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.96.0.10
domain cluster.local
EOF
Does anyone know why the proposed solutions by @metametadata and @ursuad are not working with Hyperkit driver?
I opened an issue on Hyperkit repo: https://github.com/moby/hyperkit/issues/178 But it seems like this may be something specific about how Minikube uses Hyperkit.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Just found a nice little VPN service written in Go, and it seems to work fine with minikube.
https://github.com/twitchyliquid64/subnet/
So I followed the installation process in the repo, ran the server part on the minikube machine (I'm using the VirtualBox driver on macOS), then ran a client on my Mac. I specified minikube's pod CIDR as the network, and then I was able to access the pod by IP from my browser.
It seems like minikube could adapt some of this code to implement some sort of add-on. It could also be useful to run this VPN without any certificates, since we are on a local network.
I really believe that the day people can access minikube pods and services by name, as if they were inside the same network, will revolutionize the developer experience with Kubernetes!
Is anyone interested in digging into this more deeply? I would love to, but I'm afraid I won't be able to do it alone due to lack of time.
Maybe it's worth implementing it as a separate tool, generic to any Kubernetes installation, similar to https://www.telepresence.io (which is veeeeeery slow).
Nice! Thanks for the pointer to subnet. We're definitely going to look at something like this, and subnet looks like a great option.
I was playing with minikube this weekend and needed this feature.
I found a nice trick to help me with the NodePort hell: create a proxy (SOCKS/HTTP) and access the cluster through it.
Here is my setup. I do not recommend it for production deployments:
kubectl run --image=dannydirect/tinyproxy --port=8888 http-proxy -- ANY
kubectl port-forward deployments/http-proxy 8888
Set this PAC config in your browser (can be applied in Chrome using the Proxy Helper extension):
function FindProxyForURL(url, host)
{
if (shExpMatch(host, "*.svc.cluster.local"))
{
return "PROXY 127.0.0.1:8888";
}
return "DIRECT";
}
I've wanted PAC support for limiting this just to the .svc.cluster.local suffix.
If you don't care about all your traffic going through Kubernetes, skip step 3 and configure localhost:8888 as your HTTP proxy. All your browsing traffic will go through the Kubernetes cluster.
You can also use this with curl: curl -x localhost:8888 http://my-awesome-service.default.svc.cluster.local/
Hope someone finds this useful :)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Question: does the new minikube tunnel command resolve this issue? If not, do you mind describing your use case a bit more? Thanks!
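For reference, the workflow that command targets looks roughly like this (a sketch; the nginx deployment and service names are just examples):

```sh
# Expose an example deployment as a LoadBalancer service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# In a separate terminal, keep the tunnel running (it may ask for sudo to add routes)
minikube tunnel

# Once the tunnel is up, the service gets an external IP reachable from the host
kubectl get svc nginx
curl "http://$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
```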
According to the docs, minikube tunnel "creates a route to services deployed with type LoadBalancer". But in my case there are no LoadBalancer services, only NodePort ones.
In the end I want to access internal Kubernetes DNS hostnames from the host OS, so that, say, in the Safari browser I could navigate to the Minio admin at http://xxx.yyy.svc.cluster.local:zzzz/minio. It would also be handy to have access to the same IPs as in the cluster (i.e. the 10.96.0.0/12 and maybe 172.17.0.0/16 ranges), but it's not something I personally need anymore.
I posted a script (https://github.com/kubernetes/minikube/issues/38#issuecomment-351516904) I used for my case (but note that it doesn't work with Hyperkit driver).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Just FYI: another VPN written in Go: https://github.com/xitongsys/pangolin
I still dream about the day when I could run kubectl connect or something like that, and access the cluster from a local machine as if I were on the same network :)
Here is how I do it, using port proxy rules available in Windows to establish an SSH connection to a NodePort service. My setup is Hyper-V on Windows 10 Pro. I hope it gives you some food for thought, at a minimum.
Use an internal VM switch. You can set it up easily with the following PowerShell script. It will take care of creating the VM switch if it doesn't exist and establishing ICS between your internet connection and the internal VM switch.
Set-ICS.ps1.txt
Open Powershell and call the script. In the following example it creates a VM Switch named 'minikube':
./Set-ICS.ps1 -VMSwitch minikube Enabled
Create your minikube VM. Open Powershell and call the following command. In the following example it creates a VM named 'minikube' using the VM switch named 'minikube':
minikube start --vm-driver hyperv --hyperv-virtual-switch minikube
From that point on, your VM 'minikube' is available internally on your computer under the hostname (VM name).mshome.net; if you followed the previous instructions, that is 'minikube.mshome.net'. It is the ICS DHCP server that takes care of defining that hostname under C:\Windows\System32\drivers\etc\hosts.ics.
Expose a service on a predefined NodePort. Here is an example of a YAML that exposes port 22 of a container on NodePort 30022; if you followed the previous instructions, that is 'minikube.mshome.net:30022'. In my case this is OpenSSH listening on port 22, so it allows me to SSH into my container. dev-service-bekno-worker-debug.yaml.txt
Then you can open the port on your laptop, which has its own external IP address and external hostname on your network. One way to do it in PowerShell is the following:
netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=2222 connectaddress=minikube.mshome.net connectport=30022
F**k yeah! In my case I can open an SSH connection on port 2222 from another computer, and that opens up an SSH connection to a container within minikube!!! You may have to change your firewall rules to allow incoming connections on port 2222. If port 2222 or 30022 is not available because another service is running on it, the previous steps may fail, in which case you need to change the ports.
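A quick way to sanity-check the rule and the end-to-end path (the user and hostname below are placeholders for your own values):

```
# List the active portproxy rules to confirm the one added above
netsh interface portproxy show v4tov4

# From another computer, SSH to the laptop on port 2222; it lands in the container
ssh -p 2222 user@laptop-hostname
```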
I hope it gets you to a working solution for your setup. There is definitely a lack of support for minikube on Windows, but I am committed to using it since it allows for greater productivity overall.
Have a look at this issue if you wonder why I use an internal VM Switch #5072 .
Slack just open sourced their global overlay network solution https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579
It's a mix of IPSec and TincVPN, but simpler, faster and written in Go. This may be useful for creating an overlay network between the host and minikube pods.
@burdiyan - interesting news!
If someone wants to make this work, let me know. Help wanted!
I was not really using minikube for a while, and checked it out again recently. And I discovered the existence of the minikube tunnel command, which exactly solves the issue being discussed here, I think. I'm using it on macOS with the Hyperkit driver and it all works perfectly. I ditched Docker for Mac and just use the docker daemon inside the minikube VM.
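For anyone doing the same, pointing the local docker CLI at the daemon inside the minikube VM is a one-liner (standard minikube command, shown here for convenience):

```sh
# Export DOCKER_HOST and TLS settings for the daemon inside the minikube VM
eval $(minikube docker-env)

# Subsequent docker commands in this shell now talk to the VM's daemon
docker ps

# Undo it in the current shell when done
eval $(minikube docker-env --unset)
```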
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
One of the major hurdles people have using k8s as a development platform is having easy access to DNS and uncomplicated access to "localhost" ports.
This might be something we can tackle in this bug; I discussed the idea here: https://github.com/coreos/coreos-kubernetes/issues/444
Option 1 - Fancy Proxy
This is an idea to make working with the single-node cluster easier. The basic idea would be to have something like kubectl port-forward that forwards every nodePort to localhost based on the original targetPort (a sketch follows below). This would be a huge boon to people trying to use Kubernetes as a development workflow for running caches, services, etc. and developing against those APIs.
Pseudo-code event loop:
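Something along these lines, using plain kubectl and jq (a sketch: it assumes a kubectl new enough to port-forward services, and a real implementation would watch for service changes rather than listing them once; the port mapping choice is illustrative):

```sh
#!/usr/bin/env bash
# Sketch: for every NodePort service, forward its service port to the same
# port on localhost. A real implementation would watch for changes, handle
# port conflicts, and restart forwards when pods move.
kubectl get services --all-namespaces -o json \
  | jq -r '.items[]
           | select(.spec.type == "NodePort")
           | . as $svc
           | $svc.spec.ports[]
           | "\($svc.metadata.namespace) \($svc.metadata.name) \(.port)"' \
  | {
      while read -r namespace name port; do
        kubectl --namespace "$namespace" port-forward "service/$name" "$port:$port" &
      done
      wait   # keep running while the port-forwards are alive
    }
```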
Option 2 - VPN
Having a simple VPN setup would allow a user to get access to cluster DNS and cluster networking. The downside here is that stuff like OpenVPN, etc. is a major hurdle. Does anyone know of a simple VPN in Go that works cross-platform?