kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Proxy for easier access to NodePort services #38

Closed: philips closed this issue 4 years ago

philips commented 8 years ago

One of the major hurdles people have using k8s as a development platform is having easy access to DNS and uncomplicated access to "localhost" ports.

This might be something we can tackle in this bug, I discussed the idea here: https://github.com/coreos/coreos-kubernetes/issues/444

Option 1 - Fancy Proxy

This is an idea to make working with the single-node cluster easier. The basic idea would be to have something like kubectl port-forward that forwards every nodePort to localhost based on the original targetPort. So, for example:

# User does something like this
$ kubectl run --image quay.io/philips/golang-outyet outyet
$ kubectl expose deployment outyet --target-port=8080 --port=8080 --type=NodePort

# This is the part that needs automating: 
$ socat tcp-listen:8080,reuseaddr,fork tcp:172.17.4.99:$(kubectl get service outyet -o template --template="{{range.spec.ports}}{{.nodePort}}{{end}}")

This would be a huge boon to people trying to use kubernetes as a development workflow for running caches, services, etc and developing against those APIs.

Pseudocode event loop:

// Track a cancel func per NodePort so each proxy can be torn down later.
cancels := map[int32]context.CancelFunc{}
for e := range kubernetesServiceEvents() {
  switch e.Type {
  case newNodePort:
    ctx, cancel := context.WithCancel(context.Background())
    cancels[e.NodePort] = cancel
    // Forward localhost (on the original targetPort) to the node's NodePort.
    go proxy(ctx, e.NodePort, e.NodeIP, "localhost", e.TargetPort)
  case dyingNodePort:
    if cancel, ok := cancels[e.NodePort]; ok {
      cancel()
      delete(cancels, e.NodePort)
    }
  }
}
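
A rough one-shot shell approximation of that loop, as a sketch only: it mirrors each NodePort to the same local port rather than the original targetPort, assumes socat is installed, and doesn't watch for changes (re-run it after services change):

#!/usr/bin/env bash
NODE_IP=$(minikube ip)
# List every NodePort currently defined on any service, then mirror each one locally.
for port in $(kubectl get svc --all-namespaces \
    -o jsonpath='{range .items[*].spec.ports[*]}{.nodePort}{"\n"}{end}' | sort -u); do
  socat "tcp-listen:${port},reuseaddr,fork" "tcp:${NODE_IP}:${port}" &
done
wait   # keep the proxies in the foreground; Ctrl-C tears them all down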

Option 2 - VPN

Having a simple VPN setup would allow a user to get access to cluster DNS and cluster networking. The downside is that tools like OpenVPN are a major hurdle to set up. Does anyone know of a simple VPN in Go that works cross-platform?

ghost commented 8 years ago

Networking is really not my area, but it really would be great to be able to run a container I'm working on in my development environment in the network context of the k8s cluster.

philips commented 8 years ago

I could move this to a proposal on the main kubernetes repo as well. Just a random idea and there isn't really a chat/mailing list for this repo. :)

ghost commented 8 years ago

Yeah, working in the context of my Google Container Engine cluster would be fantastic too. I'd definitely support that.

keithballdotnet commented 8 years ago

+1 👯

vishh commented 8 years ago

+1. I'd say we should make cluster-local services also accessible from the host; essentially hide from the end user the fact that a VM is running. It is, after all, a local cluster.

ae6rt commented 8 years ago

Where I work, we're working to make pod IPs routable, as something like Project Calico affords. One consequence of this is that we have removed NodePort bits from our Service definitions, and I'd rather not have to reintroduce those bits because I don't want such differences between "development" Service descriptions and "production" Service descriptions.

For this single-node k8s cluster manifested by minikube, is there a way to make the Pod IPs accessible from the developer workstation?

ram-argus commented 8 years ago

for Option 2: "GoVPN is a simple free-software virtual private network daemon, aimed to be reviewable, secure, DPI/censorship-resistant, and written in Go."

yuvipanda commented 8 years ago

I've futzed around with a solution for this, using VirtualBox's host-only networking plus a static route on my host machine. Here's what I had to do:

  1. minikube start
  2. minikube stop
  3. open vbox manager
  4. find the 'minikube' VM
  5. click settings on the minikube VM
  6. under networks, pick 'adapter 3' (2 adapters will already be used)
  7. Select 'host only', open advanced, check 'cable connected'
  8. hit ok
  9. minikube start
  10. sudo ip route delete 172.17.0.0/16 (make sure you don't have docker running on your host)
  11. sudo ip route add 172.17.0.0/16 via $(minikube ip)

I have written it up using VBoxManage instead of the GUI at https://github.com/yuvipanda/jupyterhub-kubernetes-spawner/blob/master/SETUP.md, which is the project I'm working on :)
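
Roughly, the VBoxManage equivalent of those GUI steps looks like this (the host-only interface name vboxnet0 is an assumption; check `VBoxManage list hostonlyifs` for the actual name):

minikube stop
VBoxManage hostonlyif create   # prints the created interface name, e.g. vboxnet0
VBoxManage modifyvm minikube --nic3 hostonly --hostonlyadapter3 vboxnet0 --cableconnected3 on
minikube start
sudo ip route delete 172.17.0.0/16 2>/dev/null || true   # make sure Docker isn't using this range
sudo ip route add 172.17.0.0/16 via "$(minikube ip)"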

Haven't tested on OS X yet.

pieterlange commented 7 years ago

Just stumbled on this issue and wanted to mention that I made my own solution last year. It covers some of the use cases described in this ticket.

it really would be great to be able to run a container I'm working on in my development environment in the network context of the k8s cluster.

This was the original premise: fast local development within a Kubernetes context. There's also an optional feature to route Service traffic back to VPN clients. I'm also starting to use this as a point-to-point link between the Kubernetes platform and some legacy applications that I can't (yet?) move. So far so good.

I based it on OpenVPN, as there's broad platform support and community knowledge on the subject (which makes it easier to adapt to specific needs). Take a look: https://github.com/pieterlange/kube-openvpn

antoineco commented 7 years ago

Since minikube is meant to run in a local environment, on a single VM, I like the approach suggested by @yuvipanda (static local route) much better than the VPN idea: adding a static route is only acceptable in a local environment, but that is the main purpose of minikube anyway.

And yes it does work on macOS as well. cf. http://stackoverflow.com/a/42658974/4716370
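
Condensed, the route half of that answer is just the following (the 10.0.0.0/24 service CIDR is an assumption for older minikube versions; the DNS half needs an /etc/resolver entry as shown later in this thread):

sudo route -n add 10.0.0.0/24 "$(minikube ip)"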

andyp1per commented 7 years ago

+1

ghost commented 7 years ago

Could this be solved by running flanneld on both the developer host and the minikube VM? I am trying to validate this option. Has anybody tried it?

antoineco commented 7 years ago

Flannel would give you a route to your pod network, but it comes with caveats.

itamarst commented 7 years ago

Not built-in, but Telepresence will let you get VPN-like access to your minikube (or any Kubernetes cluster).
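
For a quick sense of it, a minimal sketch assuming the v1-era Telepresence CLI (flags changed in later major versions):

# Open a shell whose outbound traffic is routed through the cluster.
telepresence --run-shell
# Inside that shell, cluster DNS names and ClusterIPs resolve as if local
# (my-service is a hypothetical service name):
curl http://my-service.default.svc.cluster.local:8080/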

elsonrodriguez commented 7 years ago

This is more closely related to #384 and #950; however, those were closed, and some people here might find this handy.

https://gist.github.com/elsonrodriguez/add59648d097314d2aac9b3c8931278b

Basically I've made a one-liner to add the service ClusterIP range as a route on OS X, and also a small custom controller to (crudely) enable LoadBalancer support for minikube.

If there's any interest I can polish up the controller/docs.

tl;dr

#etcd behavior changed
#sudo route -n add -net $(minikube ssh  --  sudo docker run -i --rm --net=host quay.io/coreos/etcd:v3.2 etcdctl  get /registry/ranges/serviceips  | jq -r '.range') $(minikube ip)
sudo route -n add -net $(cat ~/.minikube/profiles/minikube/config.json | jq -r ".KubernetesConfig.ServiceCIDR") $(minikube ip)

kubectl run nginx --image=nginx --replicas=1
kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer

kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system

nginx_external_ip=$(kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl $nginx_external_ip

cc @yuvipanda @tony-kerz @waprin @r2d4 @whitecolor

EDIT: updated to remove etcd from the determination of the service IP range.

metametadata commented 7 years ago

I tried Telepresence. It does the trick, actually. But somehow it made my unit test suite run much slower (tests run on macOS and connect to resources in minikube via ClusterIP services). I suspect there's some slowdown when talking over the VPN to PostgreSQL running inside minikube (which uses the VirtualBox driver).

I didn't investigate further and switched to the route add/dnsmasq method from http://stackoverflow.com/a/42658974/4716370. It seems to work so far, and the test suite is fast again. But now I also occasionally hit https://github.com/kubernetes/minikube/issues/1710; not yet sure if there's a correlation, though.

ursuad commented 7 years ago

I'm just putting my own setup here in case someone finds it useful. This will make your minikube pod and service IPs routable from your host. If you only want service IPs, you can edit it accordingly.

Environment

Steps

1) Stop the minikube VM in case it's started: `$ minikube stop`

2) Go to the VirtualBox GUI (steps 2 and 3 are needed because of #1710)

3) Apply the workaround described in #1710 to the minikube VM's settings

4) Create a `setup_minikube_routing.sh` script along these lines:

#!/usr/bin/env bash
MINIKUBEIP=$(minikube ip)
echo "Minikube ip is $MINIKUBEIP"

# Clean up the routes
sudo route -n delete 172.17.0.0/16
sudo route -n delete 10.0.0.0/24

# Add the routes
sudo route -n add 172.17.0.0/16 $MINIKUBEIP
sudo route -n add 10.0.0.0/24 $MINIKUBEIP

# Use the cluster DNS server for *.cluster.local names
sudo mkdir -p /etc/resolver
cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.0.0.10
domain cluster.local
search_order 1
EOF


5) Stop Docker on your machine, as the IP ranges you're adding routes for might overlap with the Docker ones.

6) Start minikube and wait for it to start
`$ minikube start`

7) Run the script to set up the routes
`$ ./setup_minikube_routing.sh`

**Test if everything works**
1) Create a `my-nginx.yaml` file for an nginx deployment and a service

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

2) Submit that to k8s: `$ kubectl create -f my-nginx.yaml`

3) Wait for everything to be running, then test access to the service, e.g. as sketched below:
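
A hypothetical check, assuming the nginx example above and the resolver configuration from the routing script:

curl http://my-nginx.default.svc.cluster.local
# or hit the ClusterIP directly:
curl "$(kubectl get svc my-nginx -o jsonpath='{.spec.clusterIP}')"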

Caveats and improvements

metametadata commented 6 years ago

@ursuad have you managed to make it work with v0.24.1? I'm trying to get it working, and so far it looks like 10.96.0.0/12 should be used instead of 10.0.0.0/24, and the new DNS server IP is 10.96.0.10.

metametadata commented 6 years ago

Answering my own question: here's a variation of the above solution, adapted for Minikube v0.24.1 (tested on macOS 10.13.2 with the kubeadm bootstrapper and VirtualBox driver). Note that I don't route the 172.17.0.0/16 range of IPs because I didn't need it.

#!/usr/bin/env bash
# Configures the host network so that it looks like MacOS is inside the minikube cluster:
# it will have access to cluster IPs and DNS (see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/).
#
# NOTE: some applications (dig, nslookup, host, etc) will not query anything
# but the FIRST resolver and therefore will not respond with a useful answer for minikube host names.
#
# Inspired by:
# https://github.com/kubernetes/minikube/issues/38#issuecomment-339015592
#
# To test that script worked:
#  * Make sure minikube dashboard works:
#     minikube dashboard
#  * Find out cluster IP of dashboard:
#     kubectl get service kubernetes-dashboard --namespace=kube-system -o yaml | grep clusterIP
#  * Assert that some HTML can be fetched via this IP:
#     curl 10.xxx.xxx.xxx
#  * Assert that DNS resolution works (you may have to wait for MacOS to apply the new settings or reboot before it works):
#     curl kubernetes-dashboard.kube-system.svc.cluster.local
#     OR
#     dns-sd -G v4 kubernetes-dashboard.kube-system.svc.cluster.local

set -o errexit
set -o pipefail
set -o nounset
set -o xtrace
cd "${BASH_SOURCE%/*}"

readonly MINIKUBE_IP=$(minikube ip)
readonly MINIKUBE_IP_RANGE="10.96.0.0/12"

# Add access to internal cluster network.
sudo route -n delete "${MINIKUBE_IP_RANGE}"
sudo route -n add "${MINIKUBE_IP_RANGE}" "${MINIKUBE_IP}"

# Add cluster's DNS server.
sudo mkdir -p /etc/resolver
cat << EOF | sudo tee /etc/resolver/cluster.local
nameserver 10.96.0.10
domain cluster.local
EOF

burdiyan commented 6 years ago

Does anyone know why the solutions proposed by @metametadata and @ursuad don't work with the Hyperkit driver?

burdiyan commented 6 years ago

I opened an issue on Hyperkit repo: https://github.com/moby/hyperkit/issues/178 But it seems like this may be something specific about how Minikube uses Hyperkit.

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

burdiyan commented 6 years ago

/remove-lifecycle stale

burdiyan commented 6 years ago

Just found a nice little VPN tool written in Go, and it seems to work fine with minikube.

https://github.com/twitchyliquid64/subnet/

I followed the installation process in the repo, ran the server part on the minikube machine (I'm using the VirtualBox driver on macOS), then ran a client on my Mac, specified minikube's pod CIDR as the routed network, and then I was able to access a pod by IP from my browser.

It seems like minikube could adapt some of this code to implement some sort of add-on. It could also be useful to run this VPN without any certificates, since we are on a local network.

I really believe that the day people can access minikube pods and services by name, as if they were inside the same network, will revolutionize the developer experience with Kubernetes!

Is there someone interested in digging into this more deeply? I would love to, but I'm afraid that alone I won't manage, for lack of time.

Maybe it's worth implementing as a separate tool generic to any Kubernetes installation, similar to https://www.telepresence.io (which is veeeeeery slow).

dlorenc commented 6 years ago

Nice! Thanks for the pointer to subnet. We're definitely going to look at something like this, and subnet looks like a great option.

omribahumi commented 6 years ago

I was playing with minikube this weekend and needed this feature.

I found a nice trick to help me with the NodePort hell: create a proxy (SOCKS/HTTP) and access the cluster through it.

Here is my setup. I do not recommend it for production deployments:

  1. HTTP proxy (chosen because of PAC support): kubectl run --image=dannydirect/tinyproxy --port=8888 http-proxy -- ANY
  2. Kubectl port forwarding: kubectl port-forward deployments/http-proxy 8888
  3. Set this PAC config on your browser (Can be applied on Chrome using Proxy helper chrome extension):

    function FindProxyForURL(url, host)
    {
        if (shExpMatch(host, "*.svc.cluster.local"))
        {
            return "PROXY 127.0.0.1:8888";
        }

        return "DIRECT";
    }
  4. Access Cluster services using DNS: http://my-awesome-service.default.svc.cluster.local:8080/

I wanted PAC support to limit this to just the .svc.cluster.local suffix. If you don't care about all your traffic going through Kubernetes, skip step 3 and configure localhost:8888 as your HTTP proxy; all your browsing traffic will then go through the Kubernetes cluster.

You can also use this with curl: curl -x localhost:8888 http://my-awesome-service.default.svc.cluster.local/

Hope someone finds this useful :)

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

metametadata commented 6 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

metametadata commented 5 years ago

/remove-lifecycle stale

tstromberg commented 5 years ago

Question: does the new minikube tunnel command resolve this issue? If not, do you mind describing your use case a bit more? Thanks!

metametadata commented 5 years ago

According to the docs, minikube tunnel "creates a route to services deployed with type LoadBalancer". But in my case there are no LoadBalancer services, only NodePort ones.

In the end, I want to access internal Kubernetes DNS hostnames from the host OS, so that, say, in Safari I could navigate to the Minio admin at http://xxx.yyy.svc.cluster.local:zzzz/minio. It would also be handy to have access to the same IPs as in the cluster (i.e. the ranges 10.96.0.0/12 and maybe 172.17.0.0/16), but that's not something I personally need anymore.

I posted a script (https://github.com/kubernetes/minikube/issues/38#issuecomment-351516904) that I used for my case (but note that it doesn't work with the Hyperkit driver).

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

burdiyan commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

metametadata commented 5 years ago

/remove-lifecycle rotten

burdiyan commented 5 years ago

Just FYI: another VPN written in Go: https://github.com/xitongsys/pangolin

I still dream about the day when I could run kubectl connect or something like that, and access the cluster from a local machine as if I were on the same network :)

brainfull commented 5 years ago

Here is how I do it, using the port proxy rules available in Windows to establish an SSH connection to a NodePort service. My setup is Hyper-V on Windows 10 Pro. I hope it gives you some food for thought, at minimum.

  1. Use an internal VM switch. You can set it up easily with the following PowerShell script, which takes care of creating the VM switch if it doesn't exist and establishes ICS between your internet connection and the internal VM switch: Set-ICS.ps1.txt. Open PowerShell and call the script; in the following example it creates a VM switch named 'minikube': ./Set-ICS.ps1 -VMSwitch minikube Enabled

  2. Create your minikube VM. Open PowerShell and run the following command; in this example it creates a VM named 'minikube' using the VM switch named 'minikube': minikube start --vm-driver hyperv --hyperv-virtual-switch minikube

  3. From that point on, your VM 'minikube' is reachable inside your computer under the hostname (VM name).mshome.net; if you followed the previous instructions, that is 'minikube.mshome.net'. The ICS DHCP server takes care of defining that hostname under C:\Windows\System32\drivers\etc\hosts.ics

  4. Expose a service on a predefined NodePort. Attached is an example YAML that exposes port 22 of a container on NodePort 30022 (a hedged sketch of such a manifest follows after this list); if you followed the previous instructions, that is 'minikube.mshome.net:30022'. In my case it is an OpenSSH server listening on port 22, so it allows me to SSH into my container. dev-service-bekno-worker-debug.yaml.txt

  5. Then you can open the port on your laptop, which has its own external IP address and hostname on your network. One way to do it in PowerShell is the following: netsh interface portproxy add v4tov4 listenaddress=127.0.0.1 listenport=2222 connectaddress=minikube.mshome.net connectport=30022

  6. F**k yeah! In my case I can now open an SSH connection on port 2222 from another computer, which opens an SSH session in a container within minikube!!! You may have to change your firewall rules to allow incoming connections on port 2222. If port 2222 or 30022 is unavailable because another service is running on it, the previous steps may fail, in which case change the ports.
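
Since the attachment above may not be viewable here, this is a hypothetical sketch of such a Service manifest, applied from the shell (the name and selector are assumptions, not the original file):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ssh-debug
spec:
  type: NodePort
  selector:
    app: ssh-debug
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    nodePort: 30022
EOF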

I hope it gets you to a working solution for your setup. There is definitely a lack of support for minikube on Windows, but I am committed to using it since it allows for greater productivity overall.

Have a look at issue #5072 if you wonder why I use an internal VM switch.

burdiyan commented 5 years ago

Slack just open-sourced their global overlay network solution: https://slack.engineering/introducing-nebula-the-open-source-global-overlay-network-from-slack-884110a5579

It's a mix of IPsec and tinc ideas, but simpler, faster, and written in Go. This may be useful for creating an overlay network between the host and minikube pods.

tstromberg commented 4 years ago

@burdiyan - interesting news!

If someone wants to make this work, let me know. Help wanted!

burdiyan commented 4 years ago

I hadn't really used minikube for a while, but checked it out again recently. And I discovered the minikube tunnel command, which I think solves exactly the issue discussed here. I'm using it on macOS with the Hyperkit driver and it all works perfectly. I ditched Docker for Mac and just use the Docker daemon inside the minikube VM.
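
For reference, minimal usage looks like this (the tunnel process stays in the foreground and may prompt for sudo to create routes; exact behavior varies by minikube version):

# In one terminal:
minikube tunnel
# In another, LoadBalancer services should now show a reachable EXTERNAL-IP:
kubectl get svc --all-namespaces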

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/minikube/issues/38#issuecomment-629175403):

>Rotten issues close after 30d of inactivity.
>Reopen the issue with `/reopen`.
>Mark the issue as fresh with `/remove-lifecycle rotten`.
>
>Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
>/close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.