kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

none: bind to the localhost interface by default #4313

Open · tstromberg opened this issue 5 years ago

tstromberg commented 5 years ago

For improved security by default.

tstromberg commented 5 years ago

Related: #2762

vnzongzna commented 5 years ago

Can I work on this issue? This would be my first contribution to k8s, so I might need some guidance.

tstromberg commented 5 years ago

@vaibhavk - Yes, I would love help on this. Much of this goes toward rectifying https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md#decreased-security, but it will also be necessary for future Docker/Podman deployments.

What I propose is that when run with the none driver, the services listed in that document should all bind to 127.0.0.1 by default. We'll also use this in the future when we support docker/podman deployments. Some further implementation details:

One approach for kubelet might be to have code that adds ExtraOptions if the driver is none. See https://github.com/kubernetes/minikube/blob/33c217eedcb93357df18cd7b2534014c46866d37/pkg/minikube/bootstrapper/kubeadm/kubeadm.go#L430
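
For a concrete sense of what those injected defaults would amount to, the end result would be roughly what a user gets today by passing loopback bind addresses explicitly. The component and flag names below are illustrative assumptions, not a confirmed list:

# illustrative only: bind kubelet and the apiserver to loopback with the none driver
$ minikube start --vm-driver=none \
    --extra-config=kubelet.address=127.0.0.1 \
    --extra-config=apiserver.bind-address=127.0.0.1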

Let me know if you would like more guidance. Feel free to reach out on Slack #minikube as well if you prefer real-time discussion. Thank you!

medyagh commented 5 years ago

@vaibhavk are you still working on this?

vnzongzna commented 5 years ago

@medyagh Yes, I'm back on it

afbjorklund commented 5 years ago

If we bind everything to localhost on the VM, how will you access it from the developer machine?

afbjorklund commented 5 years ago

This feature would be another reason why generic (#4733) is needed. The current workaround of running --vm-driver=none on a remote VM would no longer work properly after this, if it only listens on localhost. It would require you to ssh into the control plane from your developer machine, in order to reach the apiserver. It's probably a good feature for none, though. It was never supposed to expose it outside of localhost.

itsallonetome commented 5 years ago

I've been struggling with this. An alternative (which would work for me) is an option to specify whether minikube should take its external interface to be the host's IP address (as at present), localhost, or the Docker bip gateway.

At present, I can find no way to force minikube to use localhost. Using --extra-config kubelet.node-ip="127.0.0.1" breaks multi-pod systems, as the pods listen on 127.0.0.1 but other pods try to talk to them on the host's external IP address.

I get:

$ kubectl cluster-info
Kubernetes master is running at https://localhost:8443

$ minikube ip
10.74.54.212

$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:98:08:a7 brd ff:ff:ff:ff:ff:ff
    inet 10.74.54.212/23 brd 10.74.55.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe98:8a7/64 scope link
       valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:5a:ca:32:1d brd ff:ff:ff:ff:ff:ff
    inet 172.18.1.1/24 brd 172.18.1.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:5aff:feca:321d/64 scope link
       valid_lft forever preferred_lft forever

I can't use the other drivers as the VMs in which I'm working have virtualisation disabled.

vnzongzna commented 5 years ago

/assign

avisiedo commented 4 years ago

Hi... I have a situation where I would like just the opposite. I am starting the cluster with minikube, and I would like the sockets to be bound to 0.0.0.0 so that I can access it from another host. I know that minikube is not for production, and it is not my intention to use it that way, but binding to an external IP would be good for me. I am using --vm-driver=none to start the Kubernetes cluster.

So in short: how could I start minikube so that it binds to 0.0.0.0? Is there any configuration file to set up to allow this? I am just starting with Kubernetes, sorry if I don't have enough skills yet.

Thanks!

ykfq commented 4 years ago

Still need this feature when there are multiple NICs.

medyagh commented 4 years ago

@vaibhavk are you still interested in doing this?

vnzongzna commented 4 years ago

@medyagh I recently switched to Darwin and am not able to test the build; is there any workaround for this? I'll be happy to work on this again and send a PR.

medyagh commented 4 years ago

@avisiedo That opens up big security problems. Since minikube is aimed at developers running Kubernetes locally, it would be a bad default! However, I would accept any PR that adds it as an optional feature (with an extra warning to the user that they are accepting the risk).
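
As a stopgap for anyone who only needs access from another machine without changing the bind addresses, kubectl proxy is a commonly used workaround; note that it forwards requests using your kubeconfig credentials, so anyone who can reach the proxy effectively has your API access and the same security caveat applies. The port and accept-hosts pattern below are just examples:

# expose the apiserver on all interfaces via an authenticated local proxy
$ kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='.*'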

medyagh commented 4 years ago

@vnzongzna Yes, of course you can still work on this. Please let me know if you need any help on the PR review.

BartDrown commented 3 years ago

Anything new on this? I'm looking for a way to access a minikube pod in the cluster from within the local network, and I cannot do it in any way except by using socat.

I've tried to do this with iptables, but communication currently only works in one direction.

In case someone would like to use it anyway with socat, I post the solution below:

$ socat TCP-LISTEN:<HOST_PORT>,fork TCP:<CLUSTER_ADDRESS>:<POD_PORT> &

tstromberg commented 3 years ago

@KubaJakubowski - That would be a completely different issue from binding to localhost, and well out of the scope of minikube.

My personal recommendation would be an SSH tunnel, but socat would probably work as well. Alternatively, you should probably just run kubeadm directly if you need network access. It's what minikube runs underneath.
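
For reference, the SSH tunnel approach looks roughly like this (host name and ports are placeholders; depending on the apiserver certificate SANs, you may also need to skip TLS verification):

# forward a local port to the apiserver bound to loopback on the minikube host
$ ssh -N -L 8443:127.0.0.1:8443 user@minikube-host
# in another terminal, point kubectl at the tunnel endpoint
$ kubectl --server=https://127.0.0.1:8443 get nodes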