docker-archive / swarm-frontends

Deploy Kubernetes with Swarm
Apache License 2.0

Exposing a service? #2

Open skevy opened 8 years ago

skevy commented 8 years ago

Hi there -

I set up a Swarm cluster on Azure using this: https://github.com/Azure/azure-quickstart-templates/tree/master/docker-swarm-cluster

and then followed the tutorial/used the Compose file in this repo to get a Kubernetes cluster running. I also followed the instructions to set up multi-host networking on my Swarm cluster, and from what I can tell, that is all working.

I'm able to deploy the Wordpress example (and everything says Ready, status OK, etc.).

But I have absolutely no idea how to access the thing, because there are (at least) four layers of networking here:

- the physical IP address of the machine (my nodes are on a 192.168.0.0/24 range), which is what my load balancer needs to point to at the end of the day;
- the Swarm overlay network (named "kubernetes", with an IP range of 10.10.1.0/24);
- the Swarm bridge network ("docker_gwbridge", in the 172.18.0.0/16 range);
- and whatever --service-cluster-ip-range is set to in the docker-compose file (172.17.17.1/24).
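For reference, here's roughly how I've been inspecting those layers from a Swarm node (a sketch; the network names come from my setup, and the manager address is a placeholder):

```shell
# List the networks Swarm knows about; "kubernetes" is the overlay,
# "docker_gwbridge" is the per-host bridge to the outside world.
docker -H <swarm-manager-ip>:2375 network ls

# Show the subnet and the containers attached to the overlay network.
docker network inspect kubernetes

# Show the gateway bridge on a single node.
docker network inspect docker_gwbridge
```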

When I expose the port (using the Kubernetes NodePort service type), I can't access anything unless I access the pod directly.
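For concreteness, here's roughly what I'm doing to expose it (a sketch; the object name comes from the Wordpress example, and the node IP/port are placeholders for whatever Kubernetes assigns):

```shell
# Expose the wordpress replication controller as a NodePort service.
kubectl expose rc wordpress --port=80 --type=NodePort

# Find the node port Kubernetes picked (30000-32767 by default).
kubectl describe svc wordpress

# This is the request that fails for me unless I hit the pod IP directly:
curl http://<node-physical-ip>:<node-port>/
```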

Do you all have any thoughts on how to make this work? Or any possible guidance here? It would be much appreciated.

Thanks for putting this project together!

-Adam

skevy commented 8 years ago

Also, fwiw, I did read the section in the README regarding networking...but my understanding of that passage was that it was inconvenient, not impossible.

dongluochen commented 8 years ago

@skevy There are different networking features that can help: http://docs.docker.com/engine/userguide/networking/. I think the simplest way is to add a port mapping for your container, specified with -P or -p on the docker command line. This exposes container ports through the host (your Azure machine) via NAT. -P picks random available ports on the host; -p lets you specify a dedicated port. For load balancing, -p may be more convenient. Here is an example.

dchen@vm4:~$ docker -H 192.168.56.202:2372 run --name mynginx1 -P -d nginx
e876ca74d80454172e8b8e6921ec4d53cd657bdcef9e3c2efc99885fb3077d41
dchen@vm4:~$ docker -H 192.168.56.202:2372 ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                         NAMES
e876ca74d804        nginx               "nginx -g 'daemon off"   10 days ago         Up 10 days          192.168.56.203:32769->80/tcp, 192.168.56.203:32768->443/tcp   vm3/mynginx1
dchen@vm4:~$ wget http://192.168.56.203:32769
--2015-12-04 11:18:59--  http://192.168.56.203:32769/
Connecting to 192.168.56.203:32769... connected.
HTTP request sent, awaiting response... 200 OK

abronan commented 8 years ago

Hi @skevy, sorry for the late reply, and thanks for pointing this out. There are definitely limitations due to networking, mainly because of the use of an overlay network to deploy Kubernetes (which was just a way to show that it is easy to deploy across cloud providers, and to make the Compose file a bit more convenient).

My guess, from what I can recall, is that Kubernetes tries to expose the service by mapping the virtual service IP to the internal overlay IP rather than exposing it on the host, which makes it impossible to reach from outside the overlay. (It is possible to do that with docker itself, but Kubernetes has no idea that it is deployed inside an overlay in this case; it is supposed to run directly on the host using host networking.)

On the other hand, I think NodePort should work just fine without the overlay networking layer in the Compose file: just run Kubernetes with --net=host in the Compose file and tweak it a little to point to the right apiserver/etcd dynamically. I hope to revisit this and offer an alternative that does not use the overlay.
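Roughly, I'm picturing something like this in the Compose file (an untested sketch; the image tag and kubelet flags are illustrative, not the exact ones from this repo):

```yaml
# Run the kubelet with host networking instead of attaching it to the overlay.
kubelet:
  image: gcr.io/google_containers/hyperkube:v1.2.0
  net: host
  command: /hyperkube kubelet --api-servers=http://<apiserver-host>:8080 --address=0.0.0.0
```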

But I'm also curious to see if I can make this work flawlessly with Kubernetes deployed inside the overlay (even if this involves adding an external component to the Compose file). I should also experiment with dynamically attaching the containers in a pod to an overlay, to see whether I can fix the networking to expose a port on the host and have Kubernetes use the right information for the load balancer.

I hope this gives you a few more elements! :smile: My advice, if you want to play with this a little more, is to remove the overlay part of the deployment, as it hinders networking and service exposure for now.