k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[Enhancement] Ingress port mapping #11

Closed goffinf closed 5 years ago

goffinf commented 5 years ago

Using k3s and docker-compose I can set a port binding for a node and then create an ingress that uses it to route into a pod ... let's say I bind port 8081:80, where port 80 is used by an nginx pod ... I can then use localhost to reach nginx ..

http://localhost:8081

How can this be achieved using k3d?

goffinf commented 5 years ago

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg, as you can with k3s ...

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs that suggests how k3s server args are exposed. Do you know?

goffinf commented 5 years ago

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successfully call that service on 8081? If I try curl ...

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing?

Thanks

Fraser.

iwilltry42 commented 5 years ago

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg, as you can with k3s ...

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs that suggests how k3s server args are exposed. Do you know?

You can pass k3s server args to k3d using the --server-arg/-x flag. E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29
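
A quick sketch of how that looks in practice (the --no-deploy flag is just an example of another k3s server argument; I haven't verified every combination):

# both forms are equivalent:
k3d create --server-arg "--bind-address 192.168.0.29"
k3d create -x --bind-address=192.168.0.29
# the flag should also be repeatable to pass several k3s server arguments, e.g.:
k3d create -x --bind-address=192.168.0.29 -x --no-deploy=traefik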

iwilltry42 commented 5 years ago

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successfully call that service on 8081? If I try curl ...

With the manifest above I wouldn't expect it to work, since NodePort is commented out, so no port is exposed on the node. But even then, the NodePort range is 30000-32767, so one of those ports has to be set and exposed for it to work.
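
For reference, the minimal change to your Service would look something like this (the nodePort value is picked arbitrarily from the allowed range, and it would still need a matching --publish mapping for that node port to be reachable from the host):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      nodePort: 30080   # must fall within the 30000-32767 NodePort range
      name: http
  selector:
    app: nginx-demo
EOF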

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing?

You also didn't map the ports for the ingress, so that wouldn't work either. I'll create a demo for this :+1:

mash-graz commented 5 years ago

You can pass k3s server args to k3d using the --server-arg/-x flag. E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29

yes -- that's the correct answer to the question, but i don't think it will solve the troubles described by @goffinf.

it doesn't matter which IP the k3s server API is bound to inside the container, because from the outside it's always reached via the port forwarding specified by k3d (0.0.0.0:6443->6443/tcp), which maps it to all interfaces on the host side thanks to the 0.0.0.0 notation. it should therefore be reachable on the host as https://localhost:6443, just as via the public server name or one of the external IPs of the machine.
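
for illustration, a quick way to check this (the IP is just the example address from earlier in this thread):

# print the API endpoint your kubeconfig points to (should be https://localhost:6443):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# the same port should also answer on an external IP of the host;
# an HTTP error such as 401 Unauthorized still proves the port is reachable:
curl -k https://192.168.0.29:6443/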

perhaps @goffinf is fighting some windows/WSL specific issues, but on linux i don't have any trouble reaching the API from outside of k3d's docker instance, neither locally on the host nor via remote access, and it doesn't make a difference whether kubectl or kubefwd is used.

iwilltry42 commented 5 years ago

@goffinf this is a simple example of what I tested with k3d (on Linux):

  1. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d create --api-port 6550 --publish 8081:80 --workers 2

  2. Get the kubeconfig file export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

  3. Create a nginx deployment kubectl create deployment nginx --image=nginx

  4. Create a ClusterIP service for it kubectl create service clusterip nginx --tcp=80:80

  5. Create an ingress object for it with kubectl apply -f

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        ingress.kubernetes.io/ssl-redirect: "false"
    spec:
      rules:
      - http:
          paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80
  6. Curl it via localhost curl localhost:8081/

That works for me.
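
For convenience, the same steps as one shell session (assuming the Ingress manifest above is saved as nginx-ingress.yaml; the filename is only for illustration):

k3d create --api-port 6550 --publish 8081:80 --workers 2
# wait a few seconds for the cluster to come up, then:
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
kubectl apply -f nginx-ingress.yaml
curl localhost:8081/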

iwilltry42 commented 5 years ago

@goffinf or the same using a NodePort service:

  1. Create a cluster, mapping the port 30080 from worker-0 to localhost:8082 k3d create --publish 8082:30080@k3d-k3s-default-worker-0 --workers 2 -a 6550

...

  1. Create a NodePort service for it with kubectl apply -f

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      ports:
      - name: 80-80
        nodePort: 30080
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: NodePort
  2. Curl it via localhost curl localhost:8082/
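
Again for convenience, the whole flow in one go (assuming the elided steps match the ClusterIP example above and the Service manifest is saved as nginx-nodeport.yaml; both are assumptions on my part):

k3d create --publish 8082:30080@k3d-k3s-default-worker-0 --workers 2 -a 6550
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl create deployment nginx --image=nginx
kubectl apply -f nginx-nodeport.yaml
curl localhost:8082/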

goffinf commented 5 years ago

@iwilltry42 I can confirm that, using the latest version (1.2.0-beta.2), the Ingress example works as expected with WSL. I can use curl localhost:8081 directly from WSL and within a browser on the host.

Moreover, Ingress also works with a host domain. In this case I created the k3d cluster and mapped port 80:80 for the server (default), providing access to the Ingress Controller on that port rather than 8081 ...

k3d create --publish 80:80 --workers 2
...
docker container ls
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                        NAMES
eedb8c962387        rancher/k3s:v0.5.0   "/bin/k3s agent"         30 seconds ago      Up 27 seconds                                                    k3d-k3s-default-worker-1
96ca910c7949        rancher/k3s:v0.5.0   "/bin/k3s agent"         32 seconds ago      Up 29 seconds                                                    k3d-k3s-default-worker-0
e10a95dc10b4        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   34 seconds ago      Up 32 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp   k3d-k3s-default-server

Then I defined the deployment, service and ingress as follows (noting that the ingress now defines the host domain) ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo-dom
  template:
    metadata:
      labels:
        app: nginx-demo-dom
    spec:
      containers:
      - name: nginx-demo-dom
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo-dom
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo-dom
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k3d-ingress-demo.com
    http:
      paths:
      - backend:
          serviceName: nginx-demo-dom
          servicePort: 8081

Using curl, the service was reachable ..

curl -H "Host: k3d-ingress-demo.com" http://localhost

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
...
</html>

So congrats, the publish capability and Ingress are working fine and feel very natural from a k8s perspective. Great work.

Changing the URL to something non-existent returns the default backend 404 response, as expected ...

curl -H "Host: k3d-ingress-demox.com" http://localhost
404 page not found

curl localhost
404 page not found

curl localhost/foo
404 page not found

Finally (again as expected but good to confirm) requests are properly load balanced across the 2 replicas that were defined in the deployment, alternating on each request.
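
For anyone who wants to reproduce that check, this is roughly what I looked at (just a sketch; the label selector comes from the manifests above):

# send a handful of requests through the ingress:
for i in $(seq 1 10); do curl -s -o /dev/null -H "Host: k3d-ingress-demo.com" http://localhost; done
# then inspect the access log of each replica; the requests should alternate between them:
for pod in $(kubectl get pods -l app=nginx-demo-dom -o name); do
  echo "== $pod"; kubectl logs "$pod" --tail=5
done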

Regards

Fraser.

goffinf commented 5 years ago

@iwilltry42 In your example, which now appears in the GitHub README, was there a reason you chose to use the --api-port arg? It doesn't seem to materially affect whether the example works, so I wasn't sure if you were showing it for some other reason.

k3d create --api-port 6550 ...

iwilltry42 commented 5 years ago

Hey @goffinf, thank you very much for your feedback and for confirming the functionality of the new feature! No, it's just that 6443 is constantly in use on my machine, and I left it in there so that people see the --api-port flag instead of the --port flag, which we want to "deprecate" (i.e. change its functionality). Do you think it's too confusing? Then I'd rather remove it :+1:
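
If it helps, this is what the flag changes in practice (a small sketch reusing the commands from the ingress example above):

k3d create --api-port 6550 --publish 8081:80 --workers 2
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
kubectl cluster-info   # the API server URL now points at port 6550 instead of the default 6443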

UPDATE: I removed the -a 6550 from the NodePort example and added a note regarding the --api-port flag to the ingress example :+1:

goffinf commented 5 years ago

Haha, beat me to it. I was going to suggest that it would not be confusing if you added a note.

In general I prefer plenty of examples that show off one, or a small number of features, rather than a single example that has everything packed into it, especially where there might be a difference in behaviour for particular combinations. You’ve done that now, so that’s perfect.

Talking of documentation and examples, the question I asked a few days ago about passing additional server args is, I think, worth documenting (i.e. using --server-arg or -x), and it provides an opportunity to talk briefly about the integration between k3d and k3s. I don't know whether it's possible to mirror every k3s arg or not (if it is, you could simply link through to the k3s docs rather than repeat it all, I guess)?

I suspect others might also be interested in how, or indeed whether, k3d will track the life-cycle of k3s and respond as/if/when new features are added or changed. IMO that's an important consideration when selecting tools that app devs might adopt. Everyone accepts the ephemeral nature of open source projects, and since the user experience here is relatively intuitive the skills investment isn't high, so it's less of a concern, but it's still nice to back tools that have a strong likelihood of a longer shelf-life and an active community. Just a thought.

I note the new FAQ section. Happy to help out here although I am aware of how important it is to ensure that all docs are accurate and up-to-date.

iwilltry42 commented 5 years ago

Well... with --server-arg you can pass any argument to the k3s server... but whether it will work in the end is something we cannot verify. It'd be a huge amount of additional work to ensure/verify that all the k3s settings work in a dockerized environment. E.g. to support the --docker flag for k3s, you'd have to put it in a dind image and/or pull through the docker socket from the host system.

Anyways, I'm totally in for adding additional documentation and would be super happy about your contributions to them, since you appear to be a very active user :)

Maybe we can come to the point where we'll be able to create a compatibility matrix for k3d and k3s :+1:

goffinf commented 5 years ago

Precisely. I spend a good deal of time at my place of employment writing up a variety of docs, from best-practice guides and standard prototypes to run books. I can't claim to be brilliant at it, but I do recognise the importance of clear information that illustrates the key use cases through descriptions and examples and, importantly, sets out the scope. The latter plays to your comment about any potential tie-in (or not) with k3s, since many no doubt view k3d as a sister project or one that implies some level of dependency. I think it would be good to set that out and the extent to which it is true, perhaps especially so as docker as a container run-time has somewhat less focus these days (you can take Darren's comment about k3s ... of course I did a DinD implementation .. in a couple of ways I guess).

I have noted from our conversations and other issues, both here and on k3s and k3os (I tend to read them all, since there is much to be learned from other people's concerns, as well as an opportunity to help sometimes), that there is still a level of 'hidden' configuration that is not obvious. That is not to say it's deliberate; it is most often to do with the time available to work on new features vs. documenting existing ones, and of course an assumed level of (pre) knowledge.

Anyways, I am active because I think this project has merit and potential for use by me and my work colleagues. So anything I can do to help I will.

I note Darren commented recently that WSL2 and k3d would be a very satisfactory combination, and I agree. But, since we aren’t in the business of vapourware, there’s still much to offer without WSL2 imo.

I think the next non-rc release might provide a good moment to review docs and examples.

iwilltry42 commented 5 years ago

I'm looking forward to your contributions to k3d's docs :) Maybe we can open a new issue/project for docs, where we can add parts, which users might like to see there :+1:

Anyways... I think this issue is growing a bit too big. I guess the main pain point of this issue has been solved, right? So can it be closed then, @goffinf?

mash-graz commented 5 years ago

The network-mode=host feature we could add with a hint that it will only work for Linux users.

yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

i finally managed to implement this alternative way of exposing the most common network access variants via a simple --host/--hostnetwork option and opened PR #53.

it has some pros (e.g. you don't have to specify all the ports up front and can reconfigure them via k8s mechanisms), but also cons (e.g. it will most likely only work on the linux platform).

in fact it only exposes the server on the host network, because remapping multiple workers resp. their control ports on one machine isn't a trivial task. connecting the workers to the server on the host network is also a bit tricky, because most of docker's internal name services don't work across different networks or aren't available on linux machines. i therefore had to use the gateway IP of our custom network as a workaround to reach the host...

i'm not sure if it is really a useful improvement after all the wonderful recent port mapping improvements developed by @iwilltry42 and @andyz-dev, but nevertheless i would be happy if you could take a look at it.

iwilltry42 commented 5 years ago

Thanks for your PR @mash-graz, I just have to dig a bit deeper into the networking part to leave a proper review.

goffinf commented 5 years ago

@iwilltry42 My thoughts exactly. This issue has served its purpose and an initial implementation has been delivered. Thank you. I am happy to close this down and raise any additional work as new issues.