k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[Enhancement] Ingress port mapping #11

Closed goffinf closed 5 years ago

goffinf commented 5 years ago

Using k3s and docker-compose I can set a port binding for a node and then create an ingress using that to route into a pod ... let's say I bind port 8081:80, where port 80 is used by an nginx pod ... I can use localhost to reach nginx ...

http://localhost:8081
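
Roughly, the setup being described above looks like this (a sketch; the compose file contents and resource names are assumed, not quoted from this issue):

# in the k3s docker-compose.yml, the node container publishes host port 8081 -> container port 80, e.g.
#   ports:
#     - "8081:80"
# an Ingress then routes that traffic (via the in-cluster ingress controller) to the nginx Service, so:
curl http://localhost:8081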

How can this be achieved using k3d?

zeerorg commented 5 years ago

Maybe issue #6 is relevant in this regard? Though for port-forwarding, the currently recommended way of doing it is using kubectl port-forward.
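
A minimal sketch of that approach (the service name and ports are assumed for illustration):

# forward local port 8081 to port 80 of an assumed Service named "nginx"
kubectl port-forward svc/nginx 8081:80
# nginx is then reachable at http://localhost:8081 for as long as the command runs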

iwilltry42 commented 5 years ago

I would also opt for kubectl port-forward as @zeerorg said. But, since ingress can in many cases be the only service that needs ports mapped to the host, I could imagine adding an extra flag to k3d create for ingress port mapping. E.g. k3d create --ingress-http 30080 --ingress-https 30443 for mapping http/https ports to the host system. Or a single flag for mapping any arbitrary port.

WDYT

mash-graz commented 5 years ago

a working solution to specify the port forward during k3d creation would indeed be very helpful!

goffinf commented 5 years ago

Unsurprisingly I can confirm that using kubectl port-forward does work, but ... I would still much prefer to define Ingress resources

mash-graz commented 5 years ago

but ... I would still much prefer to define Ingress resources

+1

the current behavior looks rather inconvenient and insufficient to me.

if it's possible to forward the API connectivity port to the public network, the same should be possible (or at least configurable) for ingress ports as well. without this feature k3d is IMHO hardly usable for serious work.

iwilltry42 commented 5 years ago

So I'd go with an additional (string-slice) flag like --extra-port <host:k3s> here. Now, since the most wanted use for this is ingress, it should be enough to expose the ports specified here on the server node, right? Or we take a more sophisticated approach and extend it to --extra-port <host:k3s:node>, where node can be either server, workers, all or <node name>. Any opinions on that?
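
For illustration, usage under this proposal might look roughly like the following (hypothetical, not implemented at this point):

# map host port 8081 to port 80 in the cluster (the proposed <host:k3s> form), exposed on the server node
k3d create --extra-port 8081:80
# extended form with a node specifier
k3d create --extra-port 8081:80:workers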

goffinf commented 5 years ago

Being able to specify the node ‘role’ is more flexible if we are just talking about exposing ports in the general sense. I’m not sure I can think of a use case for using these for an Ingress object for the Control Plane or Etcd (and as yet there is no separation of these roles - but that might happen in the future?), but it’s still better to have the option. So here the prototype would be something like ...

--add-port <role>:<port>

Where role can be worker (default), controlplane, or etcd. (or just server if cp and etcd will always be combined)

mash-graz commented 5 years ago

i'm not sure if it really makes sense to search for a more abstract / future-proof / complicated command line syntax in this particular case.

in fact, we just want to utilize the same very simple docker-API "PortBindings" functionality in all of these cases -- don't we?

i would therefore simply extend the usability of the existing -p/--port command line option -- i.e. make it usable multiple times [for API connectivity and an arbitrary list of ingress ports] and allow "container-port:host-port" pairs for usage scenarios with more than one instance in parallel. this would look rather similar to the expose syntax in docker's CLI, i.e. a natural and commonly expected simple wrapper behavior.
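
Under that proposal, the command line might look roughly like this (a hypothetical sketch):

# -p used multiple times: once for API connectivity, plus ingress port pairs
# (container-port:host-port, in the order proposed above)
k3d create -p 6443 -p 80:8081 -p 443:8443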

iwilltry42 commented 5 years ago

I agree with you @mash-graz that we could re-use the --port flag, but I think that right now it would break UX, since in the original sense --port X simply mapped port X to the K8s API port in the server container. We would break that functionality by introducing the --port c:h syntax, so we would at least need to find a transitional solution.

I also think, as argued by @goffinf, that it would be a good thing to narrow it down to node roles, where setting the role would be optional. @goffinf: I think --add-port <role>:<port> is only missing a notion of host and container port. To stick to the docker syntax I'd go with --add-port <hostport>:<containerport>:role, i.e. "map this port from my host to this port on those nodes".

@mash-graz: any idea how we could transition from our current --port <server-port> syntax to the syntax mentioned above? And would it fulfill your needs?

goffinf commented 5 years ago

@iwilltry42 That works for me.

mash-graz commented 5 years ago

@iwilltry42

any idea how we could transition from our current --port syntax to the syntax mentioned above? And would it fulfill your needs?

yes -- i also have some doubts concerning the backwards compatibility of such a change. and indeed, an additional --add-port option could avoid this risk in a very reliable manner. but is it really necessary?

nevertheless i could accept the --add-port alternative just as well.

iwilltry42 commented 5 years ago

You know what? Why not both? We could create a minor release of 1.x with the --add-port flag and a major release 2.0.0 (which can totally bring in breaking changes) with the extended --port flag.

andyz-dev commented 5 years ago

Borrowing from the Docker CLI, we could also consider using --publish for mapping host ports to the k3s node ports. In fact, I am working on a pull request for it. It would be great to assign the -p shorthand to this option as well. (I am also o.k. with --add-port if it is preferred.)

I think it is useful to keep the api port spec separate from the --publish option, since the worker nodes need to know where the API port is for joining the cluster. How about we change it to --api-port, -a, which takes a string argument in the form of 'ip:host-port'?
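
Under that proposal, creating a cluster might look roughly like this (hypothetical usage of flags that are only being discussed here):

# bind the API server to localhost:6443 and separately publish the ingress port
k3d create --api-port 127.0.0.1:6443 --publish 8080:80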

iwilltry42 commented 5 years ago

You're right there, I didn't think of that... I like your suggestion with the api-port :+1:

mash-graz commented 5 years ago

if a working minor release could be realized with any usable solution, i'll be happy. :) i don't want to convince anybody, just to help find a working solution and rethink it from another point of view...

How about we change it to --api-port, -a, which takes a string argument in the form of 'ip:host-port'?

that's an interesting amendment, because in some cases it could indeed make sense to bind the exposed API connectivity to just one specified host-IP/network instead of 0.0.0.0 for security reasons!

iwilltry42 commented 5 years ago

So to wrap things up, I'd suggest doing the following:

For the next minor release:

For the next major release:

Any final comments on that? @andyz-dev , are you already working on a PR for this? If not, I'd have time now :+1:

iwilltry42 commented 5 years ago

BTW: If you didn't already, you might want to consider joining our slack channel #k3d on https://rancher-users.slack.com after signing up here: https://slack.rancher.io/ :wink:

andyz-dev commented 5 years ago

@iwilltry42 I already have --publish working, just polishing it before sending out the pull request. I will also rename it to --add-port.

I am not working on --api-port, nor on deprecating --port. Please feel free to take them up.

iwilltry42 commented 5 years ago

@andyz-dev Alright, I'll base the next steps on the results of your merge, so I don't have to deal with all the merge conflicts :grin:

mash-graz commented 5 years ago

hmmm... i really like the idea of preserving the current CLI behavior until the next major release, in conformance with semantic versioning, but nevertheless we should try to reduce the needed changes to the bare minimum.

i would therefore suggest:

  1. add --publish without a short option for the ingress forwarding right now (= next minor release)
  2. support --api-port/-a in parallel to the existing/deprecated --port/-p

in this case users could use the new syntax from now on, and any removal or redefinition of --port/-p in one of the next major releases wouldn't affect them anymore.

btw: i was just looking again at how docker interprets all those possible colon-separated variants:

https://docs.docker.com/engine/reference/run/#expose-incoming-ports

this myriad of variants is perhaps overkill for our actual purpose; nevertheless i would at least try to stay somewhat close and compatible to these well-known conventions...

iwilltry42 commented 5 years ago

@mash-graz, yep, I like that procedure :+1: Awaiting @andyz-dev's PR for --publish or --add-port now and will base the other changes on top of that.

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

andyz-dev commented 5 years ago

@iwilltry42 @mash-graz O.K. I will stick with --publish for now, and add --add-port as its alias.

mash-graz commented 5 years ago

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

yes -- it definitely makes sense to cover the different ways of exposing services in k8s with more adequate/selective forwarding strategies in the long run...

iwilltry42 commented 5 years ago

Thanks for the PR #32 @andyz-dev ! @goffinf and @mash-graz , maybe you want to have a look there as well :+1:

mash-graz commented 5 years ago

thanks @andyz-dev ! :+1: that's a much more complex patch than expected.

please correct me if i'm totally wrong, but i don't think this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's default traefik installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-address/port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network-exposing modes of k8s. although i would at least suggest steps of 1000 as the port offset in this case, to minimize conflicts with e.g. other daemons listening on privileged standard ports (<=1024).

andyz-dev commented 5 years ago

thanks @andyz-dev ! 👍 that's a much more complex patch than expected.

I feel the same way. Any suggestion on how to simplify it?

please correct me if i'm totally wrong, but i don't think this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's default traefik installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-address/port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host.

In the product we are working on, we need our LB to run on the worker nodes. For HA, we usually run more than 2 LBs. I think there is a need to expose ports on more than one worker node.

I agree with you that exposing ports on all workers is overkill. Would the "node role" concept proposed by @iwilltry42 work for you? Maybe we should add it soon.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network-exposing modes of k8s. although i would at least suggest steps of 1000 as the port offset in this case, to minimize conflicts with e.g. other daemons listening on privileged standard ports (<=1024).

Notice we are only stepping the host ports. Maybe we can add a --publish-stepping option. Then again, this may be a moot point with "node role".

iwilltry42 commented 5 years ago

@mash-graz I agree with you there, it got way more complex than I first expected it to be. But I cannot think of a simpler solution than what @andyz-dev did (good job!). I already worked on combining his solution with the node role (and regexp validation of input), but it will make the whole thing even more complex (but also cool). I'd go with an extended docker syntax like ip:hostPort:containerPort/protocol@node where only containerPort is mandatory and @node can be used multiple times (either node role or node name, while all is default).
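
For illustration, that proposed (not yet implemented) notation could be used like this:

# only the mandatory container port; @node defaults to all
k3d create --publish 80
# full form with ip, host port, protocol and a node-role specifier
k3d create --publish 0.0.0.0:8080:80/tcp@workers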

For the port clashes I was thinking of something like a --auto-remap flag instead of doing it by default? We could even go as far as automatically checking if ports are already being used with a simple net.Listen() to avoid crashes.

goffinf commented 5 years ago

My 4 cents on complexity ... and forgive me for stating the obvious.

The experience for a Dev and for an Ops should be largely similar in terms of configuration and deployment as for any Kubernetes installation. This isn’t some ‘hello-world’ demo environment but intended to be ‘production ready’, at least for edge devices (although personally I don’t draw any distinction or try to pigeonhole k3d/k3s in that way).

In my case, and I suspect many others, I need to provide a local Kubernetes environment for all the application developer teams that are migrating existing apps into a micro-services / container based solutions architecture, and to support the majority of new apps that we build. For that environment to be genuinely useful we need to be able to use the same set of Kubernetes constructs to deploy and expose workloads and use the same automation through build and deploy pipelines as we do for our hosted environments. Without that it will just create a misalignment which I suspect will relegate k3d to ‘just for demos and training’. That’s not what I need; there are plenty of those tools available already.

So a key aspect is being able to route external traffic into this local cluster through external load balancers and Ingress. That’s what we do ‘in the wild’, so that’s what we need to do here. I am aware we could use NodePort or host port mapping, but we don’t use that approach much, so for me, Ingress is the preferred option. I want to rock up to my local k3d cluster and run my cluster config scripts and my application helm install (the exact same ones I use in Production - give or take), and a few seconds later be doing Development work or running integration tests, whatever.

So .... from a complexity perspective, tool chain builders always get the most challenging work to do, but the benefit is to the thousands of others who consume the platform and don’t care how the magic happens and complain when it doesn’t or it behaves differently in some (sometimes small and acceptable) way. Such is the lot for those of us that build tools for others, but it’s the most exciting job of all.

Please don’t see this as a rant against k3d or the difficult choices in terms of what needs to be supported or not, and which features have priority. I am massively appreciative of the work and integrity of everyone who is contributing whether that’s code or comments. I just wanted to orient my ‘ask’ in such a way that clearly says that I don’t really want to be in a position of introducing k3d to my dev teams by starting off with all the things that it doesn’t support or worse, does support but completely differently to the next environment along.

Like many others we believe in the whole immutable concept ‘Kool-Aid’ and would be loath to move from that position and maintain different solutions for the same ‘thing’.

HTHs

Fraser.

iwilltry42 commented 5 years ago

Thanks for the explanation @goffinf , but I think that you don't need to justify yourself. I'm totally on your side there, I'd value functionality over complexity as well :+1: Originally I built k3d from @zeerorg's idea, just because the other tools like kind, minikube, etc. didn't fulfill my needs for local development... so I guess we have similar needs there ;)

mash-graz commented 5 years ago

i think we all agree on the perspective and criteria expressed by @goffinf. we just have to find a sufficient solution to realize these challenging demands in practice.

i still see typical ingress and LB setups utilizing a single 'Endpoint' entry as the most realistic usage scenario. this kind of simple mapping of one [external] port A to one [docker] port B should therefore be supported without too much confusing implementation overhead and rethinking on the user's side.

maybe we could achieve this goal by just stipulating the proposed role "server" instead of "all" as the default behavior, to get just this simple one-to-one mapping for k3s. nevertheless i have my doubts whether this really works for multi-master clusters, and it also doesn't really correspond to the intended semantics of this role notation.

if we really want to support the other network-exposing capabilities of k8s besides these relatively simple one-to-one ingress scenarios, we may indeed be forced to forward and map the ports from all worker nodes to the host. nevertheless i would see this kind of network access translation more as a workaround for test and development purposes, because it differs too much from the internal topology and addressing. it's IMHO just a workaround to get access to more fancy k8s internals and network-exposing capabilities, and from my point of view it doesn't seem to be necessary for more common practical usage scenarios.

i also wouldn't underestimate all the necessary provisions to support dynamic changes in this case -- i.e. adding and removing worker nodes -- and to update the port mapping in a consistent manner during all these changes.

the same could be said for more demanding/exotic HA/LB setups, which IMHO cannot simply be 'simulated' or transferred to docker environments and arbitrary bundles of incrementally numbered ports, but would need at least multiple [virtual] IP endpoints on the host side as well to work as expected [i.e. to simulate a takeover by utilizing the same port numbers] in this kind of translated context.

but again: maybe i really don't get the point?

iwilltry42 commented 5 years ago

Merged PR #34 by @andyz-dev (Thank you!). Pre-Release with the --publish/--add-port flags pushed: https://github.com/rancher/k3d/releases/tag/v1.2.0-beta.0

iwilltry42 commented 5 years ago

I will base my work of adding the notion of node roles on top of that.

mash-graz commented 5 years ago

thanks @iwilltry42 for this beta! i'll try to test it tonight...

but could you please elaborate a little bit more on your idea concerning this "role" specifier?

should it be understood just as a kind of label-based selection mechanism, which would e.g. allow exposing ports for only a subset of services, or is it more intended as a flag which helps to handle the various available "Types" of k8s network exposure by different and in each case adequate means (e.g. selectively bind it only to the desired network within the docker context)?

it's perhaps also useful to find a sufficient documentation string or explanation for the default behavior (i.e. if 'role' isn't explicitly specified).

thanks

iwilltry42 commented 5 years ago

You're welcome @mash-graz , but the credits should go to our new collaborator @andyz-dev who implemented it :+1:

The "role" specifier is nothing that complex. It's only to map ports only from certain types of nodes that you create with k3d. Let's call it node-specifier from now on. Then you could e.g. write --publish 0.0.0.0:1234:4321/tcp@workers to map port 4321 from the worker nodes (containers in k3d) to localhost (for now by using the offset to avoid port clashes) or --publish 3333@k3d-mycluster-worker-1@k3d-mycluster-worker-2 to map 3333 from the two specified nodes to localhost. So the specifier could be either one of server, workers, all or <node-name>. Default would then be all which makes the whole thing backwards compatible to what we have right now with the beta.

But feel free to drop suggestions on what you would expect :+1:

mash-graz commented 5 years ago

You're welcome @mash-graz , but the credits should go to our new collaborator @andyz-dev who implemented it +1

sorry -- i had the impression that you merged improvements from your "new-publish" branch as well. and frankly, i like those additional changes much more than the original PR proposed by @andyz-dev, because his implementation IMHO simply ignores the specific needs of sufficient k8s "ingress" handling.

concerning your node-specifier, i'm still contemplating whether it looks useful/worthwhile to me. at least it opens up some capabilities which simply couldn't be realized otherwise...

nevertheless i would see a more sufficient consideration of the different expose variants and their proper handling as much more important. but i don't think this can/should be handled/accomplished/intermixed with your proposed notation.

mash-graz commented 5 years ago

a few additional considerations concerning the security impact of this feature:

i don't think we should forward/expose any network ports or network traffic belonging to interfaces which are conceptually understood as only internally accessible in k8s (e.g. bound to the ClusterIP).

only those network endpoints which are intended to be publicly accessible via ExternalIPs in k8s should be available outside of k3d's docker cage and forwarded by this feature! everything else should still be isolated from the outer world and only accessible via more sophisticated and secure mechanisms, like kubectl proxy!

i think we really should respect this very important general security requirement, otherwise we make nonsense of k3d's main goal and work against the idea of reliable sandbox utilization. k3d should be a useful tool to encapsulate a working cluster installation and naturally provide access to its public network endpoints -- just like the real thing! -- but it shouldn't go further and undermine important k8s security concepts.

andyz-dev commented 5 years ago

I like the node-specifier concept in general. I am fine with 'all', 'server' and 'workers'. Since the node name is something internally generated, users may not know what it is until it has been generated (in theory, we could even change the way it is generated from release to release). How about we also allow a number as a valid node name specifier?

For example, to create a five-node cluster with the first two worker nodes as ingress nodes, the k3d command line will look like:

k3d create --workers 4 --publish 8080:80@0@1

It will be shorter and easier to type as well.

To my eyes, the following is a bit easier to read than the above:

k3d create --workers 4 --publish 8080:80[0,1]

To be clear, I am not suggesting that we do away with supporting the full node name. In a blog post, the full node name will be more readable.

mash-graz commented 5 years ago

hmm... i'm definitely not happy with all these questionable half-baked workarounds:

therefore a radical new proposal:

shouldn't we simply implement a command line option to enable NetworkMode=host, just as we can set it in docker's CLI?

unfortunately this docker feature isn't available or working on macOS and windows, but on linux it would make the cluster accessible from the outside on the common network device of the host system, i.e. use the public host IP as the External-IP within k8s.

because k8s almost exclusively utilizes private address space internally, this option wouldn't cause much harm -- i personally would say: less than dirty arbitrary port forwarding. nevertheless, users could still enable all sorts of direct access to specific nodes and their ports from within k8s by well-defined and already available k8s control options in this case.

what do you think about this kind of alternative solution?

andyz-dev commented 5 years ago

@mash-graz, I don't follow the objections raised above. So let me make an attempt to understand better. If the following are obvious, apologies in advance.

If the main goal is to map one host port to one docker port for a k3s cluster:

The following should create the desired cluster: $ k3d create --publish 8080:80

k3s by default comes with the traefik ingress controller, which listens on port 80. Ingress rules can be configured in the normal k8s way (i.e. via a yaml file). Once ingress is configured,

$ curl http://localhost:8080/xxxxx should just work.
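
For illustration, such an Ingress could be created roughly like this (a minimal sketch; the service name and path are assumptions, and extensions/v1beta1 was the Ingress API version commonly used at the time):

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF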

If one needs a larger cluster, but only one ingress node, something like the following can work (needs @iwilltry42's changes to be merged in).

$ k3d create --publish 8080:80@server --workers 5

If one needs a larger cluster, but two ingress nodes (for HA), then the following can work: $ k3d create --publish 8080:80@server@0 --workers 5

If the goal is to provide remote access to the k3d cluster, then the following should work, assuming <ip> is the desired remotely routable IP.

$ k3d create --publish <ip>:8080:80

On the other hand, if the goal is not to provide any remote access to the k3d cluster, but only to use it for local development and testing:

$ k3d create --publish 127.0.0.1:8080:80

I understand and agree that node port and host port are not commonly used, but they are valid k8s configurations. To be honest, I don't see a downside in supporting them in addition to the common use cases we mostly care about.

Or perhaps you have some other ingress use cases in mind?

goffinf commented 5 years ago

I don’t have any objection to supporting nodeport and host port mapping. But I don’t think we should assume that they are the same as Ingress other than in a very generalised sense. I want to be able to support the Kubernetes Ingress object, not a simulation of it.

Currently in k3s (in its docker implementation), Ingress is only supported through a specific port mapping in the compose file, and that doesn’t work if you have more than ONE worker. There is an open issue for that, which I was rather hoping that k3d would address given that its focus is the docker implementation specifically.

Some of the discourse above has started to venture into much more significant customisation and, as others have said, this is very likely to give rise to all sorts of edge cases to manage, which would move k3d from being a light-weight wrapper around k3s to something more uncomfortably substantial I think .. not sure .. but I see some early warning signs of creeping complexity.

I still think the role type has some legs since in many cases I don’t want to expose application ports on the server node.

Certainly some of @andyz-dev’s examples look pretty simple. I’m not 100% convinced about using a node ordinal (a name may be ok though).

mash-graz commented 5 years ago

your explanation looks correct and plausible, but it still doesn't convince me.

when we started this debate, i also simply thought that this issue could be solved by a simple workaround and a few injected port-forward instructions, at least for trivial typical usage scenarios. but after watching your attempts to stretch these possibilities much wider and bypass lots of the abstractions and indirections utilized by a clean k8s system, i simply had to change my mind. you somehow demonstrated convincingly why we shouldn't take this path...

if you look at the behavior of k3d without your patch on a linux system, you see that it is already using ExternalIPs and providing access from the host via the docker0 bridge:

k3s$ kubectl get svc -n kube-system 
NAME       CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
kube-dns   10.43.0.10     <none>         53/UDP,53/TCP,9153/TCP       3d
traefik    10.43.227.46   192.168.16.2   80:32448/TCP,443:31643/TCP   3d

and http access works if you utilize this particular IP:

~$ wget -O- http://192.168.16.2
--2019-05-10 01:51:58--  http://192.168.16.2/
Connecting to 192.168.16.2:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2019-05-10 01:51:58 ERROR 404: Not Found.

it's just a pity that it is only listening on docker0 and not on our usual network, which would obviate the need for any further routing solution or redirection. but on the other hand, that's exactly what we should expect and like about docker and similar sandboxing environments: it isolates the networks!

so it's really just a question of how we can overcome this particular behavior without too many side effects and unforeseeable troubles. we don't need a super-strong brute-force approach which bypasses most of k8s' fundamental network principles, but a nice, clean and simple solution to just enable this communication between the public network and the already designated external IP communication endpoints in k8s.

given all these worries and dissatisfaction concerning port-juggling solutions, i'm thinking more and more about the other alternative already expressed above. it also looks like a more promising strategy to cope with dynamic changes and reconfiguration within the clusters, which cannot be handled in a sufficient manner by our one-shot docker configuration attempts.

andyz-dev commented 5 years ago

@goffinf Thanks for the input. Is there a reference to the issue you mentioned about k3s not being able to support more than one ingress node? I'd like to take a closer look and make sure multi-node ingress works on k3d. We should probably also take a fresh look at the ingress from the top down (instead of from the code up) to make sure the end result works for the community.

@mash-graz Thanks for the input, I do value it very much and the end solution will be better if we think about it collectively. That's why @iwilltry42 made the pre-release available, so that more folks can try it out and provide feedback. FWIW, from the tip of master on macOS, the svc output for traefik also gives the external IP of docker0 (on macOS, docker0 runs inside a VM, of course). Sounds like you are already thinking of an alternative design. If helpful, I will be happy to continue the discussion (maybe we can make use of the slack channel), or be a sounding board to help you flesh out your design ideas.

iwilltry42 commented 5 years ago

Hey there, now that was a long read :grin: I was a bit busy over the last few days, but now I took the time to read through all the comments again to figure out what the best thing to do is.

So as far as I understand, the main risk/concern with the current approach is that we expose ports on each k3d node (container) with an auto-offset on the host. Since we often don't need this, it could just introduce security issues.

The main goal of this should be to support single hostport to single containerport bindings to e.g. allow ingress or LB port forwarding for local development.

Since the port mapping is the most flexible thing that we can do at creation time without too much extra engineering overhead, I think we should stick with it, but in a different way than now.

I would still propose to enhance the docker notation with the node specifier, i.e. [<ip>:][<host-port>:]container-port[/<protocol>]@<node-specifier>, where container-port and node-specifier are mandatory (first change). The auto-offset of host ports would then be an optional feature (e.g. --port-auto-offset) and 1:1 port-mapping will be emphasized by enforcing the node-specifier in --publish.
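
To make that concrete, a couple of hypothetical invocations under the proposed notation (none of this is implemented yet):

# only the mandatory parts: container port plus node specifier (1:1 mapping on the server node)
k3d create --publish 80@server
# full form, with the host-port auto-offset now opt-in via a separate flag
k3d create --publish 0.0.0.0:8080:80/tcp@workers --port-auto-offset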

Still, I think that @mash-graz's idea of somehow connecting to the docker network is a super cool one and we might be able to support it at some point when we've had more time to research it. The network-mode=host feature we could add with a hint that it will only work for Linux users. I think there are more people interested in this (#6).

iwilltry42 commented 5 years ago

After the first easy implementation, we can think of further getting rid of the auto-offset, e.g. by allowing port ranges for the host-port, so that we generate a 1:1 mapping out of a single flag.

mash-graz commented 5 years ago

thanks @iwilltry42 for preparing this patch!

So as far as I understand, the main risk/concern with the current approach is that we expose ports on each k3d node (container) with an auto-offset on the host. Since we often don't need this, it could just introduce security issues. The main goal of this should be to support single hostport to single containerport bindings to e.g. allow ingress or LB port forwarding for local development.

yes -- that's more or less my point of view too. :)

we should really try to concentrate on this simple one-to-one port mapping (LB/ingress), although the more complex case (auto offset...) should be supported as well as possible.

I would still propose to enhance the docker notation with the node specifier, ... , where container-port and node-specifier are mandatory.

does it really make sense to require the node-specifier as a mandatory field?

if we understand the one-to-one port mapping case as the most common usage scenario, we can simply assume it as the default mode of operation as long as no node specifier is explicitly given on the command line. this would make it easier to use and require less lengthy command line options in practice.

i still see the problem of how the one-to-one port mapping can be specified in an unambiguous manner with the proposed notation. a ...@server will only work for single-master clusters, but in the future, when k3s is finally able to support multi-master clusters as well, it unfortunately will not always signify a unique node or a one-to-one mapping anymore...

i guess it's a kind of fundamental misunderstanding or oversimplification if we presuppose a congruence between docker container entities and k8s' more complex network abstractions. both accomplish orthogonal goals by different means. utilizing port mappings/forwarding as a workaround to satisfy some practical access requirements should always be seen as a rather questionable and botchy shortcut.

there are already some interesting tools and improvements over kubectl port-forward available, which seem to solve similar kinds of network routing and node access in a comfortable fashion that respects k8s' idiosyncrasies:

https://github.com/txn2/kubefwd https://github.com/flowerinthenight/kubepfm

they are most likely a more suitable choice if one wants to handle demanding network access to remote clusters or k3s running within docker or VMs. in comparison with our docker-specific approach, this kind of solution comes with a few pros and cons:

pros:

cons:

The network-mode=host feature we could add with a hint that it will only work for Linux users.

yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

goffinf commented 5 years ago

I've had limited success with kubefwd, which I also thought might have some legs for this problem (having Kelsey Hightower endorse a product must give it some street cred I suppose). Anyway, my environment is Windows Subsystem for Linux (WSL). I appreciate that's not the case for everyone, but in corporates it's pretty common.

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
2019/05/13 23:27:21  _          _           __             _
2019/05/13 23:27:21 | | ___   _| |__   ___ / _|_      ____| |
2019/05/13 23:27:21 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/13 23:27:21 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/13 23:27:21 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/13 23:27:21
2019/05/13 23:27:21 Version 1.8.2
2019/05/13 23:27:21 https://github.com/txn2/kubefwd
2019/05/13 23:27:21
2019/05/13 23:27:21 Press [Ctrl-C] to stop forwarding.
2019/05/13 23:27:21 'cat /etc/hosts' to see all host entries.
2019/05/13 23:27:21 Loaded hosts file /etc/hosts
2019/05/13 23:27:21 Hostfile management: Backing up your original hosts file /etc/hosts to /root/hosts.original
2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused
2019/05/13 23:27:21 Done...

I know that kubeconfig works ...

kubectl --kubeconfig=$PWD/.kube/config get nodes
NAME                       STATUS   ROLES    AGE     VERSION
k3d-k3s-default-server     Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-0   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-1   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-2   Ready    <none>   6h38m   v1.14.1-k3s.4

Using the binary was better and did work ...

sudo kubefwd services -n default

2019/05/14 00:32:44  _          _           __             _
2019/05/14 00:32:44 | | ___   _| |__   ___ / _|_      ____| |
2019/05/14 00:32:44 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/14 00:32:44 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/14 00:32:44 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/14 00:32:44
2019/05/14 00:32:44 Version 1.8.2
2019/05/14 00:32:44 https://github.com/txn2/kubefwd
2019/05/14 00:32:44
2019/05/14 00:32:44 Press [Ctrl-C] to stop forwarding.
2019/05/14 00:32:44 'cat /etc/hosts' to see all host entries.
2019/05/14 00:32:44 Loaded hosts file /etc/hosts
2019/05/14 00:32:44 Hostfile management: Original hosts backup already exists at /home/goffinf/hosts.original
2019/05/14 00:32:44 WARNING: No backing pods for service kubernetes in default on cluster .
2019/05/14 00:32:44 Forwarding: nginx-demo:8081 to pod nginx-demo-76d6b7f896-855r2:80

and a curl FROM WITHIN WSL works ...

curl http://nginx-demo:8081
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

HOWEVER ... http://nginx-demo:8081 is NOT available from the host (from a browser) itself unless you update the Windows hosts file to match the entry in /etc/hosts (WSL will inherit the Windows hosts file on start-up but doesn't add entries to it as it does with /etc/hosts) .... e.g.

You need to add this to /c/Windows/System32/drivers/etc/hosts (which is what kubefwd added to /etc/hosts in this example)

127.1.27.1 nginx-demo nginx-demo.default nginx-demo.default.svc.cluster.local

You can use the 127.1.27.1 IP without altering the Windows hosts file, but that's not particularly satisfactory ..

e.g. this will work from a browser on the host ... http://127.1.27.1:8081

In some ways this is WORSE than kubectl port-forward since at least there I can access the service on localhost:8081 without needing to mess with the Windows hosts file.

So TBH, neither of these is especially attractive to me even if they do leverage features native to the platform.

@iwilltry42 I'll try out your patch tomorrow (I have a bit of time off work).

I do agree with much that @mash-graz has said, but I'm minded to at least move forwards even if what is implemented now becomes redundant later on.

mash-graz commented 5 years ago

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

 docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
 ...
 2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused

I know that kubeconfig works ...

your kubeconfig seems to use an API server entry which points to localhost:6443. this will only work on your local machine and not for remote access to your cluster. using virtual machine environments or docker sandboxes has to be seen as a kind of remote access in this respect. localhost doesn't connect to the same machine in this case...

just edit the server entry of your kubeconfig and use the IP of your machine's network card instead.

concerning the mentioned windows hosts-file synchronization issues, i would answer: yes, it's just another dirty workaround.

but this mechanism does have some important advantages in practice. faked name entries like this are nearly indispensable if you have to develop and test services behind L7 reverse proxies or name-based dispatching. just forwarding ports to arbitrary IPs doesn't work in this case. but again: it's just another tricky workaround, and by no means a clean and exemplary solution. ;)

iwilltry42 commented 5 years ago

Thanks for the feedback! @mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes. Also, I'll have a look into the two projects you mentioned :+1:

@goffinf , maybe WSL2 will bring you some improvements soon :wink:

mash-graz commented 5 years ago

@mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes.

yes -- i definitely would make it the default behavior!

it may look a little bit crazy and ignorant that i'm still insisting on this particular little implementation detail, but i think it indeed makes an important difference for end users. in most cases they'll only want to forward LB/ingress http/s access on standard ports 80 and 443 to the host's public network -- that's at least my expectation -- and this trivial usage scenario should be supported as simply as possible. it shouldn't need any unfamiliar and complex command line options and should just work reliably and as commonly expected out of the box.

Also, I'll have a look into the two projects you mentioned

these other alternatives do not render our port forwarding efforts redundant, because they are designed more to realize safe remote access to the network services of a cluster, rather than just making ports accessible on the public network -- i.e. they accomplish a slightly different purpose. nevertheless it's interesting to study how they overcome some of the obstacles and ambiguities related to this forwarding challenge.

iwilltry42 commented 5 years ago

Your reasoning makes total sense @mash-graz, so I updated the default node to server in my PR #43. UPDATE: need to change a different thing later actually...