cloudnativelabs / kube-router

Kube-router, a turnkey solution for Kubernetes networking.
https://kube-router.io
Apache License 2.0

Explore kube-router as ingress controller #30

Closed: bzub closed this issue 6 years ago

bzub commented 7 years ago

From the TODO:

explore integration of an ingress controller so Kube-router will be one complete solution for both east-west and north-south traffic

bzub commented 7 years ago

I wonder what unique features a new ingress controller could bring to Kubernetes. I suppose that since kube-router uses IPVS load balancing without any configuration at the service-proxy level, it could serve as a very simple-to-configure ingress controller. Others (nginx, HAProxy, etc.) can be difficult to configure since they are not native to Kubernetes.

To pursue this I think we would need to pull in an HTTP library for host-based routing. The Caddy and Gorilla libraries come to mind, if not the Go standard library's net/http package.
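As a rough illustration, host-based routing can be done with nothing but the standard library's net/http and httputil packages. This is only a sketch, not kube-router code; the hostnames and backend addresses are placeholders:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// hostRoutes maps request hostnames to backend service URLs.
// The hostnames and backends here are made up for illustration.
var hostRoutes = map[string]string{
	"app.example.com": "http://10.3.0.10:8080",
	"api.example.com": "http://10.3.0.20:8080",
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Note: r.Host may include a port; a real router would normalize it.
		backend, ok := hostRoutes[r.Host]
		if !ok {
			http.Error(w, "no route for host", http.StatusNotFound)
			return
		}
		target, err := url.Parse(backend)
		if err != nil {
			http.Error(w, "bad backend", http.StatusInternalServerError)
			return
		}
		// Reverse-proxy the request to the backend selected by hostname.
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	http.ListenAndServe(":80", nil)
}
```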

Also, this presentation seems relevant: https://blog.codeship.com/kernel-load-balancing-for-docker-containers-using-ipvs/

murali-reddy commented 7 years ago

"I wonder what unique features a new ingress controller could bring to Kubernetes?"

I somewhat share your concern. There are a number of existing dedicated ingress controllers available; functionally, we may not add anything extra.

But it's one more moving piece the user has to understand how to deploy and operate. I feel each of the current Kubernetes networking solutions solves a narrow problem: users have to deploy different solutions for pod networking, east-west service load balancing/proxying, and network policies, and for north-south traffic they have to deploy an ingress controller. It's just overcomplicated, in my view.

The reason I added it to the TODO was that if kube-router could do just the essential L7 routing and SSL termination, we might end up with one cohesive solution covering both north-south and east-west traffic requirements.

Please weigh in on the pros and cons so we can see whether it's worth the effort.

bzub commented 7 years ago

I think you're right. Many of the basic features needed to implement an ingress controller are already in kube-router, and it would be great if a user could start out with a cluster that supports ingress through this one core component.

murali-reddy commented 7 years ago

Sorry if it was not clear from my earlier comment. An ingress controller is a fairly involved piece of work, as it requires L7 routing, SSL termination, etc. As you know, IPVS is just an L4 load balancer and is not meant for that, so on its own it is not suited to an ingress controller. We already have ingress controllers built on HAProxy, Nginx, Envoy, Træfik, etc., and there are service meshes like linkerd and istio that can act as ingress as well. I was suggesting that we "weigh in pros and cons to see if it's worth the effort" so that it won't be just another ingress controller, but will actually solve a problem (operational simplicity as an all-in-one solution was one reason I suggested having an ingress controller in kube-router).

bzub commented 7 years ago

True, the L7 aspect is not as simple as I made it seem. I do think that with the help of a mature HTTP library and by integrating kube-lego, the implementation would be less painful.
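For the SSL-termination side, assuming certificates are already provisioned on disk (for example by something like kube-lego keeping a mounted Secret up to date), a minimal sketch of terminating TLS in front of such a router might look like the following; the file paths and handler are placeholders, not an actual kube-router layout:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	// Placeholder paths; in practice these would be mounted from a
	// Kubernetes Secret managed by something like kube-lego.
	cert, err := tls.LoadX509KeyPair("/etc/ingress/tls.crt", "/etc/ingress/tls.key")
	if err != nil {
		log.Fatal(err)
	}

	srv := &http.Server{
		Addr:      ":443",
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The host-based router from the earlier sketch would go here.
			w.Write([]byte("hello over TLS\n"))
		}),
	}
	// Empty cert/key file arguments are allowed because TLSConfig already
	// carries the certificate.
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```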

Still, I think it's worth pursuing, and I will give it a shot. Getting an ingress controller out of the box would be a huge feature, possibly on par with or superior to other controllers in terms of performance. It would also go hand in hand with a global load balancer as depicted in #10.

murali-reddy commented 7 years ago

Yes, indeed it's worth pursuing.

murali-reddy commented 7 years ago

I will take a stab at it and work towards a prototype to see how it fits in.

hwinkel commented 7 years ago

And since we work at L4 with IPVS, an IPVS-based ingress could act as a UDP ingress too, despite the fact that the Ingress API model is very HTTP-centric and has never considered other protocols. The Service API is much richer here.

murali-reddy commented 7 years ago

@hwinkel thanks for your comment. I agree the current Ingress definition is very L7-centric, and though there is a discussion about supporting L4, I doubt it will happen any time soon. We are looking for input on the best way forward w.r.t. ingress in kube-router that will benefit Kubernetes users.

hwinkel commented 7 years ago

That would be good; in the meantime, maybe annotations on the Ingress resource can help. As we are providing network services, namely UDP-based or L3-based packet services, we would like to use something like ingress for L3 or L4 balancing or traffic steering to pods. The steering sometimes needs to be a bit more clever than pure routing or round robin, for example based on source IP or on tunnel IDs carried in the UDP protocols. That's why we are thinking about an L3/L4 or UDP L7 ingress.
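As a purely illustrative sketch (not kube-router code), source-IP-based steering for a UDP service could look like the userspace proxy below. The backend addresses and port are made up, replies are ignored, and a real implementation would rely on IPVS's own source-hashing scheduler rather than proxying in userspace:

```go
package main

import (
	"hash/fnv"
	"log"
	"net"
)

// Backend pods for a hypothetical UDP service; addresses are placeholders.
var backends = []string{"10.2.0.11:5000", "10.2.0.12:5000"}

// pickBackend hashes the client source IP so the same client always lands
// on the same backend, similar to IPVS's source-hashing ("sh") scheduler.
func pickBackend(src net.IP) string {
	h := fnv.New32a()
	h.Write(src)
	return backends[int(h.Sum32())%len(backends)]
}

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 5000})
	if err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 64*1024)
	for {
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		backend := pickBackend(src.IP)
		// Forward the datagram; replies are ignored in this one-way sketch.
		out, err := net.Dial("udp", backend)
		if err != nil {
			continue
		}
		out.Write(buf[:n])
		out.Close()
	}
}
```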

murali-reddy commented 7 years ago

@hwinkel as of now, the Kubernetes Ingress resource would not fit L4/L3 (for example, it lacks port details). I was thinking of adding a custom resource via a CRD that provides a custom ingress resource, and kube-router could then implement an ingress controller for it based on IPVS. I will give it some thought and come up with a proposal.
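To make the idea concrete, a hypothetical set of Go types for such an L4 ingress custom resource might look like the sketch below; the kind, group, and field names are invented for illustration and are not an actual kube-router API:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// L4Ingress is a hypothetical custom resource describing an L4 (TCP/UDP)
// ingress rule; the type and field names are illustrative only.
type L4Ingress struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec L4IngressSpec `json:"spec"`
}

// L4IngressSpec captures the details an L7-centric Ingress lacks: protocol,
// externally exposed port, backend Service, and IPVS scheduling method.
type L4IngressSpec struct {
	// Protocol is "TCP" or "UDP".
	Protocol string `json:"protocol"`
	// Port is the externally exposed port on the ingress VIP.
	Port int32 `json:"port"`
	// Backend names the Service and port IPVS should load balance to.
	Backend L4IngressBackend `json:"backend"`
	// Scheduler selects the IPVS scheduling method, e.g. "rr" or "sh".
	Scheduler string `json:"scheduler,omitempty"`
}

type L4IngressBackend struct {
	ServiceName string `json:"serviceName"`
	ServicePort int32  `json:"servicePort"`
}
```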

@bzub @Thoro if you have any thoughts on L4 ingress please share.

murali-reddy commented 7 years ago

A couple of pointers to where some discussion has already happened:

https://github.com/kubernetes/kubernetes/issues/23291
https://github.com/kubernetes/kubernetes/pull/25821
https://github.com/kubernetes/ingress/tree/master/controllers/nginx#exposing-tcp-services

hwinkel commented 7 years ago

I'm aware of the Kubernetes discussions, and made the point that HTTP is not enough. Further, I assume CRDs will exist if you have different ingress requirements. To me it is unclear at which point you leave the scope of ingress and enter the territory of a specific "load balancer" proxy or frontend application that distributes requests across backend services. But I think there is headroom to extend ingress. There is a good example of a more powerful ingress controller with a lot of options, which I'll add here. (Currently typing on a mobile.)

murali-reddy commented 7 years ago

Sure @hwinkel, please share your thoughts.

hwinkel commented 7 years ago

Hi, back on a normal keyboard. As mentioned above, what I'm thinking is to extend ingress capabilities for a UDP-based ingress with annotations, the same way other more sophisticated ingress controllers like Voyager are doing.

https://github.com/appscode/voyager

Hopefully the stock Ingress resource will soon learn how to deal with some not-so-HTTP-oriented scenarios like L7 UDP protocols and L4 UDP.

Further, I'm interested in discussing how even an L3 (or L2?) model could be mapped to the idea of ingress. This brings me to the idea of SDN for traffic steering, which (from a resource and API perspective) is essentially nothing more than what an ingress is doing.

murali-reddy commented 7 years ago

I am summarizing the discussion we had over Gitter along with some rough thoughts. Taking a holistic look at ingress, and at some of the practices of web-scale companies as described in #88, there are three parts or related issues for ingress; some of them are specific to on-premise clusters.

SEJeff commented 7 years ago

For L7 HTTP routing, may I suggest httprouter, which is fast because it uses a trie for routing (O(1) lookups) instead of a list of regular expressions (O(n) lookup time).

Alternatively, look at chi, which is what Cloudflare / Heroku use for their production URL routing (it also uses a trie).
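For reference, basic httprouter usage looks roughly like the following; the route and handler are placeholders, not kube-router code:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/julienschmidt/httprouter"
)

func main() {
	router := httprouter.New()

	// Routes are matched via a trie/radix-tree lookup rather than a scan
	// over regular expressions. Path and handler are placeholders.
	router.GET("/hello/:name", func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		fmt.Fprintf(w, "hello, %s\n", ps.ByName("name"))
	})

	log.Fatal(http.ListenAndServe(":8080", router))
}
```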

SEJeff commented 7 years ago

Additionally, one feature critical to any serious ingress controller would be integrated OpenTracing support. Several of the more commonly used ingress controllers support this; the nginx ingress just added support for it in the most recent beta release.

zipkin go client libs
jaeger go client libs

Having an instrumented distributed application really helps application owners troubleshoot things. Having this as part of kube-router natively (in addition to the prometheus stats) would be incredible.
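As a hedged sketch of what such instrumentation could look like with the opentracing-go API (the operation name and handler are placeholders; a real setup would register a zipkin or jaeger tracer via opentracing.SetGlobalTracer at startup):

```go
package main

import (
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
)

// traced wraps an http.Handler so every request is recorded as a span on
// the globally registered tracer (a zipkin or jaeger client can provide one).
func traced(operation string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		span := opentracing.GlobalTracer().StartSpan(operation)
		defer span.Finish()

		span.SetTag("http.method", r.Method)
		span.SetTag("http.url", r.URL.Path)

		// Make the span available to downstream handlers via the context.
		next.ServeHTTP(w, r.WithContext(
			opentracing.ContextWithSpan(r.Context(), span)))
	})
}

func main() {
	// No tracer is registered here, so the default no-op tracer is used and
	// the sketch runs standalone; a real ingress would register one first.
	http.Handle("/", traced("ingress.request",
		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok\n"))
		})))
	http.ListenAndServe(":8080", nil)
}
```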

murali-reddy commented 7 years ago

@SEJeff that's nice feedback. I have opened a separate issue, #181.

murali-reddy commented 6 years ago

Given the momentum and the possible hook into istio, it does make sense to integrate Envoy. Closing this issue in favour of #130.