We use Contour as a DaemonSet with host networking. Initially we ran Contour and Envoy split into separate pods (pre-0.14 the xDS connection was insecure, but it ran on tainted, firewalled nodes).
We have since merged Contour back into the Envoy pod to make the xDS connection more stable.
We are using ExternalDNS and thus still rely on Ingress objects instead of IngressRoute objects.
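For readers unfamiliar with that topology, here is a minimal sketch of the merged pod we are describing: Contour and Envoy as containers in one host-networked DaemonSet, talking xDS over localhost. The namespace, image tags, and exact flags are illustrative, not a copy of our manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: contour
  namespace: projectcontour               # illustrative
spec:
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      hostNetwork: true                    # Envoy binds directly to the node's interfaces
      dnsPolicy: ClusterFirstWithHostNet
      initContainers:
        - name: envoy-initconfig
          image: projectcontour/contour:v1.0.0      # illustrative tag
          command: ["contour"]
          args: ["bootstrap", "/config/envoy.json"] # writes Envoy's bootstrap config
          volumeMounts:
            - name: envoy-config
              mountPath: /config
      containers:
        - name: contour                    # the xDS server; same pod, so Envoy reaches it on localhost
          image: projectcontour/contour:v1.0.0
          command: ["contour"]
          args: ["serve", "--incluster"]
        - name: envoy
          image: envoyproxy/envoy:v1.12.2  # illustrative tag
          command: ["envoy"]
          args: ["-c", "/config/envoy.json", "--service-cluster", "cluster0", "--service-node", "node0"]
          ports:
            - containerPort: 80            # exposed on the node because of hostNetwork
            - containerPort: 443
          volumeMounts:
            - name: envoy-config
              mountPath: /config
      volumes:
        - name: envoy-config
          emptyDir: {}
```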
I have been playing with different configurations for Kubernetes clusters to run my company's and my customers' services.
Former setup: GKE, 3 x n1-standard-4 nodes, Istio, cert-manager, Percona cluster (10GB SSD), NFS server (10GB regional SSD) for WordPress customers (not yet deployed). Average 15GB of traffic on the load balancer. Monthly price: £314.50 GBP per Google's pricing calculator.
New setup: GKE, 3 x g1-small preemptible nodes, Estafette preemptible killer, Cilium CNI, Contour proxy, cert-manager, Percona cluster (10GB SSD), NFS server (10GB regional SSD) for WordPress customers (not yet deployed, but everything is in place ready for them). Traffic is expected to be the same for both clusters. Monthly price: £38.87 GBP per Google's pricing calculator.
Neither estimate takes into account how much CPU or RAM the WordPress customers, whom I will start migrating very shortly, will need, but the infrastructure side alone shows a massive price difference.
Cilium is used for network policy, cluster mesh, and BPF performance over iptables.
I will update this comment as all customers and internal microservices are fully migrated.
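As an illustration of the network-policy piece, a CiliumNetworkPolicy along these lines (the namespace and labels are hypothetical) would restrict the WordPress pods to accepting HTTP only from Contour's Envoy pods:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: wordpress-allow-from-envoy   # hypothetical name
  namespace: wordpress               # hypothetical namespace
spec:
  endpointSelector:
    matchLabels:
      app: wordpress                 # hypothetical pod label
  ingress:
    - fromEndpoints:
        - matchLabels:
            # Cilium exposes the source pod's namespace as a reserved k8s label
            k8s:io.kubernetes.pod.namespace: projectcontour
            app: envoy               # hypothetical Envoy pod label
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
```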
This is great news and good savings, @PeopleRange!
Contour replaced our company's production bare-metal NGINX ingress and we could not be happier.
NGINX ingress likes to kill existing connections whenever Ingress resources change, which caused some reliability issues for us because our development, staging, and production pods live in the same cluster for development agility.
As far as we have observed, Contour does not do this. (Or it does, but we haven't noticed 😃)
My company has been running k8s for over 3 years now. We run in AWS GovCloud, so EKS still isn't an option for us, and it certainly wasn't when we started. We've historically used a combination of ELBs and ALBs to ingress traffic to our services. It was expensive and burdensome to maintain: every time a service needed to ingress traffic, we'd have to roll out infrastructure (CloudFormation) changes.
We started exploring Contour late last year and I was able to roll it out and configure it relatively easily. We expose Envoy on a couple of hostPorts fronted by an NLB. Application owners expose their services via HTTPProxy resources. All of our ingress is handled by Contour as of last week and we've started deleting the "legacy" load balancers. As a result, we're expecting to save 5-10% off our AWS bill.
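To give a feel for what application owners write, a basic HTTPProxy looks roughly like this (the hostname, namespace, and service name are hypothetical):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-app                     # hypothetical
  namespace: my-team               # hypothetical
spec:
  virtualhost:
    fqdn: my-app.example.com       # hostname the NLB-fronted Envoy answers for
  routes:
    - conditions:
        - prefix: /
      services:
        - name: my-app             # the team's ClusterIP Service
          port: 80
```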
This is great, @JTarasovic. Thank you for sharing!
Our team is pretty new to Kubernetes and we tried probably 4-5 ingress controllers (Traefik, Gloo, NGINX, Kong, etc.); we are really looking for that "perfect", minimalistic Kubernetes configuration. Contour was by far the easiest to get running and gave us the most control over the deployment. The documentation is consistent and makes it easy for teams to adapt. There are some great videos on YouTube (https://www.youtube.com/watch?v=O7HfkgzD7Z0), and Contour appears to have more functionality than we originally thought we might want in an ingress controller. Things we love about Contour so far:
- Simple, YAML-based Deployment (no *ctl tool required)
- Clear, concise yet powerful CRDs with lots of options
- The ability to namespace our route definitions (although I have not tried this one yet)
- Documentation that actually works and got us up and running quickly
- Based on Envoy, a CNCF project
Cons:
- No cons yet! But if there are any needs in the future we'll make sure to submit issues
Thank you so much for the kind words, @ccravens. Welcome to our community!
Hi, I'm new to k8s, using it for a personal project to learn it along with how a modern web app/backend is built with open source tools -- nothing related to my day job. I've tried the nginx and Istio Gateway ingresses, and read about the feature sets of the others. The key features that seem like the reasons I'll continue with Contour, and that the others lack, are:
Not sure if there are issues for these already, but some things that in my limited use/experience seem lacking are:
Thanks for the comment @kriswuollett, and for sharing your reasons for using Contour.
To address your two points:
Something like this:
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: crossnamespace
  namespace: projectcontour
spec:
  virtualhost:
    fqdn: crossnamespace.youngnick.dev
  includes:
    - name: includedproxy
      namespace: default
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: includedproxy
  namespace: default
spec:
  routes:
    - services:
        - name: httpbin
          port: 80
Hi @youngnick, thanks for getting back to me! I ended up figuring out the cross-namespace setup. Along with the client TLS option, Contour is working fine for me.
The only thing I can think of that I'd be interested in for the future is the existing feature request for rate limiting -- with it, for some use cases I'd drop TLS client verification. Ideally traffic should be stopped at the ingress if possible?
Current workarounds seem to be deploying Envoy sidecars, waiting for the external auth feature to land (into which I could bundle a call to a limiter), or perhaps deploying Linkerd if they add it to their Service Profiles. Istio has deprecated its Mixer policy, and I don't have time to investigate its EnvoyFilters yet.
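For anyone else looking for the client TLS option mentioned above, a sketch of client certificate validation on an HTTPProxy (the host and secret names are hypothetical):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: secure-api                  # hypothetical
  namespace: default
spec:
  virtualhost:
    fqdn: api.example.com           # hypothetical host
    tls:
      secretName: api-server-tls    # the server's certificate/key pair
      clientValidation:
        caSecret: client-root-ca    # CA bundle used to verify client certificates
  routes:
    - services:
        - name: api                 # hypothetical backend Service
          port: 80
```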
Contour makes it very easy to properly expose our Kubernetes services. But more than the technology is the team. I have not worked with an open source project that is so dedicated to ensuring users have all of the functionality they require as well as the support to get it working. Thanks to the Contour team for your work and support!
Thank you @ccravens!
Today I used Contour to launch https://10minutepleroma.com/, which lets users deploy a social media server on a subdomain for 10 minutes by clicking a button.
It was important that I could manage routes as resources (HTTPProxy), since people could provision servers simultaneously and I did not want to deal with the race conditions of rewriting Ingress rules.
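Each provisioned server gets its own HTTPProxy; conceptually it is something like this (the names are hypothetical; the real manifests are generated by the Elixir code linked below):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: tenant-abc123                    # hypothetical, one per provisioned server
  namespace: pleroma                     # hypothetical
spec:
  virtualhost:
    fqdn: abc123.10minutepleroma.com     # the tenant's temporary subdomain
  routes:
    - services:
        - name: pleroma-abc123           # the tenant's Pleroma Service
          port: 4000                     # Pleroma's default HTTP port
```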
Most of the magic happens here: https://gitlab.com/tribes-host/10minutepleroma/-/blob/develop/lib/ten_minute_pleroma/deploy.ex
Thanks for the excellent project!
The Contour project currently lacks enough contributors to adequately respond to all Issues.
This bot triages Issues according to the following rules:
You can:
Please send feedback to the #contour channel in the Kubernetes Slack
This really isn't an issue, but we'd love to hear about your use of Contour, so we thought we'd post this to find out more from you.
Please feel free to leave a comment below and let us know. You can also go a step further and update the adopters file: https://github.com/projectcontour/contour/blob/master/ADOPTERS.md