Closed: whiskeysierra closed this issue 2 years ago.
Hi @whiskeysierra, chiming in as a Consul Kubernetes PM here. Thanks for the feedback; I do think Ingress is an important area to understand and support. We have traditionally relied on partners to help with such integrations, as that helps with supportability in the long term. Do you have an ingress controller in mind that you already use today? Or would you prefer to use the Consul Ingress GW, which provides additional features for validating certs for each hostname?
As for adding features that can validate certs for additional hostnames: we have considered such options for the future, but they are not currently prioritized.
> Do you have an ingress controller in mind that you already use today?
That's what I tried to convey before. I don't believe adding support for one specific ingress controller is a viable approach. Instead, it would benefit kubernetes ingress + consul connect users if there were a solution that works across different ingress controllers. Personally I like HAProxy-based controllers, e.g. https://haproxy-ingress.github.io/ (/cc @jcmoraisjr), simply because HAProxy (and probably nginx, too) is battle-tested and rather rock-solid by now, after years, if not decades, of development.
> Or would you prefer to use the Consul Ingress GW that provides additional features for validating certs for each hostname?
I don't think that Consul Ingress Gateways should try to compete with Ingress Controllers. The list of available ingress controllers is already too big as it stands today: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
Ingress Gateways have their purpose, but I'd expect to use them outside of Kubernetes. Trying to match the features of existing Ingress Controllers (or full-blown API gateways, for that matter) is a game of catch-up that is almost impossible to win. TLS termination with multiple certificates/hosts is just the tip of the iceberg here.
If it were my decision to make, for the time being I'd focus on the following:
The `localhost` ExternalName Service trick (see https://github.com/hashicorp/consul-k8s/issues/21#issuecomment-443823996) should be documented to ease the pain for Connect users running on Kubernetes.
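For context, a minimal sketch of what that trick looks like, based on the linked comment (the service name `my-service` and port `1234` are placeholders, not values from this thread): an ExternalName Service resolves the upstream's DNS name to `localhost`, where the Connect sidecar's upstream listener is bound.

```yaml
# Illustrative sketch of the ExternalName trick from
# https://github.com/hashicorp/consul-k8s/issues/21#issuecomment-443823996.
# "my-service" and port 1234 are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  type: ExternalName
  externalName: localhost   # resolves to the pod-local Connect sidecar
```

The consuming pod would then declare the upstream (e.g. via the consul-k8s annotation `consul.hashicorp.com/connect-service-upstreams: "my-service:1234"`) so the sidecar opens a local listener on that port.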
Afterwards, we can think about how to replace this trick with a cleaner solution. Just to elaborate a bit on the ideas mentioned here and over at #21:
- Use `localhost` instead of `my-service.service.consul`
- ... `my-service.my-namespace.svc.cluster.local`
Maybe someone else has other ideas. Happy to get the brainstorming rolling on this.
@whiskeysierra It does make sense to build an interface layer that makes it easier for other Ingress solutions to integrate. We are working on some functionality in our next release that may help get this started, so I've asked @Blake to chime in here. This will be on our radar, but I can't say it is currently prioritized.
I believe https://github.com/hashicorp/consul-k8s/issues/23 describes the perfect solution. It would remove the need for the external name service trick and would work with any ingress controller.
@whiskeysierra We're targeting transparent proxy support for Consul 1.10. As you mentioned, that feature should allow almost any ingress to be used with Consul service mesh.
It would be great to get your feedback on the transparent proxy feature, and whether it allows you to successfully deploy your ingress. I'll follow up on this thread to notify you once the Consul 1.10 beta release is available.
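As a hedged sketch of how that might look once transparent proxy lands (names are illustrative, and the annotations follow consul-k8s conventions; verify against the 1.10 docs): the backend pods join the mesh with transparent proxy enabled, and the Ingress can then target the ordinary ClusterIP Service directly, with no ExternalName trick required.

```yaml
# Sketch only: assumes Consul 1.10 transparent proxy support in consul-k8s.
# "my-app", its image, and port 8080 are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
      annotations:
        consul.hashicorp.com/connect-inject: "true"      # join the mesh
        consul.hashicorp.com/transparent-proxy: "true"   # redirect traffic via iptables
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder image
          ports:
            - containerPort: 8080
```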
Where can I read more about transparent proxy support? I'm already using consul:1.10.0-alpha.
I think the current state of the ingress gateway is quite redundant, since I have to register it in the Helm release and configure it via CRD. I would prefer to skip the Helm registration, making it fully dynamic, or to be able to use multiple CRDs to configure the same ingress deployment.
Is something like the following possible?
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: serviceA
  namespace: default
spec:
  selector:
    name: main-ingress-gateway
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: serviceA
          hosts: ["a.service.mydomain"]
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: serviceB
  namespace: default
spec:
  selector:
    name: main-ingress-gateway
  listeners:
    - port: 8080
      protocol: http
      services:
        - name: serviceB
          hosts: ["b.service.mydomain"]
```
That would allow me to use any name for the manifest while still targeting the same gateway deployment.
> Where can I read more about transparent proxy support? I'm already using consul:1.10.0-alpha.
The k8s support is incomplete. It will be in our next release along with docs but there's nothing yet.
> I think the current state of ingress gateway is quite redundant.
We totally agree. We do have full CRD support for ingress-gateways (without any Helm) on our backlog but there is no support for it currently. The reason is that currently CRDs only configure Consul and do not spin up any kube resources. All kube resource management is done through the Helm chart. Obviously the UX for this is poor and so we do want to fix this, but just giving you the background for why it is that way currently.
I don't think there's a GitHub issue currently tracking that feature request so you're welcome to create one.
> Is something like following possible?
No, each IngressGateway CRD must configure a single gateway right now.
Thank you @lkysow for your quick reply.
I believe the best option for me will be to ignore the Consul Helm chart's `ingressGateways.gateways` and adapt this deployment in an application-level Helm chart.
Do ingress gateway deployments have to be in the same namespace as the Consul server and controller? I'm planning my custom implementation; if you have any tips on what can go wrong, I'll be grateful.
`namespace: {{ $root.Release.Namespace }}` / `-k8s-namespace={{ $root.Release.Namespace }}`
> Does ingress gateway deployments have to be in the same namespace as consul server and controller?
If you have ACLs or TLS enabled then the ingress gateway will need access to those secrets that are only available in the consul install namespace. You could duplicate those over to your NS manually.
The controller ignores the kube namespace of CRDs (https://www.consul.io/docs/k8s/crds#kubernetes-namespaces) so that can be in any namespace.
@manobi please move further discussion to Discuss or another GitHub issue so as to not pollute this issue with unrelated discussions.
> Does ingress gateway deployments have to be in the same namespace as consul server and controller?

> If you have ACLs or TLS enabled then the ingress gateway will need access to those secrets that are only available in the consul install namespace. You could duplicate those over to your NS manually.
> The controller ignores the kube namespace of CRDs (https://www.consul.io/docs/k8s/crds#kubernetes-namespaces) so that can be in any namespace.
In that case I'm not sure it's worth the effort. I have also tried to use a wildcard ingress gateway, but then it only accepts `*.ingress` hosts.
I will draw a diagram of my nginx ingress architecture, share it, and hope for suggestions.
Hi all, I'm having some issues trying to route ingress traffic to my services using the newly launched transparent proxy.
I think they might be relevant to this thread as I'm attempting to use an AWS Application Load Balancer pointing to my k8s services.
I can't get past the target health check, presumably because the Envoy proxy doesn't like the connection. I tried exposing an HTTP path for the health check, to no avail. I'm having trouble figuring out where things are going south: is it the exposed path, or is using an ALB for ingress into the mesh something I can't accomplish at this time?
Any guidance is appreciated, thanks!
Hi @zachfeldman-koneksahealth, and anyone else listening! As far as the health checks: if you're using the latest consul + consul-k8s releases, these should be automatically mutated for you when using transparent proxy, so you shouldn't have to worry about that; it will also automatically set up the expose paths based on your annotations. We've just updated the consul.io docs to include additional information on setting up ingress controllers with Consul on Kubernetes: https://www.consul.io/docs/k8s/connect/ingress-controllers
Depending on which ingress controller you're using we have a couple examples at the end. I hope that helps, @zachfeldman-koneksahealth if you still cannot get it up and running on the latest releases do feel free to open a new issue so we can try to help! Cheers
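For anyone following along, a hedged sketch of the pod annotations relevant to the health-check behavior described above (annotation names follow consul-k8s conventions; verify them against the linked docs for your version):

```yaml
# Sketch: pod-template annotations relevant to health checks with
# transparent proxy. Verify names/defaults against the consul-k8s docs.
metadata:
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    consul.hashicorp.com/transparent-proxy: "true"
    # Rewrites HTTP liveness/readiness probes so kubelet checks are
    # routed through the Connect sidecar's exposed paths:
    consul.hashicorp.com/transparent-proxy-overwrite-probes: "true"
```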
@kschoche thanks for the docs. Is there a list of compatible ingress controllers? Is Consul compatible with aws-load-balancer-controller?
Hi @zachfeldman-koneksahealth - right now we don't have a set of "supported controllers"; we're working towards that, but I've personally seen Kong, Traefik, and Nginx working. Examples for Traefik/Kong are at the bottom of the doc I linked above, and Nginx is coming soon. I took a look at aws-load-balancer-controller and I'm not totally sure it will work in its current state using the guidance above, as I don't see a way to attach a proxy sidecar to the controller itself. It's worth some investigation though.
yeah @kschoche, I attempted to inject the controller, to no avail. I don't think the integration works with the aws-load-balancer-controller as described in the documentation. I was able to expose an HTTP endpoint on the sidecar to pass the ALB health checks, but the actual traffic returned the understandable error:
```
[source/extensions/transport_sockets/tls/ssl_socket.cc:219] [C1732] TLS error: 268435648:SSL routines:OPENSSL_internal:PEER_DID_NOT_RETURN_A_CERTIFICATE
```
I assume this is happening because the load balancer that is sending traffic to the pod has no certificate signed by the CA consul is using for mTLS.
How can I go about making a feature request / investigation request to see if it's possible to configure consul connect using these aws application load balancers?
Hi @zachfeldman-koneksahealth, yeah, unfortunately the docs only cover the use case of third-party controllers where you're able to inject a sidecar. I think this would make a good feature request. Since this is a little different from the original question, I'd recommend you file a new ticket with as much detail as you can, and we'll prioritize accordingly. Feel free to reference this issue in that ticket as well. Cheers.
thanks @kschoche - I opened #556 in relation to this conversation
Closing, as this issue has been mostly addressed via https://www.consul.io/docs/k8s/connect/ingress-controllers. If you have other requirements you would like us to consider, please open a new issue.
We also now have a dedicated ingress option via Consul API Gateway, which implements the Kubernetes Gateway API spec.
Is your feature request related to a problem? Please describe.
Consul Connect has no clear interoperability with existing Kubernetes Ingress Controllers. (See discussion in https://github.com/hashicorp/consul-k8s/issues/21#issuecomment-769726419)
Feature Description
A cleaner version of https://github.com/hashicorp/consul-k8s/issues/21#issuecomment-443823996 (an ExternalName Service that uses `localhost` to resolve the local Connect proxy instead of the real Kubernetes Service IP).

Possible Ideas:
Use Case(s)
Consul Connect + Kubernetes users may want to choose any of the existing, well-maintained Kubernetes Ingress Controllers, which offer features beyond what Consul Ingress Gateways can (and should?!) offer. Ideally there would be a clean way, or at least clear documentation, on how to successfully operate Consul Connect on Kubernetes and integrate it with any (!) Ingress Controller.
The current approach of trying to get Ingress Controllers to add specific Consul Connect support (e.g. Ambassador and Traefik) doesn't seem very appealing to me. There are many Ingress Controllers out there, so that would take a while. Also, for maintainers of Ingress Controllers, it's not very appealing to add specific support for a single service mesh. Maybe SMI solves this issue in the future; I'm not sure where the SMI spec is going.
Contributions
Are you able to contribute the changes to make this feature work?
Potentially, but I can't promise anything.