MichelJansen91 opened 4 years ago
@MichelJansen91 Helm's not really my expertise, but I agree that it seems weird if you'd have to set up TLS listeners on the brokers just because you set up TLS on the loadbalancer. (The whole point of offloading TLS is to avoid exactly that.)
If you see a solution, feel free to propose it. I'd certainly appreciate that ;)
I'm still figuring out the best solution. I think it is more common to have an Ingress Controller and configure the termination there. That would solve my problem with the Helm chart, since it could just be configured for plain MQTT traffic on port 1883 using its cluster IP, and I think most users with a Kubernetes cluster would use it this way.
The problem however (as also mentioned in the documentation) is that many controllers focus on HTTP (layer 7) traffic instead of TCP. I found a nice overview of Kubernetes Ingress Controllers here.
From that overview I can see that all non-nginx based controllers support TCP+TLS, but I am also trying to figure out how Ingress Controllers would interact with VerneMQ to support client / mutual TLS authentication (as we want to implement that later on).
Although that shifts us a bit from the original topic: could you explain a little more about the client verification by VerneMQ? From what I understand so far, the LoadBalancer / Ingress Controller could do the TLS termination together with the PROXY protocol. Would VerneMQ then do the client verification?
The setting `listener.tcp.proxy_protocol_use_cn_as_username = on` suggests that client certificate authentication happens after the TLS termination, but so far I thought TLS termination and client verification had to happen at the same time.
@MichelJansen91 It's comparable to 2FA. The client comes with a token (= client cert) in hand. This is verified entirely by the Ingress proxy. The proxy forwards the common name to Verne. Verne treats this common name as a username, looks for a password in the MQTT CONNECT payload, and does a secret-based verification (something you have, something you know).
That's most certainly not the correct "lingo", but that's how I explain it to myself ;)
If you want to skip the second step, you actually have to skip VerneMQ authentication completely by setting `allow_anonymous = on` (which is somewhat ugly, because if you forget about that and later start up normal TCP MQTT listeners, they'll be non-authenticated too, so be aware).
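For reference, a minimal sketch of how that scenario could look in `vernemq.conf` (listener address is just a placeholder, adapt as needed):

```
# Plain TCP listener sitting behind the TLS-terminating proxy
listener.tcp.default = 0.0.0.0:1883
# Accept the PROXY protocol header sent by the proxy
listener.tcp.proxy_protocol = on
# Treat the forwarded client cert common name as the MQTT username
listener.tcp.proxy_protocol_use_cn_as_username = on
# Only if you want to skip Verne's own password check entirely:
# allow_anonymous = on
```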
@ioolkos Thanks for the clarification. Presumably the username is also required for some kind of authorization, since authentication and authorization are somewhat intertwined in VerneMQ, right?
I have searched a bit more for mutual / client certificate authentication, but it seems quite hard to find. So far I have not been able to find a TCP proxy that supports client certificate authentication for TCP, yet I can't be the only one who wants to run a VerneMQ cluster with mutual TLS. Are you aware of a proxy that supports that? The documentation suggests using Voyager for TCP TLS termination, but as far as I know it does not support mutual TLS (actually, all proxies I found so far support this only for HTTP traffic).
@MichelJansen91 I can't consult in depth on Voyager.
Using HAProxy as a TLS offloading server (with client cert verification) seems to be possible. You are talking about the TLS offloading scenario, right? As described here:
https://www.haproxy.com/documentation/haproxy/deployment-guides/tls-infrastructure/#ssl-tls-offloading
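A rough sketch of that setup, assuming hypothetical cert paths and a VerneMQ node at 10.0.0.1:1883:

```
frontend mqtts_in
    mode tcp
    # Terminate TLS and require a client cert signed by clients-ca.pem
    bind *:8883 ssl crt /etc/haproxy/certs/server.pem ca-file /etc/haproxy/certs/clients-ca.pem verify required
    default_backend vernemq

backend vernemq
    mode tcp
    # Forward the verified client cert CN to Verne via PROXY protocol v2
    server broker1 10.0.0.1:1883 send-proxy-v2-ssl-cn
```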
If, on the other hand, you want to verify client certs within Verne, you have to set up a TLS endpoint and a couple of configs in the `vernemq.conf` file.
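Roughly along these lines (a sketch; the file paths are placeholders):

```
# TLS listener doing the client cert verification inside VerneMQ
listener.ssl.default = 0.0.0.0:8883
listener.ssl.cafile = /etc/ssl/vernemq/ca.crt
listener.ssl.certfile = /etc/ssl/vernemq/server.crt
listener.ssl.keyfile = /etc/ssl/vernemq/server.key
# Reject clients that don't present a cert signed by the CA above
listener.ssl.require_certificate = on
# Use the certificate's common name as the MQTT username
listener.ssl.use_identity_as_username = on
```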
If you want to extract the CN on the proxy and do further authentication or authorization there with that CN, that's entirely up to what's possible on the proxy. I'm not sure what kinds of ACLs etc. are possible there.
@ioolkos Thanks for your quick reply. In the link you provided I can't find anything about client cert verification. TLS offloading can be done by most proxies, but it seems to me that this only means decrypting the traffic without any verification of the client (or maybe I am misunderstanding the offloading).
If there are no proxies available, I will consider verifying the certificates within VerneMQ as you suggested. But using a proxy would ease the maintenance of certificate renewal etc.
Hi @MichelJansen91, did you get this to work? We are having a similar problem. Please let us know how you solved it.
@kgsakthivelmurugan Eventually I decided to terminate the SSL inside the pods. I am only using the cluster for testing, so there is no load on the pods heavy enough for offloading to make sense yet.
As recommended in the VerneMQ Kubernetes documentation, TLS should preferably not be terminated by the VerneMQ pods. Therefore I wanted to terminate it at a LoadBalancer set up by Kubernetes. AWS recently added support for TLS termination on Network Load Balancers, and I tried to set it up using the vernemq Helm chart. Unfortunately I couldn't get it to work fully, because I don't want to open port 1883 on the load balancer for external traffic.
The following configuration for the Kubernetes Service creates a load balancer successfully, but it also forwards traffic on port 1883 to the VerneMQ cluster target group:
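Roughly like this (a sketch, not my exact manifest; names and the certificate ARN are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  annotations:
    # Use an AWS Network Load Balancer and terminate TLS there
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    # Only the MQTTS port gets TLS on the NLB
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "8883"
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - name: mqtts
      port: 8883
      targetPort: 1883   # TLS is offloaded, so the pods speak plain MQTT
```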
The problem, however, is that the nodes think they need to set up a TLS configuration as well:
```
[error] CRASH REPORT Process <0.630.0> with 1 neighbours crashed with reason: bad argument in call to erlang:binary_to_list(undefined) in ssl_config:file_error/2 line 153
```
I checked the template file `service.yaml` to see what is happening, but it seems there is currently no way to offload TLS to a load balancer without also setting up TLS on the nodes themselves. Could somebody confirm whether this is indeed the case, or am I missing something? I am considering creating a pull request, but I am not sure if that would be appreciated? If so, please let me know!