fabiolb / fabio

Consul Load-Balancing made simple
https://fabiolb.net
MIT License

Propagate FQDN for TLS verification with proto=https #362

Open deuch opened 6 years ago

deuch commented 6 years ago

Hello,

We're using Fabio 1.5.2 and we host many APIs with it.

We have a routing rule to serve APIs from Fabio based on FQDN and API name:

myFQDN.society.com/apiname/version

For example:

analysis.mycompany.com/risk/v1
analysis.mycompany.com/risk/v1.2
analysis.mycompany.com/calculation/v1
computation.mycompany.com/schedule/v1

To avoid generating a certificate for each version or new FQDN, we chose this rule.

To be served by Fabio in full HTTPS (HTTPS to Fabio and HTTPS to the backend), we need to set proto=https so the right backend can be chosen based on the full URL (host plus path and context). SNI doesn't work in our case because the path is not taken into account.
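The routing scheme above maps to fabio route tags on each Consul service registration; a minimal sketch, reusing the example hostnames from this thread (each tag goes on the corresponding service):

```
# fabio builds its routing table from urlprefix- tags on Consul services.
# proto=https makes fabio connect to the backend over HTTPS.
urlprefix-analysis.mycompany.com/risk/v1 proto=https
urlprefix-analysis.mycompany.com/calculation/v1 proto=https
urlprefix-computation.mycompany.com/schedule/v1 proto=https
```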

Our backends are containers connected to the same overlay network as Fabio, so we register the overlay network IP in Consul. It works, but we have to add tlsskipverify=true to make it work: with TLS verification enabled, it fails because the certificate doesn't have the container's IP in its SAN list. And it becomes difficult to regenerate certificates each time we scale up/down or redeploy the service in Docker.

So, is it possible, in a sort of passthrough way, to verify that the backend certificate carries the same FQDN that was used to reach Fabio? In fact we use the same certificate for Fabio and the backend (we cannot use the PKI features of Vault because of security restrictions, but the certificates are stored as secrets).

So the idea is a new parameter telling Fabio to use the source FQDN for TLS verification of the backend, instead of the backend's own IP/name. And not for all routes, but only for those which need this behaviour (a tag option in Consul?).

Maybe it's already possible, but I only saw a global parameter (proxy.tls.header.value ???) and not a per-source one.

Thanks for reading me :)

magiconair commented 6 years ago

Hi @deuch, this is what I understand so far:

internet -> https://foo.com/bar -> fabio -> https://1.2.3.4/bar (Host: foo.com)

The cert is for foo.com but not for 1.2.3.4, and setting the Host header isn't sufficient. I would need to dig, but can't you add 1.0.0.0/8 to the cert?

deuch commented 6 years ago

Hello,

Our Fabio instances are not exposed to the internet, so it's more like this:

Load Balancer (Appliance) https://foo.com/bar --> Fabio (4 instances at least) --> https://1.2.3.4/bar

We cannot use wildcards in our certificates (forbidden by security in a banking context).

Fabio and the services are containers running on the same platform. For each application we deploy a Load Balancer (appliance) and 4 Fabio instances on a dedicated overlay network, and the services (containers too) are connected to this overlay network.

On the same platform we have this setup multiple times, once per application and environment (dev, int, uat, ...). So I cannot use a 10.0.0.0/8 wildcard certificate (it cannot be generated by our PKI) and it would break the multi-tenancy of the platform.

We are using a shared platform that runs containers for many applications.

So to be sure that a service is served by the right Fabio (or that Fabio serves the right service), it would be good to check the CN of the backend certificate rather than its IP. Of course this is not the normal behaviour and must be an option for some use cases like mine. The normal behaviour is to check the CN/SAN of the backend against the backend address registered in Consul (in my case an IP).

With containers, generating a certificate each time a container is created is too heavy an operation and difficult to maintain (certificate revocation would be a nightmare ...). I don't think I'm the only one in this situation :)

magiconair commented 6 years ago

Would the host=dst from #294 help? (I really need to work on the docs)

deuch commented 6 years ago

I'm not sure I understand the behaviour of this: with host=dst, what will the Host header be when connecting to the upstream? The upstream hostname/IP? Or the Host header that came in to Fabio?

magiconair commented 6 years ago

host=dst is for the reverse proxy case. Let me think about this a bit more.

magiconair commented 6 years ago

host=dst will effectively swap the hostname in the upstream request:

 https://a.com/foo -> fabio -> https://b.com/foo 'Host: b.com' (with host=dst)

host=dst will use the Consul service address for both the Host header and the target URL. If your container has a cert for its IP and that cert is trusted by fabio, then this should work. But creating a cert per container is something you can't do right now.
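For reference, these options are set per route via the Consul tag; a sketch using the hostname from the diagram above (tag syntax as in the routing table examples, the option combination is illustrative):

```
# host=dst swaps the Host header to the Consul service address;
# tlsskipverify=true disables backend cert verification (the current workaround)
urlprefix-a.com/foo proto=https host=dst tlsskipverify=true
```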

For HTTPS, it isn't sufficient to set the Host header since it is transmitted only after the TLS handshake. The right server name needs to be in the request URL, i.e. https://a.com/foo (TLS server name: a.com) and not https://1.2.3.4/foo with Host: a.com (TLS server name: 1.2.3.4).

In essence, you want fabio to make the upstream request with the original hostname (e.g. a.com) since the cert in the docker container contains that name.

fabio could either spoof the DNS lookup for that request or try to establish the TCP connection first and then run the TLS handshake with the original server name (which circumvents the DNS lookup).

This would then allow you to re-use the same cert on all upstream servers.

Does that make sense?

deuch commented 6 years ago

Yes, it makes sense. In my use case the TLS server name indeed has to be the original hostname.

I do not know what is best: DNS spoofing, or TCP connection first with the TLS handshake after. Which is the most secure?

deuch commented 6 years ago

Hello, did you have time to try anything for this use case?

magiconair commented 6 years ago

not yet. sorry.