lucaslorentz / caddy-docker-proxy

Caddy as a reverse proxy for Docker
MIT License

Can I use the network name instead of IP range to define controller network? #286

Open heapdavid opened 3 years ago

heapdavid commented 3 years ago

Rather than reserving a specific network range, I think it would be much more straightforward to be able to configure the caddy controller with something like this:

      - CADDY_CONTROLLER_NETWORK_NAME=caddy-controller
...
networks:
    caddy-controller:
        internal: true

Is this possible? It should then be able to figure out the range automatically.

lucaslorentz commented 3 years ago

CADDY_CONTROLLER_NETWORK only needs to be defined when running separate "controller" and "server" processes.

The "controller" part connects to the Docker API, so it could use it to look up the IP ranges for a given Docker network name. We could create a CADDY_CONTROLLER_NETWORK_NAME that works on the controller side.

But the "server" part is meant not to depend on the Docker API, so a Docker network name is not very useful there. It needs the IP ranges in order to allow traffic only from that range. We can't use CADDY_CONTROLLER_NETWORK_NAME here, but we could allow traffic from any IP range if that doesn't concern you.
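The server-side restriction described here boils down to a plain CIDR allow-list check. A minimal sketch (this is an illustration of the idea, not the plugin's actual code; the function name and addresses are made up):

```go
package main

import (
	"fmt"
	"net"
)

// allowedFromController reports whether remoteAddr (an "ip:port" string)
// falls inside the controller network CIDR. This mirrors why the server
// only needs an IP range, not the Docker API, to filter incoming traffic.
func allowedFromController(cidr, remoteAddr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false, err
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return false, fmt.Errorf("invalid IP: %s", host)
	}
	return ipnet.Contains(ip), nil
}

func main() {
	ok, _ := allowedFromController("10.200.200.0/24", "10.200.200.5:51234")
	fmt.Println(ok) // true: inside the controller network
	ok, _ = allowedFromController("10.200.200.0/24", "10.0.1.9:51234")
	fmt.Println(ok) // false: e.g. a request arriving via another network
}
```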

heapdavid commented 3 years ago

Hi,

Thanks for getting back to me - this is for a stack with a separate controller and 3 servers.

The controller is only connected to the caddy-controller network, and therefore isn't using the CADDY_CONTROLLER_NETWORK variable.

The servers are connected to the ingress network and the caddy-controller network.

We have an overall /17 network assigned to Docker, so all connections would come from that range, but when I use that on the servers like this:

 - CADDY_CONTROLLER_NETWORK=172.x.x.0/17

 ...
 networks:
    caddy-controller:
        internal: true

...I get connection refused in the logs.

Does the CIDR range in the CADDY_CONTROLLER_NETWORK variable need to be the same (i.e. a /24) as the network? Or is it just a whitelist that should allow anything in the /17 to connect if specified?

lucaslorentz commented 3 years ago

It doesn't need to be exactly the same. It uses the CIDR to determine which interface caddy should bind to when listening for controller configs: https://github.com/lucaslorentz/caddy-docker-proxy/blob/4893c4dbcfbdfefca0a6aa6510d79a42536e6c8a/plugin/cmd.go#L112

You need to make sure you use a CIDR that covers the caddy-controller network but doesn't cover ingress. Otherwise, it might mistakenly end up binding the config endpoint to the ingress network.
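To illustrate why a too-broad CIDR is a problem: if the bind address is chosen by testing each interface address against the CIDR, a /17 that spans both networks can match the ingress address first. A simplified sketch (the real selection is in `plugin/cmd.go`; the helper and addresses below are hypothetical):

```go
package main

import (
	"fmt"
	"net"
)

// pickBindIP returns the first candidate interface IP contained in cidr,
// roughly how a CIDR can be used to select which interface to listen on.
func pickBindIP(cidr string, candidates []string) (string, bool) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", false
	}
	for _, c := range candidates {
		if ip := net.ParseIP(c); ip != nil && ipnet.Contains(ip) {
			return c, true
		}
	}
	return "", false
}

func main() {
	// Hypothetical interface addresses: ingress first, caddy-controller second.
	ifaces := []string{"172.16.0.7", "172.16.100.7"}

	// A broad /17 covers both networks, so the ingress address matches first:
	ip, _ := pickBindIP("172.16.0.0/17", ifaces)
	fmt.Println(ip) // 172.16.0.7 — the wrong interface

	// A /24 scoped to the controller network selects the right one:
	ip, _ = pickBindIP("172.16.100.0/24", ifaces)
	fmt.Println(ip) // 172.16.100.7
}
```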

lucaslorentz commented 3 years ago

I will try later setting it to "tcp/0.0.0.0:2019". That might make it listen on all interfaces.

heapdavid commented 3 years ago

Thanks for looking into this - as the ingress network is also within the overall /17, we've decided to keep specifying the /24 range for now, to keep configuration access locked down.

chrisbecke commented 1 year ago

hmmm. I have several swarms that I deploy stacks to, with different overlay address pools. It's not clear whether --caddy-controller-network can be omitted, and I don't like declaring overlay networks with explicit CIDRs, as libnetwork is stupid and will allocate an overlapping network if it ever gets to that range.

If not a network name, what about a DNSRR name, e.g. tasks.controller? Rather than via the Docker API, server instances could determine the exact allowed controller IPs via a DNS query.

Assuming that the servers are simply binding to the provided address, I can see that this won't work, but...

lucaslorentz commented 1 year ago

@chrisbecke DNS might work, but we would need to test whether the controller DNS name is reachable from each network interface to determine which one is the right one.

I'm considering making controller networks optional in https://github.com/lucaslorentz/caddy-docker-proxy/pull/428. After that, you would only need to configure networks if you're concerned about containers taking over servers via the ingress network.