tailscale / caddy-tailscale

A highly experimental exploration of integrating Tailscale and Caddy.

Reload fails: "tsnet: listener already open for tailscale, :80" #6

Open · iliana opened 1 year ago

iliana commented 1 year ago

The usual systemd unit for caddy includes an ExecReload= option:

ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile --force

When this is run against a Caddy service that has already connected to Tailscale, the reload fails with this error: `loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80`

```
Dec 26 21:10:05 hydrangea systemd[1]: Reloading Caddy.
Dec 26 21:10:05 hydrangea caddy[2407]: {"level":"info","ts":1672089005.6320755,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6348317,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"40046","headers":{"Accept-Encoding":["gzip"],"Content-Length":["203"],"Content-Type":["application/json"],"Origin":["http://localhost:2019"],"User-Agent":["Go-http-client/1.1"]}}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6360798,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"warn","ts":1672089005.6363716,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.636495,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000354a10"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.63655,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc000354a10"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"error","ts":1672089005.6366794,"logger":"admin.api","msg":"request error","error":"loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80","status_code":400}
Dec 26 21:10:05 hydrangea caddy[2407]: Error: sending configuration to instance: caddy responded with error: HTTP 400: {"error":"loading config: loading new config: http app module: start: listening on tailscale/nitter:80: tsnet: listener already open for tailscale, :80"}
Dec 26 21:10:05 hydrangea caddy[2031]: {"level":"info","ts":1672089005.6377842,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
Dec 26 21:10:05 hydrangea systemd[1]: caddy.service: Control process exited, code=exited, status=1/FAILURE
Dec 26 21:11:35 hydrangea systemd[1]: caddy.service: Reload operation timed out. Killing reload process.
```

Restarting the service works fine, as expected.
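
For context, here is a minimal tsnet sketch of the behaviour behind the error (not the plugin's code; the hostname and port are assumed from the Caddyfile below, and real use also needs auth and state that are omitted here). A `tsnet.Server` refuses a second `Listen` on an address it already holds, which is why a reload fails while a fresh restart, with no listener yet open, works:

```go
// Minimal sketch, not caddy-tailscale's actual code. Hostname "nitter" and
// port :80 are assumed from the Caddyfile below; auth key and state dir
// setup are omitted.
package main

import (
	"log"

	"tailscale.com/tsnet"
)

func main() {
	srv := &tsnet.Server{Hostname: "nitter"}
	defer srv.Close()

	ln, err := srv.Listen("tcp", ":80") // what the plugin does on first load
	if err != nil {
		log.Fatal(err)
	}

	// During a reload the old config still holds the port while the new one
	// binds, so the second Listen fails:
	// "tsnet: listener already open for tcp, :80".
	if _, err := srv.Listen("tcp", ":80"); err != nil {
		log.Printf("reload-time failure reproduced: %v", err)
	}

	// A restart is a new process with no listener held, so a fresh Listen
	// succeeds again.
	ln.Close()
	if ln2, err := srv.Listen("tcp", ":80"); err == nil {
		ln2.Close()
	}
}
```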

Full Caddyfile (most of this is boilerplate from my usual template):

```Caddyfile
{
	admin localhost:2019
	email iliana@buttslol.net
}

(global) {
	encode zstd gzip
	handle_errors {
		respond "{http.error.status_code} {http.error.status_text}"
	}
	header cache-control "public, max-age=0, must-revalidate"
	log {
		output file /var/log/caddy/access.log
	}
}

:80 {
	bind tailscale/nitter
	respond / "butts"
}
```
AstraLuma commented 10 months ago

I suspect this will interfere with plugins like https://github.com/lucaslorentz/caddy-docker-proxy which dynamically generate config.

willnorris commented 4 months ago

I'm not able to reproduce this now, so I'm fairly certain it got fixed in #30, which added proper listener shutdown. Could you try again with the latest version of the plugin, and see if you still run into this behavior?

iliana commented 4 months ago

I don't have anything presently using this plugin, so I can only agree with "likely fixed".

remos commented 3 months ago

I've just run into this myself using caddy-docker-proxy, as AstraLuma mentioned.

Dockerfile

```Dockerfile
ARG CADDY_VERSION=2.8

FROM caddy:${CADDY_VERSION}-builder AS builder

RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with github.com/tailscale/caddy-tailscale

FROM caddy:${CADDY_VERSION}-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]
```
Base Caddyfile (read by caddy-docker-proxy and merged with config from docker labels)

```Caddyfile
{
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	tailscale {
		auth_key {$TS_AUTHKEY}
		state_dir /data/tailscale
	}
}

(internal) {
	abort
}

http://*.example.com:80 https://*.example.com:443 {
	import internal
	bind tailscale/caddy
	bind 127.0.0.1
}
```
Rendered/merged Caddyfile

```Caddyfile
{
	acme_dns cloudflare {$CLOUDFLARE_DNS_API_TOKEN}
	acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
	tailscale {
		auth_key {$TS_AUTHKEY}
		state_dir /data/tailscale
	}
}

(internal) {
	@test {
		host test.example.com
	}
	@test2 {
		host test.example.com
	}
	abort
	route @test {
		reverse_proxy 172.18.0.33:8000
	}
	route @test2 {
		reverse_proxy 172.18.0.34:8000
	}
}

http://*.example.com:80 https://*.example.com:443 {
	import internal
	bind tailscale/caddy 127.0.0.1
}
```
Barebones docker-compose.yml

```yaml
services:
  test:
    container_name: test
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      caddy.@test.host: test.example.com
      caddy.route: "@test"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"

  test2:
    container_name: test2
    image: crccheck/hello-world
    labels:
      caddy: (internal)
      caddy.@test2.host: test2.example.com
      caddy.route: "@test2"
      caddy.route.reverse_proxy: "{{upstreams 8000}}"

  caddy:
    container_name: caddy
    build: ./caddy/
    restart: unless-stopped
    ports:
      - "6080:80"
      - "6443:443"
    environment:
      - CLOUDFLARE_DNS_API_TOKEN=${CLOUDFLARE_DNS_API_TOKEN}
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
      - CADDY_DOCKER_POLLING_INTERVAL=2s
      - CADDY_DOCKER_PROCESS_CADDYFILE=false
      - TS_AUTHKEY=${TS_CADDY_PROXY_AUTHKEY}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./network/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddydata:/data

volumes:
  caddydata:
```
Error log entry

```
caddy | {"level":"error","ts":1720330017.807547,"logger":"admin.api","msg":"request error","error":"loading config: loading new config: http app module: start: listening on tailscale/caddy:443: tsnet: listener already open for tcp, :443","status_code":400}
caddy | {"level":"error","ts":1720330017.8076308,"logger":"docker-proxy","msg":"Error response from server","server":"localhost","status code":400,"body":"{\"error\":\"loading config: loading new config: http app module: start: listening on tailscale/caddy:443: tsnet: listener already open for tcp, :443\"}\n"}
```

After updating the base Caddyfile, it begins to fail. On a fresh restart it works just fine, similar to what iliana mentioned.

I wonder if it could be due to what looks like Caddy starting new apps before it stops the old ones: https://github.com/caddyserver/caddy/blob/4ef360745dab1023a7d4c04aebca3d05499dd5e1/caddy.go#L342-L355
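
For what it's worth, here's a paraphrased sketch of the ordering those lines implement (types and names here are mine, not Caddy's actual source): the new config's apps are started first, and only then are the old ones stopped, so for a moment both generations want the same tsnet address.

```go
// Paraphrased sketch of the linked caddy.go ordering; not the real code.
package main

type app interface {
	Start() error
	Stop() error
}

// changeConfig starts the new apps before stopping the old ones so there is
// no downtime window. Plain TCP sockets survive the overlap because Caddy
// reuses them; a tsnet bind that rejects a duplicate Listen fails in the
// first loop, which is the HTTP 400 seen above.
func changeConfig(oldApps, newApps []app) error {
	for _, a := range newApps {
		if err := a.Start(); err != nil {
			return err // the old config keeps running on failure
		}
	}
	for _, a := range oldApps {
		a.Stop() // old listeners are only released here
	}
	return nil
}

func main() {}
```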

remos commented 3 months ago

So I've tried out a horrible proof of concept: basically just lifting Caddy's listenReusable (the non-unix version) to wrap the Listeners and PacketConns from tsnet, and it seems to resolve the issue.

Though I also haven't checked what knock-on effects that has.
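
If it helps, this is roughly the shape of that idea, sketched with made-up names (`listenShared`, `refListener`) rather than the actual patch, which would presumably hook into Caddy's listener pool instead of a package-level map: keep one shared tsnet listener per address and hand out reference-counted wrappers, so the new config's bind during a reload reuses the listener the old config still holds, and the underlying tsnet listener is only closed once the last wrapper is closed.

```go
// Rough sketch of reusable tsnet listeners; names and structure are
// illustrative, not the proof-of-concept's actual code.
package main

import (
	"net"
	"sync"

	"tailscale.com/tsnet"
)

var (
	mu              sync.Mutex
	sharedListeners = map[string]*refListener{} // key: network + "/" + addr
)

// refListener wraps a shared tsnet listener and only closes it when the
// last reference is released, similar in spirit to what Caddy's
// listenReusable does for ordinary sockets.
type refListener struct {
	net.Listener
	key  string
	refs int
}

func (r *refListener) Close() error {
	mu.Lock()
	defer mu.Unlock()
	r.refs--
	if r.refs > 0 {
		return nil // another config generation still uses it
	}
	delete(sharedListeners, r.key)
	return r.Listener.Close()
}

// listenShared returns the already-open listener for network/addr if there
// is one (the reload case), otherwise it opens a new one on the tsnet server.
func listenShared(srv *tsnet.Server, network, addr string) (net.Listener, error) {
	mu.Lock()
	defer mu.Unlock()
	key := network + "/" + addr
	if ln, ok := sharedListeners[key]; ok {
		ln.refs++
		return ln, nil
	}
	inner, err := srv.Listen(network, addr)
	if err != nil {
		return nil, err
	}
	ln := &refListener{Listener: inner, key: key, refs: 1}
	sharedListeners[key] = ln
	return ln, nil
}

func main() {}
```

The close-on-last-reference behaviour is the property that lets the overlap during `changeConfig` succeed; whether sharing one Accept loop between the outgoing and incoming configs has other side effects is exactly the open question above.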