tailscale / caddy-tailscale

A highly experimental exploration of integrating Tailscale and Caddy.
Apache License 2.0

Failed to connect to local Tailscale daemon - /var/run/tailscale/tailscaled.sock: connect: no such file or directory #68

Closed · kdevan closed this issue 4 months ago

kdevan commented 4 months ago

I feel like I'm missing something obvious here. I know that when I manually install the Tailscale client (as opposed to using the caddy-tailscale module), /var/run/tailscale/tailscaled.sock is the correct path. When I SSH into the server, I see that the /var/run/tailscale directory does not exist, but the Caddy logs seem to show the server connecting to the Tailscale client. Is there a step I'm missing that I should be doing to set this up correctly?

The Tailscale logs when Caddy boots up:

{"level":"info","ts":1718132344.4524896,"logger":"tailscale","msg":"tsnet running state path /data/tailscale/my-node/tailscaled.state"}
{"level":"info","ts":1718132344.453898,"logger":"tailscale","msg":"tsnet starting with hostname \"my-node\", varRoot \"/data/tailscale/my-node\""}
{"level":"info","ts":1718132345.4562378,"logger":"tailscale","msg":"LocalBackend state is NeedsLogin; running StartLoginInteractive..."}
{"level":"info","ts":1718132350.4583023,"logger":"tailscale","msg":"AuthLoop: state is Running; done"}

foo.bar.com has A and AAAA records pointing at the private Tailscale IPs of the device my-node. The error after making a request in the browser to http://foo.bar.com:

{
  "level": "error",
  "ts": 1718132857.5938432,
  "logger": "http.handlers.authentication",
  "msg": "auth provider returned error",
  "provider": "tailscale",
  "error": "Failed to connect to local Tailscale daemon for /localapi/v0/whois; not running? Error: dial unix /var/run/tailscale/tailscaled.sock: connect: no such file or directory"
}
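
For context, the /localapi/v0/whois call in that error is Tailscale's LocalAPI identity lookup, and dialing /var/run/tailscale/tailscaled.sock is how it reaches a system tailscaled. A node started in-process with tsnet has no system daemon, so the same lookup has to go through the tsnet server's own LocalClient. A minimal standalone sketch of that path (not the module's code; the hostname is the one from the logs above, and the ip:port is the peer from the request log below):

package main

import (
	"context"
	"fmt"
	"log"

	"tailscale.com/tsnet"
)

func main() {
	// In-process Tailscale node; no system tailscaled or socket involved.
	srv := &tsnet.Server{Hostname: "my-node"}
	defer srv.Close()

	ln, err := srv.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	// LocalClient talks to this embedded node directly instead of dialing
	// /var/run/tailscale/tailscaled.sock.
	lc, err := srv.LocalClient()
	if err != nil {
		log.Fatal(err)
	}

	// WhoIs is the same lookup as /localapi/v0/whois: it resolves a peer's
	// identity from its remote ip:port.
	who, err := lc.WhoIs(context.Background(), "100.106.231.73:60590")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(who.UserProfile.LoginName)
}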

The request that shows up in the logs right after the error:

{
  "level": "info",
  "ts": 1718150968.8068163,
  "logger": "http.log.access.tailscale",
  "msg": "handled request",
  "request": {
    "remote_ip": "100.106.231.73",
    "remote_port": "60590",
    "client_ip": "100.106.231.73",
    "proto": "HTTP/1.1",
    "method": "GET",
    "host": "foo.bar.com",
    "uri": "/",
    "headers": {
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"
      ],
      "Accept-Encoding": [
        "gzip, deflate"
      ],
      "Accept-Language": [
        "en-US,en;q=0.9"
      ],
      "Connection": [
        "keep-alive"
      ],
      "Dnt": [
        "1"
      ],
      "Upgrade-Insecure-Requests": [
        "1"
      ],
      "User-Agent": [
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36"
      ]
    }
  },
  "bytes_read": 0,
  "user_id": "",
  "duration": 0.001536612,
  "size": 0,
  "status": 401,
  "resp_headers": {
    "Server": [
      "Caddy"
    ]
  }
}

Dockerfile:

RUN /app/caddy/xcaddy build v2.8.4 \
    --with github.com/tailscale/caddy-tailscale \
    --with github.com/caddy-dns/route53 \
    --output /usr/bin/caddy

RUN chmod +x /usr/bin/caddy

CMD ["caddy", "run", "--config", "/config/caddy/caddy.json"]

Caddyfile:

{
  debug
  auto_https off
  tailscale {
    ephemeral
    auth_key "{env.TS_AUTHKEY}"
    state_dir "/data/tailscale"
  }
}

foo.bar.com:80 {
  bind tailscale/my-node
  tailscale_auth

  templates
  respond `Hello, {{placeholder "http.auth.user.id"}}`
}

Caddy JSON config:

{
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"
      }
    }
  },
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [
            "tailscale/my-node:80"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "foo.bar.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "authentication",
                          "providers": {
                            "tailscale": {}
                          }
                        },
                        {
                          "handler": "templates"
                        },
                        {
                          "body": "Hello, {placeholder \"http.auth.user.id\"}}",
                          "handler": "static_response"
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ],
          "automatic_https": {
            "disable": true
          }
        }
      }
    },
    "tailscale": {
      "auth_key": "{env.TS_AUTHKEY}",
      "ephemeral": true,
      "state_dir": "/data/tailscale"
    }
  }
}
kdevan commented 4 months ago

Ah, I see the relevant part here. I need to figure out why it's defaulting to the local client when the request is being made through a tsnet listener. Going to experiment a bit.
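
A rough sketch of the selection logic being described, assuming the app keeps some registry of configured tsnet nodes (the map and function names here are illustrative, not caddy-tailscale's actual API): prefer the in-process node's client, and only fall back to the system daemon's socket when no node matches.

// Hypothetical helper, not the module's code: pick a LocalClient for
// whois lookups, preferring the in-process tsnet node.
package tsauthsketch

import (
	"tailscale.com/client/tailscale"
	"tailscale.com/tsnet"
)

// nodes stands in for whatever registry maps a configured node name
// (e.g. "my-node") to its running tsnet server.
func localClientFor(nodes map[string]*tsnet.Server, name string) (*tailscale.LocalClient, error) {
	if srv, ok := nodes[name]; ok {
		// In-process node: whois goes through tsnet, no socket needed.
		return srv.LocalClient()
	}
	// Fallback: a zero-value LocalClient dials the system tailscaled
	// (on Linux, /var/run/tailscale/tailscaled.sock), which is exactly
	// the dial that fails when no local client is installed.
	return &tailscale.LocalClient{}, nil
}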

kdevan commented 4 months ago

Updating with some debugging.

I found that when I log the server variable, the listener shows up:

{
    "listen": [
        "tailscale/my-node:80"
    ],
    "idle_timeout": 300000000000,
    "routes": [
        {
        "terminal": true
        }
    ],
    "automatic_https": {
        "disable": true
    },
    "client_ip_headers": [
        "X-Forwarded-For"
    ],
    "protocols": [
        "h1",
        "h2",
        "h3"
    ]
}

And logging the result of the server.Listeners() function returns the listener, though it shows up as an empty object:

[
    {
        "Listener": {}
    }
]
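
That empty object is most likely a JSON serialization artifact rather than a missing listener: net.Listener implementations carry no exported fields, so logging one through encoding/json prints {} even when the value is a live listener. A quick illustration, with a plain TCP listener standing in for the tsnet one:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

func main() {
	// Any listener will do; a loopback TCP listener stands in for the
	// listener returned by server.Listeners().
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	// The concrete listener type has only unexported fields, so JSON
	// marshaling renders it as an empty object.
	b, _ := json.Marshal([]struct{ Listener net.Listener }{{ln}})
	fmt.Println(string(b)) // [{"Listener":{}}]
}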

Edit: I see that when I add a second listener, server.Listeners() does return two Listener objects, so it is returning the listeners. For now I've installed a local client, and with that the fallback is working. I'm really curious why I can't get the client directly from the tsnet Server, though.