saeidscorp closed this issue 1 year ago
I'd be surprised if there is an interaction with containerd
in this first step of connecting, though thank you for mentioning it.
Are you running Docker Desktop on Windows, macOS, Linux, or something else? Can I ask what OS version as well?
Thanks for the quick response.
Yeah sure:
Windows 11 Home Edition - Version 22H2 (OS Build 22621.819) (meaning I only have the WSL 2 backend for Docker)
Having the same problem. Can't use the Tailscale extension after reinstalling Windows (11 Pro).
@mtsbh are you also using Docker with WSL2?
I've just tested the latest version of Docker and the Tailscale extension on macOS without issue, so I suspect I'll need to spin up a VM to track this down, and want to make sure I'm properly replicating the environment where we're seeing this.
Yes. I'm using the latest version of Docker Desktop on Windows 11 Pro (Version 22H2, OS Build 22623.1020) with WSL2, and I don't have any Windows containers running, only Docker Desktop with Ubuntu on WSL 2. Not being able to log in to the Tailscale extension is stopping me from accessing the Docker servers from Android.
I'm having the same problem: Windows 11 Pro (latest, fresh install), latest Tailscale, latest Docker Desktop, containerd feature not enabled. Can't log in with the browser or other-device options.
This is the Docker extension container's log output:
2022-12-21 21:14:47 2022/12/21 19:14:47 logtail started
2022-12-21 21:14:47 2022/12/21 19:14:47 Program starting: v1.34.0-tbb6e746f3-g8d1edab6f, Go 1.19.2-ts3fd24dee31: []string{"/app/tailscaled", "--state=/var/lib/tailscale/tailscaled.state", "--tun=userspace-networking"}
2022-12-21 21:14:47 2022/12/21 19:14:47 LogID: 4ce9651daa3e0e2e063779d4b7186ee8ed2233198e579bcb9b0d65a4b7249fdd
2022-12-21 21:14:47 2022/12/21 19:14:47 logpolicy: using system state directory "/var/lib/tailscale"
2022-12-21 21:14:47 2022/12/21 19:14:47 wgengine.NewUserspaceEngine(tun "userspace-networking") ...
2022-12-21 21:14:47 2022/12/21 19:14:47 dns: using dns.noopManager
2022-12-21 21:14:47 2022/12/21 19:14:47 link state: interfaces.State{defaultRoute=eth0 ifs={br-f1b747c8cb5c:[172.18.0.1/16] docker0:[172.17.0.1/16] eth0:[192.168.65.3/25] services1:[192.168.65.4/32]} v4=true v6=false}
2022-12-21 21:14:47 2022/12/21 19:14:47 magicsock: disco key = d:2c747cd1fb704b23
2022-12-21 21:14:47 2022/12/21 19:14:47 Creating WireGuard device...
2022-12-21 21:14:47 2022/12/21 19:14:47 Bringing WireGuard device up...
2022-12-21 21:14:47 2022/12/21 19:14:47 Bringing router up...
2022-12-21 21:14:47 2022/12/21 19:14:47 Clearing router settings...
2022-12-21 21:14:47 2022/12/21 19:14:47 Starting link monitor...
2022-12-21 21:14:47 2022/12/21 19:14:47 Engine created.
2022-12-21 21:14:47 2022/12/21 19:14:47 pm: migrating "_daemon" profile to new format
2022-12-21 21:14:47 2022/12/21 19:14:47 got LocalBackend in 5ms
2022-12-21 21:14:47 2022/12/21 19:14:47 Start
2022-12-21 21:14:47 2022/12/21 19:14:47 Backend: logs: be:4ce9651daa3e0e2e063779d4b7186ee8ed2233198e579bcb9b0d65a4b7249fdd fe:
2022-12-21 21:14:47 2022/12/21 19:14:47 Switching ipn state NoState -> NeedsLogin (WantRunning=false, nm=false)
2022-12-21 21:14:47 2022/12/21 19:14:47 blockEngineUpdates(true)
2022-12-21 21:14:47 2022/12/21 19:14:47 health("overall"): error: state=NeedsLogin, wantRunning=false
2022-12-21 21:14:47 2022/12/21 19:14:47 wgengine: Reconfig: configuring userspace WireGuard config (with 0/0 peers)
2022-12-21 21:14:47 2022/12/21 19:14:47 wgengine: Reconfig: configuring router
2022-12-21 21:14:47 2022/12/21 19:14:47 wgengine: Reconfig: configuring DNS
2022-12-21 21:14:47 2022/12/21 19:14:47 dns: Set: {DefaultResolvers:[] Routes:{} SearchDomains:[] Hosts:0}
2022-12-21 21:14:47 2022/12/21 19:14:47 dns: Resolvercfg: {Routes:{} Hosts:0 LocalDomains:[]}
2022-12-21 21:14:47 2022/12/21 19:14:47 dns: OScfg: {Nameservers:[] SearchDomains:[] MatchDomains:[] Hosts:[]}
2022-12-21 21:14:57 2022/12/21 19:14:57 Start
2022-12-21 21:14:57 2022/12/21 19:14:57 control: client.Shutdown()
2022-12-21 21:14:57 2022/12/21 19:14:57 control: client.Shutdown: inSendStatus=0
2022-12-21 21:14:57 2022/12/21 19:14:57 Backend: logs: be:4ce9651daa3e0e2e063779d4b7186ee8ed2233198e579bcb9b0d65a4b7249fdd fe:
2022-12-21 21:14:57 2022/12/21 19:14:57 control: mapRoutine: quit
2022-12-21 21:14:57 2022/12/21 19:14:57 control: Client.Shutdown done.
2022-12-21 21:14:57 2022/12/21 19:14:57 Switching ipn state NoState -> NeedsLogin (WantRunning=true, nm=false)
2022-12-21 21:14:57 2022/12/21 19:14:57 blockEngineUpdates(true)
2022-12-21 21:14:57 2022/12/21 19:14:57 StartLoginInteractive: url=false
2022-12-21 21:14:57 2022/12/21 19:14:57 control: client.Login(false, 2)
2022-12-21 21:14:57 2022/12/21 19:14:57 control: LoginInteractive -> regen=true
2022-12-21 21:14:57 2022/12/21 19:14:57 control: doLogin(regen=true, hasUrl=false)
2022-12-21 21:14:57 2022/12/21 19:14:57 control: control server key from https://controlplane.tailscale.com: ts2021=[fSeS+], legacy=[nlFWp]
2022-12-21 21:14:57 2022/12/21 19:14:57 control: Generating a new nodekey.
2022-12-21 21:14:57 2022/12/21 19:14:57 control: RegisterReq: onode= node=[KnwPp] fup=false nks=false
2022-12-21 21:14:57 2022/12/21 19:14:57 control: creating new noise client
2022-12-21 21:15:37 2022/12/21 19:15:37 Start
2022-12-21 21:15:37 2022/12/21 19:15:37 Backend: logs: be:4ce9651daa3e0e2e063779d4b7186ee8ed2233198e579bcb9b0d65a4b7249fdd fe:
2022-12-21 21:15:37 2022/12/21 19:15:37 control: client.Shutdown()
2022-12-21 21:15:37 2022/12/21 19:15:37 control: client.Shutdown: inSendStatus=0
2022-12-21 21:15:37 2022/12/21 19:15:37 Switching ipn state NoState -> NeedsLogin (WantRunning=true, nm=false)
2022-12-21 21:15:37 2022/12/21 19:15:37 blockEngineUpdates(true)
2022-12-21 21:15:37 2022/12/21 19:15:37 control: mapRoutine: quit
2022-12-21 21:15:37 2022/12/21 19:15:37 control: Client.Shutdown done.
2022-12-21 21:15:37 2022/12/21 19:15:37 StartLoginInteractive: url=false
2022-12-21 21:15:37 2022/12/21 19:15:37 control: client.Login(false, 2)
2022-12-21 21:15:37 2022/12/21 19:15:37 control: LoginInteractive -> regen=true
2022-12-21 21:15:37 2022/12/21 19:15:37 control: doLogin(regen=true, hasUrl=false)
2022-12-21 21:15:37 2022/12/21 19:15:37 control: control server key from https://controlplane.tailscale.com: ts2021=[fSeS+], legacy=[nlFWp]
2022-12-21 21:15:37 2022/12/21 19:15:37 control: Generating a new nodekey.
2022-12-21 21:15:37 2022/12/21 19:15:37 control: RegisterReq: onode= node=[dViHE] fup=false nks=false
2022-12-21 21:15:37 2022/12/21 19:15:37 control: creating new noise client
2022-12-21 21:17:16 2022/12/21 19:17:16 Start
2022-12-21 21:17:16 2022/12/21 19:17:16 control: client.Shutdown()
2022-12-21 21:17:16 2022/12/21 19:17:16 control: client.Shutdown: inSendStatus=0
2022-12-21 21:17:16 2022/12/21 19:17:16 Backend: logs: be:4ce9651daa3e0e2e063779d4b7186ee8ed2233198e579bcb9b0d65a4b7249fdd fe:
2022-12-21 21:17:16 2022/12/21 19:17:16 Switching ipn state NoState -> NeedsLogin (WantRunning=true, nm=false)
2022-12-21 21:17:16 2022/12/21 19:17:16 blockEngineUpdates(true)
2022-12-21 21:17:16 2022/12/21 19:17:16 control: mapRoutine: quit
2022-12-21 21:17:16 2022/12/21 19:17:16 control: Client.Shutdown done.
2022-12-21 21:17:16 2022/12/21 19:17:16 StartLoginInteractive: url=false
2022-12-21 21:17:16 2022/12/21 19:17:16 control: client.Login(false, 2)
2022-12-21 21:17:16 2022/12/21 19:17:16 control: LoginInteractive -> regen=true
2022-12-21 21:17:16 2022/12/21 19:17:16 control: doLogin(regen=true, hasUrl=false)
2022-12-21 21:17:19 2022/12/21 19:17:19 control: control server key from https://controlplane.tailscale.com: ts2021=[fSeS+], legacy=[nlFWp]
2022-12-21 21:17:19 2022/12/21 19:17:19 control: Generating a new nodekey.
2022-12-21 21:17:19 2022/12/21 19:17:19 control: RegisterReq: onode= node=[MjReB] fup=false nks=false
2022-12-21 21:17:19 2022/12/21 19:17:19 control: creating new noise client
The last part (at 19:17) is from clicking "log in from another device"; before that I clicked "log in with browser".
Maybe it doesn't know the auth URL (https://github.com/tailscale/docker-extension/blob/main/ui/src/tailscale.ts#L426)?
I'm inclined to build it locally and see exactly how the React code behaves, but maybe someone knows faster. Two apparently different methods misbehave (handleLogInQRClick and handleLogInClick): https://github.com/tailscale/docker-extension/blob/main/ui/src/views/onboarding-view.tsx#L35
I've enabled debugging for the extension: Chrome DevTools pops up, but nothing appears in the console when clicking the buttons fails to trigger the expected login action.
I'm having the same issue; the Tailscale Docker container doesn't work for me either. My log looks the same as @manzu's. It also hangs at control: creating new noise client.
System info: Docker Desktop for Windows v4.15.0 (WSL2), Docker Engine v20.10.21; Windows 11 Pro, Version 22H2, OS build 22621.1105, Windows Feature Experience Pack 1000.22638.1000.0; Intel(R) Core(TM) i5-8500T CPU @ 2.10GHz
Confirming this issue. Running Docker Desktop 4.16.1 (95567) on Windows 11 (Version 22H2, OS Build 22623, latest updates) with WSL2/Ubuntu.
The login buttons on the Tailscale extension do not open the browser/QR code to enable login.
Using Docker Desktop 4.16.1 with WSL2 as its backend, and clicking on the button to log in, we see the tailscale up process running:
root 47 0.0 0.6 720876 12004 ? Sl 15:24 0:00 /app/tailscale up --hostname=dgentry-win11-docker-desktop --accept-dns=false --json --reset --force-reauth
There has been no output:
# ls -l /tmp/background-output
-rw-r--r-- 1 root root 0 Jan 16 15:24 /tmp/background-output
If I run tailscale up in a shell, it also hangs with no output. /app/tailscale does seem to work overall though: it is able to print its own status and to communicate with tailscaled, which reports that the user is not logged in:
# /app/tailscale up --hostname=dgentry-win11-docker-desktop --accept-dns=false --json
^C
# /app/tailscale version
1.34.0
tailscale commit: bb6e746f335b45c0b103345c88a8ba9e0fba0e1a
other commit: 8d1edab6f4eb0c28a9e64f7919b7e19a24beaa57
go version: go1.19.2-ts3fd24dee31
# /app/tailscale status
Logged out.
# ls -l /var/run/tailscale
total 0
srw-rw-rw- 1 root root 0 Jan 16 15:24 tailscaled.sock
#
Running tailscale up does prompt tailscaled to start a login attempt. The first exchange with the coordination server works; it is able to retrieve the control server key. Then it just stops after creating the noise client.
2023-01-16 15:49:40.4355912 +0000 UTC: Switching ipn state NoState -> NeedsLogin (WantRunning=true, nm=false)
2023-01-16 15:49:40.4361103 +0000 UTC: blockEngineUpdates(true)
2023-01-16 15:49:40.4386546 +0000 UTC: wgengine: Reconfig done
2023-01-16 15:49:40.4427612 +0000 UTC: control: Client.Shutdown done.
2023-01-16 15:49:40.4564441 +0000 UTC: StartLoginInteractive: url=false
2023-01-16 15:49:40.4575104 +0000 UTC: control: client.Login(false, 2)
2023-01-16 15:49:40.4588782 +0000 UTC: control: authRoutine: context done.
2023-01-16 15:49:40.4592492 +0000 UTC: control: authRoutine: state:new; wantLoggedIn=true
2023-01-16 15:49:40.4599385 +0000 UTC: control: direct.TryLogin(token=false, flags=2)
2023-01-16 15:49:40.4618396 +0000 UTC: control: LoginInteractive -> regen=true
2023-01-16 15:49:40.4629941 +0000 UTC: control: doLogin(regen=true, hasUrl=false)
2023-01-16 15:49:41.2148872 +0000 UTC: control: control server key from https://controlplane.tailscale.com: ts2021=[fSeS+], legacy=[nlFWp]
2023-01-16 15:49:41.2162233 +0000 UTC: control: Generating a new nodekey.
2023-01-16 15:49:41.220452 +0000 UTC: control: RegisterReq: onode= node=[gkQtq] fup=false nks=false
2023-01-16 15:49:41.2288314 +0000 UTC: control: creating new noise client
Even after tailscaled seems to hang, the container can connect to controlplane.tailscale.com:
# openssl s_client -connect controlplane.tailscale.com:443
CONNECTED(00000003)
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = controlplane.tailscale.com
Starting a second tailscaled in userspace-networking mode behaves the same way:
# /app/tailscaled --state=/tmp/tailscaled2.state --socket=/tmp/tailscale1.sock --tun=userspace-networking
...
Switching ipn state NoState -> NeedsLogin (WantRunning=true, nm=false)
blockEngineUpdates(true)
StartLoginInteractive: url=false
control: client.Login(false, 2)
control: LoginInteractive -> regen=true
control: doLogin(regen=true, hasUrl=false)
control: control server key from https://controlplane.tailscale.com: ts2021=[fSeS+], legacy=[nlFWp]
control: Generating a new nodekey.
control: RegisterReq: onode= node=[VRDTD] fup=false nks=false
control: creating new noise client
(In another shell I ran: # /app/tailscale --socket=/tmp/tailscale1.sock up)
We would expect tailscaled to proceed like so:
control: RegisterReq: onode= node=[NdUVX] fup=false nks=false
control: creating new noise client
control: RegisterReq: got response; nodeKeyExpired=false, machineAuthorized=false; authURL=true
control: AuthURL is https://login.tailscale.com/a/0123456789abcdef
Looking into the connection to the control plane, /app/tailscale debug ts2021 does succeed. IPv6 is broken on the Windows system I'm running this on, but IPv4 works.
# /app/tailscale debug ts2021
17:46:35.857194 Fetching keys from https://controlplane.tailscale.com/key?v=51 ...
17:46:45.754199 Dial("tcp", "[2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:80") ...
17:46:45.765147 Dial("tcp", "[2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:80") = dial tcp [2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:80: connect: network is unreachable
17:46:45.779545 Dial("tcp", "52.29.8.43:80") ...
17:46:45.836495 Dial("tcp", "52.29.8.43:80") = 192.168.65.4:48012 / 52.29.8.43:80
17:46:46.071881 Dial("tcp", "[2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:443") ...
17:46:46.085325 Dial("tcp", "[2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:443") = dial tcp [2a05:d014:386:202:a3e1:5be6:cbb1:1ac9]:443: connect: network is unreachable
17:46:46.088986 Dial("tcp", "52.29.8.43:443") ...
17:46:46.181944 Dial("tcp", "52.29.8.43:443") = 192.168.65.4:37566 / 52.29.8.43:443
17:46:46.241469 controlhttp.Dial = 0xc00020e010, <nil>
17:46:46.253457 did noise handshake
17:46:46.254847 final underlying conn: 192.168.65.4:48012 / 52.29.8.43:80
Here is /app/tailscale debug daemon-goroutines after issuing /app/tailscale up, when the daemon seems to get stuck and make no progress:
# /app/tailscale debug daemon-goroutines
goroutine 925 [running]:
tailscale.com/ipn/localapi.(*Handler).serveGoroutines(0xc0000a6a54?, {0xfbdfb0, 0xc000650620}, 0x16?)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:413 +0x65
tailscale.com/ipn/localapi.(*Handler).ServeHTTP(0xc000096f80, {0xfbdfb0, 0xc000650620}, 0xc00021a900)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:168 +0x472
tailscale.com/ipn/ipnserver.(*Server).serveHTTP(0xc000214000, {0xfbdfb0?, 0xc000650620}, 0xc00021a900)
tailscale.com@v1.34.0/ipn/ipnserver/server.go:203 +0x5f2
net/http.HandlerFunc.ServeHTTP(0x0?, {0xfbdfb0?, 0xc000650620?}, 0x46972e?)
net/http/server.go:2109 +0x2f
net/http.serverHandler.ServeHTTP({0xc00031d890?}, {0xfbdfb0, 0xc000650620}, 0xc00021a900)
net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc00022a6e0, {0xfbf020, 0xc00031d830})
net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
net/http/server.go:3102 +0x4db
goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7fe83c43aff0, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0000d0180?, 0x0?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0000d0180)
internal/poll/fd_unix.go:614 +0x234
net.(*netFD).accept(0xc0000d0180)
net/fd_unix.go:172 +0x35
net.(*UnixListener).accept(0x46972e?)
net/unixsock_posix.go:166 +0x1c
net.(*UnixListener).Accept(0xc00009c150)
net/unixsock.go:260 +0x3d
net/http.(*Server).Serve(0xc000216000, {0xfbde00, 0xc00009c150})
net/http/server.go:3070 +0x385
tailscale.com/ipn/ipnserver.(*Server).Run(0xc000214000, {0xfbef78?, 0xc000096240}, {0xfbde00?, 0xc00009c150})
tailscale.com@v1.34.0/ipn/ipnserver/server.go:515 +0x3f6
main.startIPNServer({0xfbefb0, 0xc00003a030}, 0xc0000a40a0, {0xc0000ae180, 0x40})
tailscale.com@v1.34.0/cmd/tailscaled/tailscaled.go:439 +0x448
main.run()
tailscale.com@v1.34.0/cmd/tailscaled/tailscaled.go:382 +0x2ac
main.main()
tailscale.com@v1.34.0/cmd/tailscaled/tailscaled.go:235 +0x92a
goroutine 6 [select]:
tailscale.com/logtail.(*Logger).drainBlock(...)
tailscale.com@v1.34.0/logtail/logtail.go:270
tailscale.com/logtail.(*Logger).drainPending(0xc000170700, {0xc0000c8000?, 0xc0001ea440?, 0x0?})
tailscale.com@v1.34.0/logtail/logtail.go:300 +0x1b3
tailscale.com/logtail.(*Logger).uploading(0xc000170700, {0xfbef78, 0xc0001ea440})
tailscale.com@v1.34.0/logtail/logtail.go:344 +0xcf
created by tailscale.com/logtail.NewLogger
tailscale.com@v1.34.0/logtail/logtail.go:166 +0x9d3
goroutine 33 [syscall, 14 minutes]:
os/signal.signal_recv()
runtime/sigqueue.go:152 +0x2f
os/signal.loop()
os/signal/signal_unix.go:23 +0x19
created by os/signal.Notify.func1.1
os/signal/signal.go:151 +0x2a
goroutine 18 [select, 14 minutes]:
main.startIPNServer.func1()
tailscale.com@v1.34.0/cmd/tailscaled/tailscaled.go:402 +0xa5
created by main.startIPNServer
tailscale.com@v1.34.0/cmd/tailscaled/tailscaled.go:401 +0x28a
goroutine 20 [select, 14 minutes]:
tailscale.com/ipn/ipnserver.(*Server).Run.func2()
tailscale.com@v1.34.0/ipn/ipnserver/server.go:485 +0x7b
created by tailscale.com/ipn/ipnserver.(*Server).Run
tailscale.com@v1.34.0/ipn/ipnserver/server.go:484 +0x17b
goroutine 49 [chan receive, 14 minutes]:
tailscale.com/net/tstun.(*fakeTUN).Read(0x0?, {0xfbef78?, 0x0?, 0x582679f9?}, 0xc0e9849eb72e67c4?)
tailscale.com@v1.34.0/net/tstun/fake.go:38 +0x1f
tailscale.com/net/tstun.(*Wrapper).poll(0xc00036e000)
tailscale.com@v1.34.0/net/tstun/wrap.go:396 +0x1fd
created by tailscale.com/net/tstun.wrap
tailscale.com@v1.34.0/net/tstun/wrap.go:217 +0x285
goroutine 50 [select, 14 minutes]:
tailscale.com/net/tstun.(*Wrapper).pumpEvents(0xc00036e000)
tailscale.com@v1.34.0/net/tstun/wrap.go:295 +0x11a
created by tailscale.com/net/tstun.wrap
tailscale.com@v1.34.0/net/tstun/wrap.go:218 +0x2c5
goroutine 37 [IO wait, 14 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43ac30, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0000d0580?, 0xc0002f6600?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).ReadFrom(0xc0000d0580, {0xc0002f6600, 0x5dc, 0x5dc})
internal/poll/fd_unix.go:223 +0x245
net.(*netFD).readFrom(0xc0000d0580, {0xc0002f6600?, 0x40d03f?, 0x0?})
net/fd_posix.go:61 +0x29
net.(*IPConn).readFrom(0xc0002f6600?, {0xc0002f6600, 0x5dc, 0x5dc})
net/iprawsock_posix.go:49 +0x32
net.(*IPConn).ReadFrom(0xc00009e060, {0xc0002f6600?, 0x0?, 0x0?})
net/iprawsock.go:129 +0x31
tailscale.com/wgengine/magicsock.(*Conn).receiveDisco(0xc000226000, {0xfc1d10, 0xc00009e060}, 0x0)
tailscale.com@v1.34.0/wgengine/magicsock/magicsock_linux.go:214 +0x85
created by tailscale.com/wgengine/magicsock.(*Conn).listenRawDisco
tailscale.com@v1.34.0/wgengine/magicsock/magicsock_linux.go:207 +0xa18
goroutine 38 [IO wait, 14 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43a960, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc00016e000?, 0xc0002f7200?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).ReadFrom(0xc00016e000, {0xc0002f7200, 0x5dc, 0x5dc})
internal/poll/fd_unix.go:223 +0x245
net.(*netFD).readFrom(0xc00016e000, {0xc0002f7200?, 0x40d03f?, 0x0?})
net/fd_posix.go:61 +0x29
net.(*IPConn).readFrom(0xc0002f7200?, {0xc0002f7200, 0x5dc, 0x5dc})
net/iprawsock_posix.go:49 +0x32
net.(*IPConn).ReadFrom(0xc000012008, {0xc0002f7200?, 0x0?, 0x0?})
net/iprawsock.go:129 +0x31
tailscale.com/wgengine/magicsock.(*Conn).receiveDisco(0xc000226000, {0xfc1d10, 0xc000012008}, 0x1)
tailscale.com@v1.34.0/wgengine/magicsock/magicsock_linux.go:214 +0x85
created by tailscale.com/wgengine/magicsock.(*Conn).listenRawDisco
tailscale.com@v1.34.0/wgengine/magicsock/magicsock_linux.go:207 +0xa18
goroutine 39 [select, 14 minutes]:
golang.zx2c4.com/wireguard/ratelimiter.(*Ratelimiter).Init.func1()
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/ratelimiter/ratelimiter.go:68 +0x8e
created by golang.zx2c4.com/wireguard/ratelimiter.(*Ratelimiter).Init
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/ratelimiter/ratelimiter.go:64 +0x16a
goroutine 40 [semacquire, 14 minutes]:
sync.runtime_Semacquire(0x0?)
runtime/sema.go:62 +0x25
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:139 +0x52
golang.zx2c4.com/wireguard/device.newHandshakeQueue.func1()
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:68 +0x2b
created by golang.zx2c4.com/wireguard/device.newHandshakeQueue
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:67 +0xad
goroutine 41 [semacquire, 14 minutes]:
sync.runtime_Semacquire(0x0?)
runtime/sema.go:62 +0x25
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:139 +0x52
golang.zx2c4.com/wireguard/device.newOutboundQueue.func1()
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:32 +0x2b
created by golang.zx2c4.com/wireguard/device.newOutboundQueue
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:31 +0xad
goroutine 42 [semacquire, 14 minutes]:
sync.runtime_Semacquire(0x0?)
runtime/sema.go:62 +0x25
sync.(*WaitGroup).Wait(0x0?)
sync/waitgroup.go:139 +0x52
golang.zx2c4.com/wireguard/device.newInboundQueue.func1()
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:50 +0x2b
created by golang.zx2c4.com/wireguard/device.newInboundQueue
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/channels.go:49 +0xad
goroutine 43 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/send.go:374 +0x1b4
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:312 +0x38b
goroutine 44 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:210 +0x199
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:313 +0x3d3
goroutine 45 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0001ec500, 0x1)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:242 +0x194
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:314 +0x325
goroutine 46 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/send.go:374 +0x1b4
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:312 +0x38b
goroutine 47 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:210 +0x199
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:313 +0x3d3
goroutine 48 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0001ec500, 0x2)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:242 +0x194
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:314 +0x325
goroutine 65 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/send.go:374 +0x1b4
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:312 +0x38b
goroutine 66 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:210 +0x199
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:313 +0x3d3
goroutine 67 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0001ec500, 0x3)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:242 +0x194
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:314 +0x325
goroutine 68 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineEncryption(0xc0001ec500, 0x0?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/send.go:374 +0x1b4
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:312 +0x38b
goroutine 69 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineDecryption(0xc0001ec500, 0x216e08070605?)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:210 +0x199
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:313 +0x3d3
goroutine 70 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineHandshake(0xc0001ec500, 0x4)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:242 +0x194
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:314 +0x325
goroutine 71 [chan receive, 14 minutes]:
tailscale.com/net/tstun.(*Wrapper).Read(0xc00036e000, {0xc00022e000, 0xffff, 0xffff}, 0x10)
tailscale.com@v1.34.0/net/tstun/wrap.go:518 +0xb1
golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc0001ec500)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/send.go:227 +0xf9
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:319 +0x485
goroutine 72 [chan receive, 14 minutes]:
golang.zx2c4.com/wireguard/device.(*Device).RoutineTUNEventReader(0xc0001ec500)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/tun.go:19 +0x85
created by golang.zx2c4.com/wireguard/device.NewDevice
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:320 +0x4c5
goroutine 73 [chan receive, 14 minutes]:
tailscale.com/wgengine.NewUserspaceEngine.func6()
tailscale.com@v1.34.0/wgengine/userspace.go:432 +0x4d
created by tailscale.com/wgengine.NewUserspaceEngine
tailscale.com@v1.34.0/wgengine/userspace.go:430 +0x1118
goroutine 74 [IO wait, 7 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43ad20, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0007b4200?, 0xc000240000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).ReadFromInet4(0xc0007b4200, {0xc000240000, 0xffff, 0xffff}, 0xc000044a00?)
internal/poll/fd_unix.go:250 +0x1e5
net.(*netFD).readFromInet4(0xc0007b4200, {0xc000240000?, 0x0?, 0x2dc0?}, 0xb7?)
net/fd_posix.go:66 +0x29
net.(*UDPConn).readFromAddrPort(0xd73160?, {0xc000240000?, 0xc000226000?, 0xc000313d08?})
net/udpsock_posix.go:78 +0x145
net.(*UDPConn).ReadFromUDPAddrPort(0xc000354058, {0xc000240000?, 0x7fe83c935fff?, 0x40d2dd?})
net/udpsock.go:175 +0x31
tailscale.com/wgengine/magicsock.(*RebindingUDPConn).ReadFromNetaddr(0xc000226060, {0xc000240000, 0xffff, 0xffff})
tailscale.com@v1.34.0/wgengine/magicsock/magicsock.go:3065 +0xd8
tailscale.com/wgengine/magicsock.(*Conn).receiveIPv4(0xc000226000, {0xc000240000, 0xffff, 0xffff})
tailscale.com@v1.34.0/wgengine/magicsock/magicsock.go:1695 +0xfb
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0001ec500, 0xc0001a81e0)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:92 +0x19a
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:511 +0x4c7
goroutine 75 [IO wait, 7 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43a3c0, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0007b4180?, 0xc0000e8000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).ReadFromInet6(0xc0007b4180, {0xc0000e8000, 0xffff, 0xffff}, 0xc000044a00?)
internal/poll/fd_unix.go:277 +0x1e5
net.(*netFD).readFromInet6(0xc0007b4180, {0xc0000e8000?, 0x0?, 0x2f40?}, 0xbd?)
net/fd_posix.go:72 +0x29
net.(*UDPConn).readFromAddrPort(0xd73160?, {0xc0000e8000?, 0xc000226000?, 0xc0000c1d08?})
net/udpsock_posix.go:85 +0x87
net.(*UDPConn).ReadFromUDPAddrPort(0xc000354048, {0xc0000e8000?, 0x7fe83c935fff?, 0x40d2dd?})
net/udpsock.go:175 +0x31
tailscale.com/wgengine/magicsock.(*RebindingUDPConn).ReadFromNetaddr(0xc000226090, {0xc0000e8000, 0xffff, 0xffff})
tailscale.com@v1.34.0/wgengine/magicsock/magicsock.go:3065 +0xd8
tailscale.com/wgengine/magicsock.(*Conn).receiveIPv6(0xc000226000, {0xc0000e8000, 0xffff, 0xffff})
tailscale.com@v1.34.0/wgengine/magicsock/magicsock.go:1679 +0xfe
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0001ec500, 0xc0001a81f0)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:92 +0x19a
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:511 +0x4c7
goroutine 76 [chan receive, 14 minutes]:
tailscale.com/wgengine/magicsock.(*connBind).receiveDERP(0xc0000ce1b0, {0xc000600000, 0xffff, 0xffff})
tailscale.com@v1.34.0/wgengine/magicsock/magicsock.go:1759 +0x11d
golang.zx2c4.com/wireguard/device.(*Device).RoutineReceiveIncoming(0xc0001ec500, 0xc0001a8200)
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/receive.go:92 +0x19a
created by golang.zx2c4.com/wireguard/device.(*Device).BindUpdate
golang.zx2c4.com/wireguard@v0.0.0-20220920152132-bb719d3a6e2c/device/device.go:511 +0x4c7
goroutine 22 [IO wait, 14 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43af00, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc00030e120?, 0xe863a67f18?, 0x1)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).RawRead(0xc00030e120, 0xc0000aa040)
internal/poll/fd_unix.go:766 +0x145
os.(*rawConn).Read(0xc000354010, 0x404030000004601?)
os/rawconn.go:31 +0x56
github.com/mdlayher/socket.(*Conn).read(0xc00031c6c0, {0xe3e27b?, 0x1000?}, 0xc000582070)
github.com/mdlayher/socket@v0.2.3/conn.go:576 +0xea
github.com/mdlayher/socket.(*Conn).Recvmsg(0x464407070606?, {0xc0000c9000, 0x1000, 0x1000}, {0x0, 0x0, 0x0}, 0x2)
github.com/mdlayher/socket@v0.2.3/conn.go:497 +0x1ec
github.com/mdlayher/netlink.(*conn).Receive(0xc000354018)
github.com/mdlayher/netlink@v1.6.0/conn_linux.go:127 +0xa5
github.com/mdlayher/netlink.(*Conn).receive(0xc000334540)
github.com/mdlayher/netlink@v1.6.0/conn.go:275 +0x6d
github.com/mdlayher/netlink.(*Conn).lockedReceive(0xc000334540)
github.com/mdlayher/netlink@v1.6.0/conn.go:234 +0x33
github.com/mdlayher/netlink.(*Conn).Receive(0x0?)
github.com/mdlayher/netlink@v1.6.0/conn.go:227 +0x9e
tailscale.com/wgengine/monitor.(*nlConn).Receive(0xc00031c720)
tailscale.com@v1.34.0/wgengine/monitor/monitor_linux.go:73 +0x50
tailscale.com/wgengine/monitor.(*Mon).pump(0xc00031a000)
tailscale.com@v1.34.0/wgengine/monitor/monitor.go:252 +0x8a
created by tailscale.com/wgengine/monitor.(*Mon).Start
tailscale.com@v1.34.0/wgengine/monitor/monitor.go:192 +0x196
goroutine 23 [select, 7 minutes]:
tailscale.com/wgengine/monitor.(*Mon).debounce(0xc00031a000)
tailscale.com@v1.34.0/wgengine/monitor/monitor.go:293 +0xd8
created by tailscale.com/wgengine/monitor.(*Mon).Start
tailscale.com@v1.34.0/wgengine/monitor/monitor.go:193 +0x1d6
goroutine 24 [select, 14 minutes]:
tailscale.com/net/dns.(*Manager).NextPacket(0xc0000a4230)
tailscale.com@v1.34.0/net/dns/manager.go:357 +0xe5
tailscale.com/wgengine.(*userspaceEngine).pollResolver(0xc000580000)
tailscale.com@v1.34.0/wgengine/userspace.go:532 +0x30
created by tailscale.com/wgengine.NewUserspaceEngine
tailscale.com@v1.34.0/wgengine/userspace.go:465 +0x143d
goroutine 25 [select, 14 minutes]:
gvisor.dev/gvisor/pkg/sync.Gopark(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sync/runtime_unsafe.go:41
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).nextWaker(0xc00012afd8, 0x1, 0x1?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:209 +0x85
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).fetch(0x403020202020201?, 0xb7?, 0xd?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:256 +0x31
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).Fetch(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:279
gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*processor).start(0xc00012afc0, 0x111111100e0c0a07?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:287 +0xb0
created by gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*dispatcher).init
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:390 +0x13b
goroutine 26 [select, 14 minutes]:
gvisor.dev/gvisor/pkg/sync.Gopark(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sync/runtime_unsafe.go:41
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).nextWaker(0xc00012b068, 0x1, 0x3?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:209 +0x85
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).fetch(0x10d6?, 0x0?, 0x0?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:256 +0x31
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).Fetch(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:279
gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*processor).start(0xc00012b050, 0x10f203030302?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:287 +0xb0
created by gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*dispatcher).init
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:390 +0x13b
goroutine 27 [select, 14 minutes]:
gvisor.dev/gvisor/pkg/sync.Gopark(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sync/runtime_unsafe.go:41
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).nextWaker(0xc00012b0f8, 0x1, 0x13?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:209 +0x85
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).fetch(0x908080808060604?, 0x9?, 0x9?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:256 +0x31
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).Fetch(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:279
gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*processor).start(0xc00012b0e0, 0x302020201010000?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:287 +0xb0
created by gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*dispatcher).init
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:390 +0x13b
goroutine 28 [select, 14 minutes]:
gvisor.dev/gvisor/pkg/sync.Gopark(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sync/runtime_unsafe.go:41
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).nextWaker(0xc00012b188, 0x1, 0x3?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:209 +0x85
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).fetch(0x16dd00000000?, 0x0?, 0x1?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:256 +0x31
gvisor.dev/gvisor/pkg/sleep.(*Sleeper).Fetch(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/sleep/sleep_unsafe.go:279
gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*processor).start(0xc00012b170, 0x504040403030200?)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:287 +0xb0
created by gvisor.dev/gvisor/pkg/tcpip/transport/tcp.(*dispatcher).init
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/transport/tcp/dispatcher.go:390 +0x13b
goroutine 31 [select, 14 minutes]:
gvisor.dev/gvisor/pkg/tcpip/link/channel.(*queue).ReadContext(...)
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/link/channel/channel.go:71
gvisor.dev/gvisor/pkg/tcpip/link/channel.(*Endpoint).ReadContext(0xc0002dc4e0?, {0xfbef78?, 0xc0002da500?})
gvisor.dev/gvisor@v0.0.0-20220817001344-846276b3dbc5/pkg/tcpip/link/channel/channel.go:174 +0x89
tailscale.com/wgengine/netstack.(*Impl).inject(0xc0001e61e0)
tailscale.com@v1.34.0/wgengine/netstack/netstack.go:453 +0x38
created by tailscale.com/wgengine/netstack.(*Impl).Start
tailscale.com@v1.34.0/wgengine/netstack/netstack.go:246 +0x2b6
goroutine 81 [select]:
tailscale.com/portlist.(*Poller).runWithTickChan(0xc0002d6200, {0xfbef78, 0xc000096500}, 0xc000295080)
tailscale.com@v1.34.0/portlist/poller.go:156 +0x1e9
tailscale.com/portlist.(*Poller).Run(0xc0002dc540?, {0xfbef78, 0xc000096500})
tailscale.com@v1.34.0/portlist/poller.go:143 +0x8a
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).Start.func1
tailscale.com@v1.34.0/ipn/ipnlocal/local.go:1247 +0xbd
goroutine 82 [chan receive, 7 minutes]:
tailscale.com/ipn/ipnlocal.(*LocalBackend).readPoller(0xc0005a0300)
tailscale.com@v1.34.0/ipn/ipnlocal/local.go:1683 +0x74
created by tailscale.com/ipn/ipnlocal.(*LocalBackend).Start.func1
tailscale.com@v1.34.0/ipn/ipnlocal/local.go:1248 +0xfa
goroutine 209 [select]:
tailscale.com/ipn/ipnlocal.(*LocalBackend).WatchNotifications(0xc0005a0300, {0xfbef78, 0xc0002dba00}, 0x0, 0xc0007578f8)
tailscale.com@v1.34.0/ipn/ipnlocal/local.go:1783 +0x48c
tailscale.com/ipn/localapi.(*Handler).serveWatchIPNBus(0xc0002dba40, {0xfbdfb0, 0xc0006509a0}, 0xc00021b500)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:660 +0x215
tailscale.com/ipn/localapi.(*Handler).ServeHTTP(0xc0002dba40, {0xfbdfb0, 0xc0006509a0}, 0xc00021b500)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:168 +0x472
tailscale.com/ipn/ipnserver.(*Server).serveHTTP(0xc000214000, {0xfbdfb0?, 0xc0006509a0}, 0xc00021b500)
tailscale.com@v1.34.0/ipn/ipnserver/server.go:203 +0x5f2
net/http.HandlerFunc.ServeHTTP(0xc0000a68ef?, {0xfbdfb0?, 0xc0006509a0?}, 0x46972e?)
net/http/server.go:2109 +0x2f
net/http.serverHandler.ServeHTTP({0xc00031db00?}, {0xfbdfb0, 0xc0006509a0}, 0xc00021b500)
net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc00065c640, {0xfbf020, 0xc0000359b0})
net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
net/http/server.go:3102 +0x4db
goroutine 101 [IO wait, 13 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43a4b0, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc000336880?, 0xc00031d421?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000336880, {0xc00031d421, 0x1, 0x1})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc000336880, {0xc00031d421?, 0xc0003580d8?, 0xc0000b7768?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00029a4e0, {0xc00031d421?, 0xc0008240f0?, 0xaaa565?})
net/net.go:183 +0x45
net/http.(*connReader).backgroundRead(0xc00031d410)
net/http/server.go:678 +0x3f
created by net/http.(*connReader).startBackgroundRead
net/http/server.go:674 +0xca
goroutine 131 [select]:
tailscale.com/ipn/ipnlocal.(*LocalBackend).WatchNotifications(0xc0005a0300, {0xfbef78, 0xc0003582c0}, 0x0, 0xc0007538f8)
tailscale.com@v1.34.0/ipn/ipnlocal/local.go:1783 +0x48c
tailscale.com/ipn/localapi.(*Handler).serveWatchIPNBus(0xc000358300, {0xfbdfb0, 0xc00081e0e0}, 0xc000170100)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:660 +0x215
tailscale.com/ipn/localapi.(*Handler).ServeHTTP(0xc000358300, {0xfbdfb0, 0xc00081e0e0}, 0xc000170100)
tailscale.com@v1.34.0/ipn/localapi/localapi.go:168 +0x472
tailscale.com/ipn/ipnserver.(*Server).serveHTTP(0xc000214000, {0xfbdfb0?, 0xc00081e0e0}, 0xc000170100)
tailscale.com@v1.34.0/ipn/ipnserver/server.go:203 +0x5f2
net/http.HandlerFunc.ServeHTTP(0xc00035013f?, {0xfbdfb0?, 0xc00081e0e0?}, 0x46972e?)
net/http/server.go:2109 +0x2f
net/http.serverHandler.ServeHTTP({0xc00031d410?}, {0xfbdfb0, 0xc00081e0e0}, 0xc000170100)
net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc00065cbe0, {0xfbf020, 0xc000687470})
net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
net/http/server.go:3102 +0x4db
goroutine 148 [IO wait, 13 minutes]:
internal/poll.runtime_pollWait(0x7fe83c2f0640, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0002d6700?, 0xc00071d000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0002d6700, {0xc00071d000, 0x1000, 0x1000})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0002d6700, {0xc00071d000?, 0xc0003126c0?, 0x6b97a5?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00009e088, {0xc00071d000?, 0x0?, 0x0?})
net/net.go:183 +0x45
net/http.(*connReader).Read(0xc0006869c0, {0xc00071d000, 0x1000, 0x1000})
net/http/server.go:786 +0x171
bufio.(*Reader).fill(0xc00080c960)
bufio/bufio.go:106 +0xff
bufio.(*Reader).ReadSlice(0xc00080c960, 0x0?)
bufio/bufio.go:372 +0x2f
bufio.(*Reader).ReadLine(0xc00080c960)
bufio/bufio.go:401 +0x27
net/textproto.(*Reader).readLineSlice(0xc000686000)
net/textproto/reader.go:56 +0x99
net/textproto.(*Reader).ReadLine(...)
net/textproto/reader.go:37
net/http.readRequest(0xc00009e088?)
net/http/request.go:1036 +0x79
net/http.(*conn).readRequest(0xc00065c0a0, {0xfbef78, 0xc0002dabc0})
net/http/server.go:994 +0x24a
net/http.(*conn).serve(0xc00065c0a0, {0xfbf020, 0xc000686930})
net/http/server.go:1916 +0x345
created by net/http.(*Server).Serve
net/http/server.go:3102 +0x4db
goroutine 870 [select]:
golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc0001c4780, 0xc000200300)
golang.org/x/net@v0.1.0/http2/transport.go:1200 +0x451
tailscale.com/control/controlclient.(*noiseConn).RoundTrip(...)
tailscale.com@v1.34.0/control/controlclient/noise.go:49
tailscale.com/control/controlclient.(*NoiseClient).RoundTrip(0x22000000000100?, 0xc000200300)
tailscale.com@v1.34.0/control/controlclient/noise.go:274 +0x65
net/http.send(0xc000200300, {0xfb4c80, 0xc00031a240}, {0xe13720?, 0x1?, 0x0?})
net/http/client.go:251 +0x5f7
net/http.(*Client).send(0xc0006de6c0, 0xc000200300, {0x203000?, 0xc0006de780?, 0x0?})
net/http/client.go:175 +0x9b
net/http.(*Client).do(0xc0006de6c0, 0xc000200300)
net/http/client.go:715 +0x8fc
net/http.(*Client).Do(0xfbef78?, 0xc000096740?)
net/http/client.go:581 +0x19
tailscale.com/control/controlclient.(*Direct).doLogin(0xc00079a000, {0xfbef78, 0xc000096740}, {0x0, 0x2, 0x0, {0x0, 0x0}, 0x0, 0x0, ...})
tailscale.com@v1.34.0/control/controlclient/direct.go:583 +0x1e02
tailscale.com/control/controlclient.(*Direct).doLoginOrRegen(0x406bc5?, {0xfbef78, 0xc000096740}, {0x0, 0x2, 0x0, {0x0, 0x0}, 0x0, 0x0, ...})
tailscale.com@v1.34.0/control/controlclient/direct.go:366 +0x58
tailscale.com/control/controlclient.(*Direct).TryLogin(0xc00079a000, {0xfbef78, 0xc000096740}, 0x0, 0x2)
tailscale.com@v1.34.0/control/controlclient/direct.go:354 +0x133
tailscale.com/control/controlclient.(*Auto).authRoutine(0xc0006500e0)
tailscale.com@v1.34.0/control/controlclient/auto.go:340 +0x4be
created by tailscale.com/control/controlclient.(*Auto).Start
tailscale.com@v1.34.0/control/controlclient/auto.go:155 +0x5a
goroutine 213 [IO wait, 13 minutes]:
internal/poll.runtime_pollWait(0x7fe83c43aa50, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc0002d7380?, 0xc00031db11?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0002d7380, {0xc00031db11, 0x1, 0x1})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0002d7380, {0xc00031db11?, 0x0?, 0x0?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00009e2e8, {0xc00031db11?, 0x0?, 0x0?})
net/net.go:183 +0x45
net/http.(*connReader).backgroundRead(0xc00031db00)
net/http/server.go:678 +0x3f
created by net/http.(*connReader).startBackgroundRead
net/http/server.go:674 +0xca
goroutine 916 [IO wait]:
internal/poll.runtime_pollWait(0x7fe83c43a0f0, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc000337080?, 0xc0000300b0?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000337080, {0xc0000300b0, 0x3, 0x3})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc000337080, {0xc0000300b0?, 0x40d03f?, 0xc00058c000?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00009e098, {0xc0000300b0?, 0x80?, 0xc000056400?})
net/net.go:183 +0x45
net/http.(*readWriteCloserBody).Read(0xc000361b60?, {0xc0000300b0?, 0x14dfc20?, 0xca5fa0?})
net/http/transport.go:2392 +0x8f
tailscale.com/net/netutil.wrappedConn.Read(...)
tailscale.com@v1.34.0/net/netutil/netutil.go:109
tailscale.com/control/controlbase.(*Conn).readNLocked(0xc000030000, 0x3)
tailscale.com@v1.34.0/control/controlbase/conn.go:116 +0xfc
tailscale.com/control/controlbase.(*Conn).decryptOneLocked(0xc000030000)
tailscale.com@v1.34.0/control/controlbase/conn.go:224 +0x205
tailscale.com/control/controlbase.(*Conn).Read(0xc000030000, {0xc00058d000, 0x1000, 0x8fb400?})
tailscale.com@v1.34.0/control/controlbase/conn.go:254 +0x12d
tailscale.com/control/controlclient.(*noiseConn).Read(0xc000361c40?, {0xc00058d000?, 0xc000361cf8?, 0x8d6169?})
tailscale.com@v1.34.0/control/controlclient/noise.go:85 +0x87
bufio.(*Reader).Read(0xc000572c60, {0xc0006502e0, 0x9, 0xfc2c88?})
bufio/bufio.go:237 +0x1bb
io.ReadAtLeast({0xfb2200, 0xc000572c60}, {0xc0006502e0, 0x9, 0x9}, 0x9)
io/io.go:332 +0x9a
io.ReadFull(...)
io/io.go:351
golang.org/x/net/http2.readFrameHeader({0xc0006502e0?, 0x9?, 0xc0001c47d8?}, {0xfb2200?, 0xc000572c60?})
golang.org/x/net@v0.1.0/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006502a0)
golang.org/x/net@v0.1.0/http2/frame.go:498 +0x95
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000361f98)
golang.org/x/net@v0.1.0/http2/transport.go:2156 +0x130
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0001c4780)
golang.org/x/net@v0.1.0/http2/transport.go:2051 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
golang.org/x/net@v0.1.0/http2/transport.go:748 +0xaaa
goroutine 871 [select]:
tailscale.com/control/controlclient.(*Auto).mapRoutine(0xc0006500e0)
tailscale.com@v1.34.0/control/controlclient/auto.go:455 +0x6b1
created by tailscale.com/control/controlclient.(*Auto).Start
tailscale.com@v1.34.0/control/controlclient/auto.go:156 +0x97
goroutine 926 [IO wait]:
internal/poll.runtime_pollWait(0x7fe83c43ab40, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc000337580?, 0xc00031d8a1?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000337580, {0xc00031d8a1, 0x1, 0x1})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc000337580, {0xc00031d8a1?, 0xc0002c4280?, 0x0?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc000354130, {0xc00031d8a1?, 0x0?, 0xc0000acb40?})
net/net.go:183 +0x45
net/http.(*connReader).backgroundRead(0xc00031d890)
net/http/server.go:678 +0x3f
created by net/http.(*connReader).startBackgroundRead
net/http/server.go:674 +0xca
goroutine 920 [IO wait]:
internal/poll.runtime_pollWait(0x7fe83c2f0550, 0x72)
runtime/netpoll.go:305 +0x89
internal/poll.(*pollDesc).wait(0xc000337380?, 0xc0005b6000?, 0x0)
internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000337380, {0xc0005b6000, 0x1300, 0x1300})
internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc000337380, {0xc0005b6000?, 0xc000318b80?, 0xc0005b6005?})
net/fd_posix.go:55 +0x29
net.(*conn).Read(0xc00009e140, {0xc0005b6000?, 0xc00053d7f0?, 0xabcaaa?})
net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc000366180, {0xc0005b6000?, 0x0?, 0x40d387?})
crypto/tls/conn.go:787 +0x3d
bytes.(*Buffer).ReadFrom(0xc000562278, {0xfb2360, 0xc000366180})
bytes/buffer.go:202 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000562000, {0xfb45c0?, 0xc00009e140}, 0x2?)
crypto/tls/conn.go:809 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc000562000, 0x0)
crypto/tls/conn.go:616 +0x116
crypto/tls.(*Conn).readRecord(...)
crypto/tls/conn.go:582
crypto/tls.(*Conn).Read(0xc000562000, {0xc0005a5000, 0x1000, 0x1?})
crypto/tls/conn.go:1287 +0x16f
net/http.(*persistConn).Read(0xc000788240, {0xc0005a5000?, 0x44ad40?, 0xc00053dec8?})
net/http/transport.go:1940 +0x4e
bufio.(*Reader).fill(0xc00030ea20)
bufio/bufio.go:106 +0xff
bufio.(*Reader).Peek(0xc00030ea20, 0x1)
bufio/bufio.go:144 +0x5d
net/http.(*persistConn).readLoop(0xc000788240)
net/http/transport.go:2104 +0x1ac
created by net/http.(*Transport).dialConn
net/http/transport.go:1762 +0x16de
goroutine 917 [select]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc0001c4a80, 0xc000200300)
golang.org/x/net@v0.1.0/http2/transport.go:1368 +0x9c7
golang.org/x/net/http2.(*clientStream).doRequest(0xfbf020?, 0xc00002ec30?)
golang.org/x/net@v0.1.0/http2/transport.go:1230 +0x1e
created by golang.org/x/net/http2.(*ClientConn).RoundTrip
golang.org/x/net@v0.1.0/http2/transport.go:1159 +0x30a
goroutine 921 [select]:
net/http.(*persistConn).writeLoop(0xc000788240)
net/http/transport.go:2403 +0xf5
created by net/http.(*Transport).dialConn
net/http/transport.go:1763 +0x1731
"creating new noise client " is logged from getNoiseClient()
https://github.com/tailscale/tailscale/blob/c5ef9103d92b9b0a2932f869db56fbd7344ad417/control/controlclient/direct.go#L1492
c.logf("creating new noise client")
nc, err := NewNoiseClient(k, serverNoiseKey, c.serverURL, c.dialer, dp)
if err != nil {
return nil, err
}
It is most likely being called from doLogin()
https://github.com/tailscale/tailscale/blob/c5ef9103d92b9b0a2932f869db56fbd7344ad417/control/controlclient/direct.go#L419
func (c *Direct) doLogin(ctx context.Context, opt loginOpt) (mustRegen bool, newURL string, nks tkatype.MarshaledSignature, err error) {
c.mu.Lock()
...
if serverNoiseKey.IsZero() {
httpc = c.httpc
url = fmt.Sprintf("%s/machine/%s", c.serverURL, machinePrivKey.Public().UntypedHexString())
} else {
request.Version = tailcfg.CurrentCapabilityVersion
httpc, err = c.getNoiseClient()
if err != nil {
return regen, opt.URL, nil, fmt.Errorf("getNoiseClient: %w", err)
}
url = fmt.Sprintf("%s/machine/register", c.serverURL)
url = strings.Replace(url, "http:", "https:", 1)
}
goroutine 870 is trying to POST a RegisterRequest at direct.go:583
https://github.com/tailscale/tailscale/blob/c5ef9103d92b9b0a2932f869db56fbd7344ad417/control/controlclient/direct.go#L583
bodyData, err := encode(request, serverKey, serverNoiseKey, machinePrivKey)
if err != nil {
return regen, opt.URL, nil, err
}
req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(bodyData))
if err != nil {
return regen, opt.URL, nil, err
}
res, err := httpc.Do(req)
if err != nil {
return regen, opt.URL, nil, fmt.Errorf("register request: %w", err)
}
It appears to be stuck in the HTTP client logic: goroutine 870 is stuck in this select on line 1200:
https://cs.opensource.google/go/x/net/+/refs/tags/v0.1.0:http2/transport.go;l=1199-1224
for {
select {
case <-cs.respHeaderRecv:
return handleResponseHeaders()
case <-cs.abort:
select {
case <-cs.respHeaderRecv:
// If both cs.respHeaderRecv and cs.abort are signaling,
// pick respHeaderRecv. The server probably wrote the
// response and immediately reset the stream.
// golang.org/issue/49645
return handleResponseHeaders()
default:
waitDone()
return nil, cs.abortErr
}
case <-ctx.Done():
err := ctx.Err()
cs.abortStream(err)
return nil, err
case <-cs.reqCancel:
cs.abortStream(errRequestCanceled)
return nil, errRequestCanceled
}
}
}
From within the WSL2 container where tailscaled is running, we see:
# tcpdump -n -vvv -S -r /tmp/control.pcap host 18.157.173.201
21:17:43.259808 IP (tos 0x0, ttl 37, id 0, offset 0, flags [DF], proto TCP (6), length 134)
18.157.173.201.80 > 192.168.65.4.36220: Flags [P.], cksum 0x297f (correct), seq 866263425:866263519, ack 1189053263, win 65535, length 94: HTTP, length: 94
HTTP/1.1 101 Switching Protocols
Connection: upgrade
Upgrade: tailscale-control-protocol
21:17:43.260202 IP (tos 0x0, ttl 64, id 36029, offset 0, flags [DF], proto TCP (6), length 40)
192.168.65.4.36220 > 18.157.173.201.80: Flags [.], cksum 0xc22d (incorrect -> 0x3e4f), seq 1189053263, ack 866263519, win 502, length 0
21:17:43.260898 IP (tos 0x0, ttl 37, id 0, offset 0, flags [DF], proto TCP (6), length 304)
18.157.173.201.80 > 192.168.65.4.36220: Flags [P.], cksum 0x730a (correct), seq 866263519:866263783, ack 1189053263, win 65535, length 264: HTTP
21:17:43.260973 IP (tos 0x0, ttl 64, id 36030, offset 0, flags [DF], proto TCP (6), length 40)
192.168.65.4.36220 > 18.157.173.201.80: Flags [.], cksum 0xc22d (incorrect -> 0x3d48), seq 1189053263, ack 866263783, win 501, length 0
21:17:43.266769 IP (tos 0x0, ttl 64, id 36031, offset 0, flags [DF], proto TCP (6), length 123)
192.168.65.4.36220 > 18.157.173.201.80: Flags [P.], cksum 0xc280 (incorrect -> 0x8adb), seq 1189053263:1189053346, ack 866263783, win 501, length 83: HTTP
21:17:43.269002 IP (tos 0x0, ttl 64, id 36032, offset 0, flags [DF], proto TCP (6), length 129)
192.168.65.4.36220 > 18.157.173.201.80: Flags [P.], cksum 0xc286 (incorrect -> 0xbdac), seq 1189053346:1189053435, ack 866263783, win 501, length 89: HTTP
21:17:43.278216 IP (tos 0x0, ttl 64, id 36033, offset 0, flags [DF], proto TCP (6), length 1009)
192.168.65.4.36220 > 18.157.173.201.80: Flags [P.], cksum 0xc5f6 (incorrect -> 0x3e20), seq 1189053435:1189054404, ack 866263783, win 501, length 969: HTTP
21:17:43.279608 IP (tos 0x0, ttl 37, id 0, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 192.168.65.4.36220: Flags [.], cksum 0x3eea (correct), seq 866263783, ack 1189053346, win 65535, length 0
21:17:43.280336 IP (tos 0x0, ttl 64, id 36034, offset 0, flags [DF], proto TCP (6), length 68)
192.168.65.4.36220 > 18.157.173.201.80: Flags [P.], cksum 0xc249 (incorrect -> 0xb592), seq 1189054404:1189054432, ack 866263783, win 501, length 28: HTTP
21:17:43.289579 IP (tos 0x0, ttl 37, id 0, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 192.168.65.4.36220: Flags [.], cksum 0x3ba6 (correct), seq 866263783, ack 1189054432, win 65285, length 0
192.168.65.4 is the IP address of the eth0 interface inside the container. We'd expect to see it be NATted by the Windows host. 18.157.173.201 is one of the IP addresses associated with controlplane.tailscale.com.
From Wireshark running on the Windows host, we see:
% tcpdump -n -v -r ./windows.pcap
reading from file ./windows.pcap, link-type EN10MB (Ethernet)
13:17:43.251225 IP (tos 0x0, ttl 44, id 10843, offset 0, flags [DF], proto TCP (6), length 134)
18.157.173.201.80 > 10.1.10.165.59815: Flags [P.], cksum 0xd797 (correct), seq 3218801998:3218802092, ack 220736987, win 488, length 94: HTTP, length: 94
HTTP/1.1 101 Switching Protocols
Connection: upgrade
Upgrade: tailscale-control-protocol
13:17:43.251812 IP (tos 0x0, ttl 44, id 10844, offset 0, flags [DF], proto TCP (6), length 252)
18.157.173.201.80 > 10.1.10.165.59815: Flags [P.], cksum 0x0bf3 (correct), seq 94:306, ack 1, win 488, length 212: HTTP
13:17:43.251812 IP (tos 0x0, ttl 44, id 10845, offset 0, flags [DF], proto TCP (6), length 92)
18.157.173.201.80 > 10.1.10.165.59815: Flags [P.], cksum 0x02b2 (correct), seq 306:358, ack 1, win 488, length 52: HTTP
13:17:43.251920 IP (tos 0x0, ttl 128, id 8939, offset 0, flags [DF], proto TCP (6), length 40, bad cksum 0 (->2d9)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [.], cksum 0xd526 (incorrect -> 0xeb3c), ack 358, win 1025, length 0
13:17:43.276081 IP (tos 0x0, ttl 128, id 8940, offset 0, flags [DF], proto TCP (6), length 123, bad cksum 0 (->285)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [P.], cksum 0xd579 (incorrect -> 0x38d0), seq 1:84, ack 358, win 1025, length 83: HTTP
13:17:43.290283 IP (tos 0x0, ttl 128, id 8941, offset 0, flags [DF], proto TCP (6), length 129, bad cksum 0 (->27e)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [P.], cksum 0xd57f (incorrect -> 0x6ba1), seq 84:173, ack 358, win 1025, length 89: HTTP
13:17:43.291870 IP (tos 0x0, ttl 128, id 8942, offset 0, flags [DF], proto TCP (6), length 1009, bad cksum 0 (->ff0c)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [P.], cksum 0xd8ef (incorrect -> 0xec14), seq 173:1142, ack 358, win 1025, length 969: HTTP
13:17:43.293486 IP (tos 0x0, ttl 128, id 8943, offset 0, flags [DF], proto TCP (6), length 68, bad cksum 0 (->2b9)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [P.], cksum 0xd542 (incorrect -> 0x6387), seq 1142:1170, ack 358, win 1025, length 28: HTTP
13:17:43.439144 IP (tos 0x0, ttl 44, id 10846, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 10.1.10.165.59815: Flags [.], cksum 0xed02 (correct), ack 84, win 488, length 0
13:17:43.452941 IP (tos 0x0, ttl 44, id 10847, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 10.1.10.165.59815: Flags [.], cksum 0xeca9 (correct), ack 173, win 488, length 0
13:17:43.453667 IP (tos 0x0, ttl 44, id 10848, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 10.1.10.165.59815: Flags [.], cksum 0xe8e7 (correct), ack 1142, win 481, length 0
13:17:43.456230 IP (tos 0x0, ttl 44, id 10849, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 10.1.10.165.59815: Flags [.], cksum 0xe8cb (correct), ack 1170, win 481, length 0
13:17:53.253002 IP (tos 0x0, ttl 44, id 10850, offset 0, flags [DF], proto TCP (6), length 40)
18.157.173.201.80 > 10.1.10.165.59815: Flags [F.], cksum 0xe8ca (correct), seq 358, ack 1170, win 481, length 0
13:17:53.253285 IP (tos 0x0, ttl 128, id 8944, offset 0, flags [DF], proto TCP (6), length 40, bad cksum 0 (->2d4)!)
10.1.10.165.59815 > 18.157.173.201.80: Flags [.], cksum 0xd526 (incorrect -> 0xe6aa), ack 359, win 1025, length 0
A guess: maybe this is the MTU-in-wsl2 issue? (https://github.com/tailscale/tailscale/issues/4833, etc)
I've tried setting the MTU inside the container both with ip link set dev eth0 mtu 1500 and with ip link set dev eth0 mtu 1280, as described in https://github.com/tailscale/tailscale/issues/4833, with no change in behavior.
My current theory is: something is interfering with the 101 Switching Protocols upgrade to the Noise transport which ts2021 uses. But a TCP proxy wouldn't care about the 101 protocol switch, and an HTTP proxy wouldn't see the switch we do inside TLS. So I doubt it's a proxy.
After banging my head against this long enough, I found the setting causing this to fail.
In %APPDATA%\Docker\settings.json, there's an option "vpnKitTransparentProxy": true. By setting this to false and restarting Docker Desktop, everything started working as expected.
Hopefully, this gives us something to go off of for an actual fix. For those blocked, I hope this helps you get going again!
Narrator: it was a proxy.
Wow.
But what type, exactly? An impressively invasive one, apparently, since we weren't able to get through it. The whole port 80 thing we do is just an optimization, but we don't trust it, as plenty of other things mess with port 80 too. We should always fall back to HTTPS on 443 if 80 doesn't work end-to-end in time. But if the vpnKit "transparent" proxy is breaking HTTPS too, that doesn't seem very transparent.
It's unclear to me what tailscaled is trying to do. I'll have to take a closer look at my pcaps. I saw the TLS v1.3 Client Hello and the ts2021 over port 80, followed by HTTP 101 Switching Protocols.
From what I could see in the pcaps, tailscaled stopped trying to use port 443 and closed the connection. It tried to 101 Switch Protocols on the port 80 connection, which then stopped transferring data. If that is all correct, I think it means tailscaled needs to keep the port 443 connection around and be ready to give up on Noise-over-HTTP longer than it currently does.
Exciting! If there's a bug in that code, fixing it'll likely help plenty of other messing-with-port-80 cases too.
https://github.com/tailscale/tailscale/commit/1237000efe8ebb8d801d72d78a86ae31a5917031 (https://github.com/tailscale/tailscale/pull/4559) for https://github.com/tailscale/tailscale/issues/4557 still seems fine.
I can look more tomorrow with fresh eyes.
How can I repro? I've never used Docker Desktop or Docker on Windows.
@bradfitz I have an Azure VM with Docker Desktop installed that encounters the issue. I'll ping you on Slack with the RDP connection details.
Docker Desktop's "transparent" HTTP proxy has been modified recently. It's supposed to:
If there's a simple way to repro (or if you have a .pcap), I can take a look at the Docker side of things.
After banging my head against this long enough, I found the setting causing this to fail. In %APPDATA%\Docker\settings.json, there's an option "vpnKitTransparentProxy": true. By setting this to false and restarting Docker Desktop, everything started working as expected. Hopefully, this gives us something to go off of for an actual fix. For those blocked, I hope this helps you get going again!
I can confirm that setting "vpnKitTransparentProxy" to false works and I'm able to log in. Thank you for the solution, and thanks for the insights into golang. I'm watching with excitement :)
If there's a simple way to repro (or if you have a .pcap), I can take a look at the Docker side of things.
The simplest way to reproduce it is to install the Tailscale extension in Docker Desktop for Windows and click the button on the left to log in using a browser. That is supposed to open a connection to controlplane.tailscale.com, with a few round-trip exchanges. You do not need to log in to reproduce; the problem happens before the browser even opens. You'll see the button spinning, because the connection to controlplane.tailscale.com gets stuck without transferring data.
How can I repro? I've never used Docker Desktop or Docker on Windows.
The extension container has tailscaled --tun=userspace-networking running in it. The hamburger menu has an "Open in terminal" selection, and once the local terminal is open there is a link to open in an external terminal like PowerShell. My method for poking and prodding:
If "Show Docker Extensions System Containers" is checked, you can get the extension's name from a PowerShell prompt via docker ps. Then, you can docker exec -it <container-name> bash to get interactive access to a shell inside the extension's container.
The extension is configured and started via a docker-compose.yml file inside of %appdata%\Docker\extensions\tailscale_docker-extension\vm. For changes to get picked up, I'd have to docker rm -f <container> and then restart Docker Desktop.
@djs55, I can reproduce now with a custom build of Tailscale with more debug logs, and it appears that what's happening is that the port 80 proxy stops copying upstream after a while.
We do the HTTP/1.1 101 Switching Protocols over port 80 and switch to our Noise crypto thing and it works bi-di for a bit (enough to do the Noise handshake and get a server->client message down), but then our client stops being able to write on port 80 up to the server.
It's like the port 80 proxy has decided that the HTTP request is done (reached its length?) and there's no more need for a copy. I'll try tricking it with different Content-Length request headers, etc. But ideally it'd be fixed.
Frustratingly, I've not been able to repro it locally with Docker Desktop v4.16.1 (95567). Running tcpdump in the VM (`docker run --privileged --net=host -v C:\out:/out -w /out -it alpine`, then `apk add tcpdump` and `tcpdump -s 0 -n -w tailscale.pcap`) and then uninstalling and reinstalling the extension and logging in, I can see the request, the response, and then the port 80 connection stays open (for me).
Perhaps the next time someone hits this, if they could upload a set of diagnostics (whale menu -> Troubleshoot -> Get Support -> Upload and then quote the ID) the logs might point me in the right direction.
Looking at the code, I think it would terminate the connection early if the system is using an HTTP proxy (configured in the Docker UI or detected in Windows) but if the Windows-side pcaps are showing direct connections to the expected IPs then it can't be on that path. It should be forwarding the request (having peeked at it) and then be stuck in a byte-level proxy loop.
Edit to add: my image is `tailscale/docker-extension:0.0.15` (in case that's useful to know)
@djs55, I pushed a debug branch (https://github.com/tailscale/tailscale/tree/bradfitz/noise_debug_more) that you can use to repro locally.
In a container with Go 1.19 available, run:
$ go run tailscale.com/cmd/tailscale@6e4f9a4a7538a7e15946d57e4df1288873f66f65 debug ts2021 --verbose
And it should hang. The last thing I see is:
12:05:27.397755 final underlying conn: 10.0.0.52:53344 / 18.193.255.254:80
12:05:27.397785 early payload: "{\"nodeKeyChallenge\":\"chalpub:eb65d5a0220047672fa948e82a4096ef1efb1fa49f2646ef7377c3e0319ef122\"}"
and then IIRC also:
12:05:27.397910 made thing
... and then hangs.
But then run it with the new env var `TS_FORCE_NOISE_443=1` set and it'll work, ending with something like:
HTTP/2.0 200 OK
Content-Length: 70
Access-Control-Allow-Origin: *
Content-Type: text/plain; charset=utf-8
Date: Tue, 17 Jan 2023 20:05:27 GMT
Strict-Transport-Security: max-age=63072000; includeSubDomains
mkey:1773d5a41583355e296ed1b02938d58a9bb9142d5b463844f4f0e75cd3603012
You can also set `TS_DEBUG_NOISE_DIAL=1` to get a bit more logging.
@bradfitz thanks very much, I got the hang. I'll dig in more tomorrow.
We discussed the bug on Slack with the Docker team and have a dev build of Docker Desktop that fixes the issue—waiting on the next steps.
@djs55, can you or somebody else share the details of the fix, for my own curiosity? Also it might help us work around similar problems from other proxies in the future.
Edit: or bradfitz at tailscale.com if you can share but not here.
Docker Desktop for Windows version 4.17.0 just got released. Extension login works now! (version 0.0.15)
I am closing the issue as this is now resolved.
Well, even with v4.28.0 I am still having the exact same issue.
Welp, guess we are back at square one, and this time disabling "vpnKitTransparentProxy" doesn't work.
It is working for me (v4.28.0, Win11/MSEdge)
After upgrading to Docker Desktop v4.15.0, the Tailscale extension no longer connects. I tried a clean reinstall of Docker Desktop to no avail. It may also be worth mentioning that I have the `containerd` feature enabled.
Hi guys, I come to bring a quick and easy solution. I was also struggling with this for a long time, but after some thought I realized: "if we use `tailscale up` to get the login, why not use it from a shell?" So here is the simplest and fastest solution:
In addition to MayDayCaio's solution:

1. In Docker Desktop, go to Settings (upper right corner) > Extensions > Show Docker Extensions system containers (without it you cannot see the extension container).
2. In a terminal, run `docker ps` to find the container ID of `tailscale/docker-extension:1.0.2`.
3. Run `docker exec -it <container_id> /bin/sh`.
4. Inside the container, run `/app/tailscale up`.
5. Copy/paste the login link into your browser.