isaac-mcfadyen opened this issue 1 year ago
Also facing this issue at the moment. Is there any workaround or solution?
Yeah, could do with an update on this.
I seem to be getting the same behaviour on a GET as well. NOTE: I'm only seeing this while developing locally with wrangler. I published the service, and I did see the Upgrade header come through with a GET. So maybe it's just a local development issue; could someone else confirm that?
EDIT: Playing around a little more, I switched from fetch to axios, and the header was passed through with no problem locally.
I'm running into the same issue with Headscale behind a Cloudflare tunnel in combination with Tailscale clients.
Hitting the same thing on a Headscale server that was working perfectly up until a week or two ago.
+1, not seeing `Upgrade` come through. I'm using `fetch('https://…worker-url', { headers: { Upgrade: 'websocket' } })` and the worker's `fetch()` handler only shows `null` for `req.headers.get('Upgrade')`.
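For reference, a minimal sketch of the Workers-side check being described, assuming a standard `fetch` handler (the 426 response and the echo behavior are placeholders, not from the original report):

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    // This is the check commenters are hitting: the header arrives
    // as null even though the client set Upgrade: websocket.
    const upgrade = request.headers.get("Upgrade");
    if (upgrade?.toLowerCase() !== "websocket") {
      return new Response("Expected Upgrade: websocket", { status: 426 });
    }

    // Standard Workers WebSocket handoff: accept the server side and
    // hand the client side back in a 101 response.
    const { 0: client, 1: server } = new WebSocketPair();
    server.accept();
    server.addEventListener("message", (event) => server.send(event.data));
    return new Response(null, { status: 101, webSocket: client });
  },
};
```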
From what I understand, as described in the WebSocket RFC for the client handshake:

> The method of the request MUST be GET, and the HTTP version MUST be at least 1.1.

https://www.rfc-editor.org/rfc/rfc6455#section-4.1
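For reference, the sample client handshake in RFC 6455 (section 1.2) looks like this; note the GET request line and the Upgrade header:

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
```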
I'm not sure what changed recently, but it could be that the front-end gateway was previously non-compliant and has since been changed to be RFC-compliant. If you know of anywhere else in the spec, or an updated RFC, that says POST is valid, please let me know.
I'm also facing this issue using a server proxied through Cloudflare and the GET method.
+1
Confirmed, but no feedback. This prevents hosting headscale:

```
2023-07-06T11:51:31Z debug http 500 Internal Server Error {"cfRay":"7e27a2e5de8cfb3c-SJC","connIndex":2,"content-length":15}
2023-07-06T11:51:32Z debug http POST https://headscale.[hostname]/ts2021 HTTP/1.1 {"cfRay":"7e27a2ea0c0ef9f1-SJC","connIndex":2,"content-length":0,"headers":{"Accept-Encoding":["gzip"],"Cdn-Loop":["cloudflare"],"Cf-Connecting-Ip":["111.201.215.195"],"Cf-Ipcountry":["CN"],"Cf-Ray":["7e27a2ea0c0ef9f1-SJC"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Cf-Warp-Tag-Id":["b452ff5a-eeab-4374-bc83-648230b5d408"],"Content-Length":["0"],"User-Agent":["Go-http-client/1.1"],"X-Forwarded-For":["111.201.215.195"],"X-Forwarded-Proto":["https"],"X-Tailscale-Handshake":["AD8BAGDVq9Uw8dCmqwYsajrsWPoljmy/5I7+C3SKqst39KLAOqpfSiDk5V6ETAmZ4gLxKzofGqmACbBArTawHtzmHe2cOWYjh/jimwwgCcF+88Re5BDcmgOLnBuHF08cJQUOfow="]},"host":"headscale.[hostname]","ingressRule":0,"path":"/ts2021"}
```
> If you know of anywhere else in the spec, or an updated RFC, that says POST is valid, please let me know.
Yeah, so from what I've heard from the Headscale side, it's not RFC-compliant behavior, but it can't really be changed because that would cause more work upstream for Tailscale (completely understandable). See https://github.com/juanfont/headscale/issues/1468
Wondering if this is an area where Tunnel should deviate from the RFC? I can't imagine Headscale is the only platform using POST for WebSockets.
Tunnel should not have to care what the traffic looks like over the connection. There are many non-spec implementations for different purposes, like circumvention of surveillance, optimizations not covered by the spec, etc.
For example, Tailscale allows TCP forwarding without an issue for Headscale, because they send raw TCP data as promised. Would allowing raw TCP data be a possible option for Tunnel?
> Tunnel should not have to care what the traffic looks like over the connection.
If you want that behavior with our product suite, you can use WARP-to-Tunnel, which requires you to use the WARP client on the user side of this setup. We don't (currently) provide any non-identity-based on-ramps for traffic into zero-trust networks. But keep in mind that cloudflared supports both public and private network traffic, so if you are attempting to use POST for WebSockets over public hostname traffic, we currently do not support that.
> Would allowing raw TCP data be a possible option for Tunnel?
The way that Cloudflare on-ramps raw TCP traffic into private networks is via `cloudflared access tcp` or WARP. We (again, currently; we may investigate supporting something like this in the future) do not have a way to on-ramp public TCP traffic to a cloudflared-backed origin.
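For illustration, the `cloudflared access tcp` on-ramp mentioned above runs on the client machine roughly like this (the hostname and port here are hypothetical):

```
cloudflared access tcp --hostname tcp.example.com --url 127.0.0.1:9210
```

That opens a local listener which forwards its traffic through Access to the tunnel; it works precisely because the client side is also running cloudflared, which is why there is no equivalent for arbitrary public TCP traffic.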
So to the original question: Would Cloudflare Tunnel (cloudflared specifically) deviate from the RFC to support POST WebSocket requests?
I'm leaning towards no; I don't think we want to support something non-standard where the implications could have other effects. For instance, WebSocket implementations immediately downgrade the connection after the WebSocket handshake and do not process the request body; having a POST request method would imply otherwise.
Is there another solution here? Perhaps you could write a shim service to rewrite the POST request method to GET after cloudflared proxies the request? I know it's an extra layer, but this sounds like a very niche combination of tunneling solutions you are attempting to combine (Headscale and Cloudflare Tunnel).
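A rough sketch of what such a method-rewriting shim could look like, written here in TypeScript for Node; the listen port and upstream address are assumptions for illustration, and it assumes the handshake reaches the shim with its Upgrade header intact:

```ts
import http from "node:http";
import net from "node:net";

// Assumed upstream (e.g. headscale's listen address); adjust to taste.
const UPSTREAM = { host: "127.0.0.1", port: 8080 };

const server = http.createServer((req, res) => {
  // Plain, non-upgrade requests are out of scope for this sketch.
  res.writeHead(502).end("shim only handles upgrade requests");
});

// Node fires 'upgrade' for requests that carry an Upgrade header.
server.on("upgrade", (req, clientSocket, head) => {
  const upstream = net.connect(UPSTREAM.port, UPSTREAM.host, () => {
    // Re-emit the handshake with the method rewritten to GET so the
    // next hop sees an RFC 6455-compliant upgrade request.
    const headerLines = Object.entries(req.headers)
      .map(([k, v]) => `${k}: ${Array.isArray(v) ? v.join(", ") : v}`)
      .join("\r\n");
    upstream.write(`GET ${req.url} HTTP/1.1\r\n${headerLines}\r\n\r\n`);
    if (head.length > 0) upstream.write(head);
    // Splice the upgraded connection in both directions.
    upstream.pipe(clientSocket);
    clientSocket.pipe(upstream);
  });
  upstream.on("error", () => clientSocket.destroy());
});

server.listen(9000, () => console.log("shim listening on :9000"));
```

Note that the thread leaves open where such a shim would sit: if cloudflared has already stripped the Upgrade header before the shim sees the request, the 'upgrade' event would never fire, so this sketch only helps where the handshake arrives intact.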
Thanks for the explanation.
Just to provide some more background: the "WARP-to-Tunnel" solution described is not viable, as it needs WARP to work as the client ('VPN service') to access the coordination server. What people want is to expose the coordination server on a public endpoint using a tunnel, so the Tailscale node can reach this public service and the connection can be set up. A mobile device does not layer VPNs (and I would also strongly advise against this).
Good evening. Does anyone know if this has been fixed? I'm having this same issue when trying to run headscale through Cloudflare Tunnel. I followed the Nginx Proxy Manager config they mention on their website, but that didn't work either. https://headscale.net/reverse-proxy/#nginx
I've not personally hit this for a while. But if you're upgrading, just make sure you're using the ws:// protocol, not the http:// protocol. In my travels I did see someone hitting something similar when they tried to connect to a WebSocket destination using http; it auto-upgraded to https, which stripped the headers. May be a dead end, but food for thought... Good luck
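In other words (a tiny sketch; the endpoint is hypothetical), connect with the WebSocket scheme directly rather than an http:// URL that may get redirected:

```ts
// Connecting with wss:// avoids an http -> https redirect that can
// drop the upgrade headers along the way.
const socket = new WebSocket("wss://worker.example.com/connect");
socket.addEventListener("open", () => socket.send("ping"));
socket.addEventListener("message", (event) => console.log(event.data));
```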
From everything I can tell, this isn't a bug. This is something Cloudflare supports on their premium tier; it looks like they have brought this in line with their Tunnel functionality and are putting it behind a paywall.
> From everything I can tell, this isn't a bug. This is something Cloudflare supports on their premium tier; it looks like they have brought this in line with their Tunnel functionality and are putting it behind a paywall.
Not sure what premium tier you're referring to?
I can confirm this issue still occurs on Enterprise + Zero Trust Gateway Enterprise though, so I don't think they've put this functionality behind a paywall in any way; it's just a bug.
Any news on this? I would love this for headscale too.
Hmmm, atm I can connect to my headscale server behind cloudflare proxy. Headscale version 0.23.0.
> Hmmm, atm I can connect to my headscale server behind cloudflare proxy. Headscale version 0.23.0.
Can you tell us how you did it? Could you share your headscale config.yaml and (if you have one) your docker-compose.yaml?
I saw headscale had an update about a week ago and I updated. There are some minor changes between 0.22.3 and 0.23.0 in config.yaml, but nothing special.
Client OS is iOS and client version is 1.76.0.
I changed to a custom coordination server (I used the Tailscale server before) and logged in to my server.
Edit: My bad, now I see this issue is for cloudflared. I use a proxied DNS record but had the same problem as cloudflared users.
Describe the bug
cloudflared strips the Upgrade header, so the WebSocket upgrade fails.

To Reproduce
Steps to reproduce the behavior: run cloudflared with the --loglevel debug flag so that you can see all incoming headers. No origin is required as long as you can see the incoming headers (for example, an ingress rule of example.com -> http://localhost:80).
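A minimal config.yml for this kind of repro might look like the following sketch (the tunnel ID, credentials path, and hostname are placeholders, not from the original report):

```yaml
# Run cloudflared with --loglevel debug so incoming headers are logged;
# any origin works as long as the headers are visible.
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  - hostname: example.com
    service: http://localhost:80
  - service: http_status:404
```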
Expected behavior
The Upgrade header is passed through to the origin so the WebSocket connection can be established.

Environment and versions
cloudflared version 2023.1.0 (built 2023-01-16-0850 UTC)

Logs and errors

Additional context