cloudflare / cloudflared

Cloudflare Tunnel client (formerly Argo Tunnel)
https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide
Apache License 2.0

🐛 Upgrade header stripped on Websocket POST requests #883

Open · isaac-mcfadyen opened this issue 1 year ago

isaac-mcfadyen commented 1 year ago

Describe the bug

When a client opens a WebSocket connection using a POST request (as Headscale/Tailscale's control protocol does), the Upgrade header the client sends is stripped before the request reaches the origin, so the WebSocket handshake never completes.

To Reproduce

Steps to reproduce the behavior:

  1. Create a Tunnel and run it with the --loglevel debug flag so that you can see all incoming headers. No origin is required as long as you can see the incoming headers.
  2. Make a WebSocket connection via POST (this is what applications such as Headscale, the self-hosted Tailscale control server, do; it can be tricky to do by hand, see the sketch after this list).
  3. Observe that there is no Upgrade header (required to establish the WebSocket) even though the client sends one.
  4. Tunnel ID: 1c9fa586-d2e8-407d-a637-f578d90132c4
  5. Config: dashboard-managed, simple example.com -> http://localhost:80
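
A rough manual reproduction of step 2 in Node (the hostname and path are placeholders; the real Tailscale client performs this handshake itself):

```typescript
// Send a WebSocket-over-POST handshake through the tunnel and log what comes back.
// "tunnel.example.com" and "/ts2021" are illustrative placeholders.
import https from "node:https";

const req = https.request(
  {
    host: "tunnel.example.com", // hostname routed through the Tunnel
    path: "/ts2021",            // the path Headscale's handshake uses
    method: "POST",             // WebSocket over POST, per the Tailscale control protocol
    headers: {
      Connection: "Upgrade",
      Upgrade: "websocket",     // the header that goes missing in the cloudflared debug logs
    },
  },
  (res) => console.log("plain response:", res.statusCode)
);
req.on("upgrade", (res, socket) => {
  console.log("upgraded:", res.statusCode);
  socket.destroy(); // we only wanted to see whether the upgrade happened
});
req.end();
```

With --loglevel debug set on the tunnel, the incoming request is then visible in cloudflared's logs, with or without the Upgrade header.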

Expected behavior

The Upgrade header sent by the client is forwarded to the origin unchanged, so the WebSocket handshake can complete.

neilxxxc commented 1 year ago

Also facing this issue at the moment. Is there any workaround or solution?

Lite5h4dow commented 1 year ago

Yeah, could do with an update on this.

apowellnz commented 1 year ago

I seem to be getting the same behaviour on a GET as well. NOTE: I'm only seeing this while developing locally with wrangler. When I published the service, I did see the Upgrade header come through with a GET. So maybe it's just a local development issue; could someone else confirm that?

EDIT: Playing around a little more, I changed from fetch to axios, and the header came through with no problem locally.
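
For anyone comparing the two locally, a hypothetical check along these lines matches what I saw (the URL and path are placeholders for a local wrangler dev service; fetch treats Upgrade and Connection as forbidden headers in many runtimes and drops them, while axios in Node forwards them as given):

```typescript
// Send the same request via axios and inspect what the worker receives.
import axios from "axios";

const res = await axios.get("http://localhost:8787/connect", {
  headers: {
    Connection: "Upgrade",
    Upgrade: "websocket",
  },
  validateStatus: () => true, // don't throw on non-2xx; we only want to inspect behaviour
});
console.log(res.status, res.headers);
```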

m1no commented 1 year ago

I'm running into the same issue with Headscale behind a Cloudflare tunnel in combination with Tailscale clients.

bcspragu commented 1 year ago

Hitting the same thing on a Headscale server that was working perfectly up until about a week or two ago.

mglikesbikes commented 1 year ago

+1, not seeing Upgrade come through. I'm using fetch('https://…worker-url', { headers: { Upgrade: 'websocket' } }) and the worker's fetch() handler only shows null for req.headers.get('Upgrade').
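
For reference, the worker-side check is the usual pattern (a minimal sketch, assuming a module-syntax Worker and @cloudflare/workers-types); with the header stripped, it always falls into the 426 branch:

```typescript
export default {
  async fetch(request: Request): Promise<Response> {
    const upgrade = request.headers.get("Upgrade");
    if (upgrade !== "websocket") {
      // This is the branch people in this thread are hitting: `upgrade` is null.
      return new Response(`Expected Upgrade: websocket, got ${upgrade}`, { status: 426 });
    }
    // Normal WebSocket handshake path.
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);
    server.accept();
    return new Response(null, { status: 101, webSocket: client });
  },
};
```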

DevinCarr commented 1 year ago

From what I understand as described in the WebSocket RFC for the client handshake:

The method of the request MUST be GET, and the HTTP version MUST be at least 1.1.

https://www.rfc-editor.org/rfc/rfc6455#section-4.1
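
For reference, the example client handshake from the RFC is a plain GET (abbreviated here):

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
```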

I'm not sure what changed recently, but it could be that the Front End gateway was previously non-compliant and has since been changed to be RFC-compliant. If you know of anywhere else in the spec, or an updated RFC, that describes POST as valid, please let me know.

JPBM135 commented 1 year ago

I'm also facing this issue using a proxied server with Cloudflare and the GET method.

vherrlein commented 1 year ago

+1

gbraad commented 1 year ago

Confirmed, but no feedback. This prevents hosting 'headscale':

2023-07-06T11:51:31Z debug http 500 Internal Server Error {"cfRay":"7e27a2e5de8cfb3c-SJC","connIndex":2,"content-length":15}
2023-07-06T11:51:32Z debug http POST https://headscale.[hostname]/ts2021 HTTP/1.1 {"cfRay":"7e27a2ea0c0ef9f1-SJC","connIndex":2,"content-length":0,"headers":{"Accept-Encoding":["gzip"],"Cdn-Loop":["cloudflare"],"Cf-Connecting-Ip":["111.201.215.195"],"Cf-Ipcountry":["CN"],"Cf-Ray":["7e27a2ea0c0ef9f1-SJC"],"Cf-Visitor":["{\"scheme\":\"https\"}"],"Cf-Warp-Tag-Id":["b452ff5a-eeab-4374-bc83-648230b5d408"],"Content-Length":["0"],"User-Agent":["Go-http-client/1.1"],"X-Forwarded-For":["111.201.215.195"],"X-Forwarded-Proto":["https"],"X-Tailscale-Handshake":["AD8BAGDVq9Uw8dCmqwYsajrsWPoljmy/5I7+C3SKqst39KLAOqpfSiDk5V6ETAmZ4gLxKzofGqmACbBArTawHtzmHe2cOWYjh/jimwwgCcF+88Re5BDcmgOLnBuHF08cJQUOfow="]},"host":"headscale.[hostname]","ingressRule":0,"path":"/ts2021"}

isaac-mcfadyen commented 1 year ago

If you know of anywhere else in the spec, or an updated RFC, that describes POST as valid, please let me know.

Yeah, from what I've heard from the Headscale folks, it's not RFC-compliant behavior, but it can't really be changed because doing so would cause more work upstream for Tailscale (completely understandable). See https://github.com/juanfont/headscale/issues/1468

Wondering if this is an area where Tunnel should deviate from the RFC? I can't imagine Headscale is the only platform using POST for WebSockets.

gbraad commented 1 year ago

Tunnel should not have to care what the traffic looks like over the connection. There are many non-spec implementations, for different purposes, like circumvention of surveillance, optimizations not covered by the spec, etc.

For example, Tailscale allows TCP forwarding for Headscale without an issue, because they send raw TCP data as promised. Is allowing raw TCP data a possible option for Tunnel?

DevinCarr commented 1 year ago

Tunnel should not have to care what the traffic looks like over the connection.

If you want that behavior with our product suite, you can use WARP-to-Tunnel, which requires you to use the WARP client on the user side of this setup. We don't (currently) provide any non-identity-based on-ramps for traffic into zero-trust networks. But keep in mind that cloudflared supports both public and private networking traffic, so if you are attempting to use POST for WebSockets over public hostname traffic, we currently do not support that.

Is allowing raw TCP data a possible option for Tunnel?

The ways that Cloudflare on-ramps raw TCP traffic into private networks are via cloudflared access tcp or WARP. We (again, currently; we may investigate supporting something like this in the future) do not have a way to on-ramp public TCP traffic to a cloudflared-backed origin.
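
For context, this is roughly how the cloudflared access tcp on-ramp is invoked on the client side; the hostname and local port below are placeholders, and the tunnel must already route that hostname to a TCP service:

```
cloudflared access tcp --hostname tcp.example.com --url localhost:9210
```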

So to the original question: Would Cloudflare Tunnel (cloudflared specifically) deviate from the RFC to support POST WebSocket requests?

I'm leaning towards no; I don't think we want to support something non-standard where the implication could have other effects. For instance, WebSocket implementations immediately downgrade the connection after the WebSocket handshake and do not process the request body; having a POST request method would imply otherwise.

Is there another solution here? Perhaps you could write a shim service to rewrite the POST request method to GET after cloudflared proxies the request? I know it's an extra layer, but this sounds like a very niche combination of tunneling solutions (Headscale and Cloudflare Tunnel).
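
A minimal sketch of what such a shim could look like, sitting between cloudflared and the origin (the port numbers and the /ts2021 path are assumptions, and whether Headscale accepts a handshake rewritten this way is untested):

```typescript
// Tiny proxy: cloudflared points at this, it rewrites the WebSocket-over-POST
// handshake to GET and forwards everything else untouched to the origin.
import http from "node:http";

const ORIGIN_HOST = "127.0.0.1"; // hypothetical Headscale listener
const ORIGIN_PORT = 8080;

const server = http.createServer((clientReq, clientRes) => {
  const method =
    clientReq.method === "POST" && clientReq.url?.startsWith("/ts2021")
      ? "GET" // rewrite only the handshake path
      : clientReq.method;

  const proxyReq = http.request(
    {
      host: ORIGIN_HOST,
      port: ORIGIN_PORT,
      path: clientReq.url,
      method,
      headers: clientReq.headers, // forward whatever cloudflared passed along
    },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  clientReq.pipe(proxyReq);
});

server.listen(9090); // the tunnel ingress rule would then point at http://localhost:9090
```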

gbraad commented 1 year ago

Thanks for the explanation.

Just to provide some more background: the "WARP-to-Tunnel" solution described is not viable, as it needs WARP to act as the client ('VPN service') to access the coordination server. What people want is to expose the coordination server at a public endpoint using a Tunnel, so the Tailscale node can reach this public service and the connection can be set up. A mobile device does not layer VPNs (and I would also strongly advise against doing so).

DCCInterstellar commented 8 months ago

Good evening. Does anyone know if this has been fixed? I'm having the same issue when trying to run Headscale through a Cloudflare Tunnel. I followed the Nginx Proxy Manager config they mention on their website, but that didn't work either. https://headscale.net/reverse-proxy/#nginx

apowellnz commented 8 months ago

I've not personally hit this for a while. But if you're upgrading, just make sure you're using the ws:// protocol and not http://. In my travels I did see someone hitting something similar: they tried to connect to a WebSocket destination using http, it was auto-upgraded to https, and that stripped the headers. May be a dead end, but food for thought... good luck.
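
In other words, connect with the WebSocket scheme directly rather than an http(s):// URL that then gets redirected; a trivial example (hypothetical URL):

```typescript
// Open the connection over wss:// against the Cloudflare-proxied hostname.
const ws = new WebSocket("wss://tunnel.example.com/ws");
ws.addEventListener("open", () => console.log("connected"));
```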

Lite5h4dow commented 4 months ago

From everything I can tell, this isn't a bug. This is something Cloudflare supports on their premium tier; it looks like they have brought this in line with their Tunnel functionality and are putting it behind a paywall.

isaac-mcfadyen commented 4 months ago

From everything I can tell, this isn't a bug. This is something Cloudflare supports on their premium tier; it looks like they have brought this in line with their Tunnel functionality and are putting it behind a paywall.

Not sure what premium tier you're referring to?

I can confirm this issue still occurs on Enterprise + Zero Trust Gateway Enterprise though, so I don't think they've put this functionality behind a paywall in any way - it's just a bug.