niels-nijens opened 4 years ago
Thanks for reporting this!
Quick question -- do you use the Argo Tunnel Kubernetes Ingress Controller, or do you use cloudflared as a sidecar?
We're using cloudflared as a sidecar.
I am very curious whether there was any triage and/or solution to this problem. I also experienced this with streaming connections through the cloudflared daemon, used in a sidecar setup.
Niels' example is very clear on how to reproduce this - thanks, Niels!
Hope Cloudflare can help soon!
The same problem exists with open-ended range requests, which can easily be used as a DoS vector causing bandwidth exhaustion.
To give some detail:
The problem happens with progressive streaming video clients that start by requesting an open-ended range, e.g. from offset 0 without an end, read a few bytes, request another range without an end, and so on.
Cloudflared keeps on sending data to the edge, even though the client is no longer reading it.
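To make the abuse pattern above concrete, here is a minimal sketch of it in Python. The local test server stands in for the origin; the path, payload size, and port are all illustrative and not part of cloudflared.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BODY = b"x" * 1_000_000  # stands in for a large video file

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve "Range: bytes=N-" (open-ended) requests with a 206.
        rng = self.headers.get("Range", "bytes=0-")
        start = int(rng.split("=")[1].split("-")[0])
        chunk = BODY[start:]
        self.send_response(206)
        self.send_header("Content-Range", f"bytes {start}-{len(BODY) - 1}/{len(BODY)}")
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        try:
            self.wfile.write(chunk)  # keeps pushing even if the client stops reading
        except OSError:
            pass  # client closed mid-transfer

    def log_message(self, *args):  # silence request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client pattern described above: three open-ended range requests,
# reading only ~1 KB of each before abandoning the connection.
bytes_read = 0
for start in (0, 4096, 8192):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/clip.mp4", headers={"Range": f"bytes={start}-"})
    bytes_read += len(conn.getresponse().read(1024))
    conn.close()  # the origin may still be pushing the remaining ~1 MB

server.shutdown()
print(bytes_read)
```

Against a direct connection the origin's write simply fails once the client is gone; the report above is that through the tunnel, cloudflared keeps forwarding the abandoned body to the edge instead.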
@sssilver Should I split the open-ended range request problem into a separate issue?
@sssilver Did you see my last question?
We recently released multi-instance cloudflared with this use case in mind. Check it out and let us know what you think: https://blog.cloudflare.com/highly-available-and-highly-scalable-cloudflare-tunnels/
Tutorial to get started as well: https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel
Pretty sure I'm having a similar issue on cloudflare/cloudflared:2023.10.0. The tunnel does not seem to be passing the FIN packet to the upstream on client disconnect.
Current behavior: the tunnel client keeps half-open connections to the upstream until the upstream writes to the socket. A write to a half-open connection results in the tunnel logging the following and then disconnecting:

```
2023-12-31T22:09:40Z ERR Request failed error="context canceled" connIndex=3 dest=https://example.com/sse event=0 ip=REDACTED type=http
2023-12-31T22:09:42Z ERR  error="context canceled" cfRay=REDACTED event=1 ingressRule=2 originService=https://example.com
```
The issue here is that a socket write from the upstream is required to identify the client disconnect. Curious who else is running into this with long-lived connections.
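Since the origin only learns of the disconnect on its next write, a common workaround for long-lived streams is a periodic SSE heartbeat comment, so the failing write surfaces quickly instead of after the next real event. A minimal sketch (raw sockets for brevity; ports and intervals are illustrative):

```python
import socket
import threading
import time

def serve_sse(conn):
    # Stream SSE heartbeat comments until a write fails -- per the
    # comment above, the write failure is the only disconnect signal.
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/event-stream\r\n"
        b"Cache-Control: no-cache\r\n\r\n"
    )
    writes = 0
    try:
        while True:
            conn.sendall(b": keepalive\n\n")  # SSE comment; clients ignore it
            writes += 1
            time.sleep(0.05)  # a real heartbeat would use ~15-30 s
    except OSError:
        pass  # write to the dead socket failed -> stop streaming
    finally:
        conn.close()
    return writes

srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]
result = {}

def accept_once():
    conn, _ = srv.accept()
    result["writes"] = serve_sse(conn)

threading.Thread(target=accept_once, daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /sse HTTP/1.1\r\nHost: example\r\n\r\n")
client.recv(4096)   # headers plus the first heartbeat(s)
client.close()      # simulate the browser navigating away
time.sleep(2)       # give the next heartbeat writes time to fail
print(result.get("writes", 0) > 0)
```

This only mitigates the symptom at the origin; it does not fix the missing disconnect propagation in the tunnel itself.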
I should note this ONLY seems to be an issue with HTTP/3 (with QUIC) enabled, so possibly the protocol translation from the QUIC CONNECTION_CLOSE frame to a TCP (HTTP/2 or HTTP/1.1) FIN packet. Let me know if any more information would be helpful.
Still an issue as of 2024.6.1. Same behavior where downstream client disconnects do not propagate to the upstream when HTTP/3 (with QUIC) is enabled for the domain.
We're successfully using the Cloudflare Argo tunnel for most projects inside our Kubernetes cluster, but are running into an issue when using Server-Sent Events (basically a long-lived plaintext stream).
At the moment when a client/browser disconnects from an SSE stream, the Nginx container and the underlying PHP-FPM container aren't notified of the disconnection. The cloudflared transport log shows it is actually still sending data from the stream back to Cloudflare after the client has disconnected.
I've created a repository to demonstrate this behavior: https://github.com/niels-nijens/cloudflared-sse-test
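For a concrete picture of what the origin keeps producing: the linked repo uses PHP behind Nginx, but the SSE framing is the same everywhere. This Python stand-in (event name and payload are illustrative, not taken from the repo) shows the frames that continue flowing through cloudflared after the client is gone; a real endpoint would loop forever rather than stop at a limit.

```python
import itertools

def sse_events(limit=3):
    # Yield SSE-framed events; a real SSE origin loops indefinitely,
    # which is why an unnoticed disconnect leaks bandwidth.
    for n in itertools.count():
        if n >= limit:
            return
        yield f"id: {n}\nevent: tick\ndata: tick number {n}\n\n"

frames = list(sse_events())
print(frames[0], end="")
```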