schmod opened this issue 4 years ago
@chungthuang I'm seeing this as well using 2020.11.9
I've tested that SSE works with our hello-world server https://github.com/cloudflare/cloudflared/blob/master/hello/hello.go#L190. Can you share another simple server I can test with?
I can reproduce this exact issue on my setup (Python, with current versions of both Gunicorn and Waitress).
I see this with a "regular" webpage being served (`Content-Type: text/html`), not with `text/event-stream`.
These are the headers I receive connecting directly to the server process:
```
HTTP/1.1 200 OK
Server: gunicorn/20.0.4
Date: Mon, 30 Nov 2020 07:59:29 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
```
Note the `chunked` transfer encoding. The data loads incrementally in the browser.
These are the headers when going through Argo:
```
cf-cache-status: DYNAMIC
cf-ray: 5fa3088b5ff33756-MXP
cf-request-id: 06b9c3ab15000037569a0d2000000001
content-encoding: gzip
content-type: text/html; charset=UTF-8
date: Mon, 30 Nov 2020 08:00:12 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
nel: {"report_to":"cf-nel","max_age":604800}
report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report?s=%2Bcl2PMxbvgs16Wn5g6nwt3ULjAQ8jWG3h2vbdS3PdH3e%2FsZwv4%2B%2BQj1tvOniUJGxhM8dm2ou48Ntdnv0A5hocmGI2qHXRQUyVRtiO4o12XT2PwD%2BcjK5DHB7ZKe5cEVm"}],"group":"cf-nel","max_age":604800}
server: cloudflare
x-content-type-options: nosniff
```
In this case, the data still loads incrementally, but with a much larger buffer size (I don't know how to measure it exactly).
Having a similar issue attempting this with Apache/PHP-FPM sitting in front of the Tunnel. When I check the response via an ngrok tunnel or directly, it works as expected, but as soon as I put Cloudflare in front of it (either via the Tunnel or traditionally via the CDN proxying the origin), it seems to buffer 3-4 KB before sending each "chunk", even though we're using the "no chunked encoding" option.
This issue has been unresolved for almost 2 years now. Has anyone found any workarounds?
Something worth mentioning here is that we're sending a standard `text/html` content type, not necessarily `text/event-stream`, but it seems cloudflared treats responses with that content type differently. I wonder if we could have an option/env variable or whatever to control that behavior for other (or all) content types, i.e. specify `flushableContentTypes` overrides via a CLI parameter or config?
I'm having trouble with SSE getting buffered, i.e. the Content-Type header is set to `text/event-stream`, but it still seems to take almost 2 or more minutes to get any response from the event-stream endpoint. Is that functionality working correctly?
Having the same issue trying to stream logs when proxying Nomad with cloudflared. It works fine when proxied through Cloudflare normally.
Related: https://github.com/hashicorp/nomad/issues/5404#issuecomment-479230308
Same issue with https://connect.build
We use gRPC server streams extensively, and this issue is a blocker.
I am having the same issue with the latest version of cloudflared. Does anyone have a workaround for this?
I had the same problem in a .NET Core implementation of SSE. The patch fix was to remove the "Connection" and "Content-Length" response headers from the SSE result. I also added "X-Accel-Buffering": "no" for the nginx reverse proxy. Then every request works like a charm.
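For anyone on Node/Express rather than .NET, a rough sketch of the same header tweaks (a guess at the equivalent, not the original .NET Core code; the route path and port are placeholders):

```ts
import express from "express";

const app = express();

// Sketch: SSE endpoint mirroring the header tweaks above. No Content-Length is
// ever set (so the response stays chunked/streamed), and X-Accel-Buffering: no
// tells an nginx reverse proxy not to buffer the response.
app.get("/events", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("X-Accel-Buffering", "no"); // nginx-specific: disable proxy buffering
  res.flushHeaders(); // push headers to the client before any body data

  res.write("data: hello\n\n"); // example event; keep writing as data arrives
});

app.listen(3000);
```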
I believe I'm encountering the same issue with React 18 streaming.
Hey guys, did anyone manage to find a fix for this issue?
Came across this issue via a support ticket. I would ensure that we're using:

```js
res.setHeader('Content-Type', 'text/event-stream');
```
> Came across this issue via a support ticket. I would ensure that we're using:
> `res.setHeader('Content-Type', 'text/event-stream');`

This is the answer. Add the header to your response and SSE will work as expected. Just tested it on a project that was facing this exact issue.
I haven't tested this in a while, so maybe it works now, but I just wanted to point out that, per the original post, this was not working even when including the `Content-Type: text/event-stream` header. As such, that's not actually the fix to the original issue.
We just ran an experiment by forking cloudflared, and it's not just the `flushableContentTypes` special-casing inside this repository. There must be internal Cloudflare code that we can't modify which also does the same special-casing, because we're seeing the exact same behaviour before and after our fork: `text/event-stream` and `application/grpc` feed through correctly, but our actual content type breaks (in our case `application/x-ndjson`, as a 524).

Testing via a request that reports `transfer-encoding: chunked` but then closes the connection immediately, we note that in a non-special-cased request Cloudflare injects a `content-length` header, which it doesn't do in a special-cased request.
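For reference, a minimal sketch of that kind of probe in Node (the port here is a placeholder): a response that switches to chunked encoding because no Content-Length is set, writes one chunk, and closes immediately. Comparing the response headers from the origin with the ones coming back through the tunnel shows whether a `content-length` header was injected along the way.

```ts
import { createServer } from "node:http";

// Probe: advertise no Content-Length (so Node uses Transfer-Encoding: chunked),
// send a single chunk, then close the connection right away.
createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/x-ndjson" }); // a non-special-cased type
  res.write('{"hello": "world"}\n'); // first (and only) chunk
  res.end(); // close immediately
}).listen(8080);
```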
How about `application/octet-stream`? It seems to be buffered as well. Any idea how to avoid this? Something like `proxy_buffering off;` in nginx.
> How about `application/octet-stream`? It seems to be buffered as well. Any idea how to avoid this? Something like `proxy_buffering off;` in nginx.

The same as #1018.
I have found a workaround for this, since I was also having similar issues. I'm running a Git-like service (https://github.com/xorrvin/gitoboros) which uses chunked encoding and outputs some info in a stream. It's proxied by nginx. I'm using default settings for the tunnel.
The solution has two steps.
First, you need to mask the original content type header (in my case `application/x-git-*`) in nginx and add a "streamable" one (like `application/grpc`) instead:
```nginx
location /your-endpoint/ {
    proxy_pass http://your-backend;

    # these two are needed so that nginx doesn't buffer the response internally
    proxy_cache off;
    proxy_buffering off;

    # cloudflare tunnel hack: replace original Content-Type
    proxy_hide_header Content-Type;
    add_header Content-Type application/grpc;
}
```
Then you need to go to your Cloudflare dashboard and set up a few rules for that specific endpoint:

- Rules -> Transform Rules -> Modify Response Header, and restore the original content type there, like so:
  - When incoming requests match... `(http.request.method eq "GET" and starts_with(http.request.uri, "/your-endpoint"))`
  - Then... Modify response header -> Set static -> Content-Type = "your original content type"
- Caching -> Cache Rules, to explicitly Bypass cache for `/your-endpoint`
That's it!
@xorrvin It works as expected. Thank you for the suggestion. I use a `traefik` ingress controller and did a similar thing (like the nginx example above):
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: ui-cloudflared-stream-header
spec:
  headers:
    customResponseHeaders:
      Content-Type: text/event-stream
```
Then the magic happens: the response is not buffered anymore.
In my case, setting the HTTP `Content-Type` header doesn't work. (I am running an ollama instance for LLM apps; the default response header for it is `application/x-ndjson`.)
I've been having difficulty streaming responses to clients using Server-Sent Events (SSE) while using Argo.
Argo (or Cloudflare) appears to be doing some sort of buffering that is preventing data from being streamed to clients incrementally.
Is there any way to opt out of this behavior?
For example, the following ExpressJS route (which sends an incrementing number to the client every 10ms) behaves very differently depending upon whether it's accessed directly or via an Argo tunnel.
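A minimal sketch of a route along those lines (the path and port here are placeholders):

```ts
import express from "express";

const app = express();

// Sketch: stream an incrementing counter as Server-Sent Events, one message
// every 10 ms. Try it with `curl -N` (which disables curl's own output
// buffering) both directly and through the tunnel.
app.get("/sse", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.flushHeaders(); // push the headers out before any body data

  let counter = 0;
  const timer = setInterval(() => {
    res.write(`data: ${counter++}\n\n`); // one SSE message per tick
  }, 10);

  // stop streaming when the client disconnects
  req.on("close", () => clearInterval(timer));
});

app.listen(3000);
```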
cURLing this route directly:
cURLing this route over an Argo tunnel:
(Client on left – Server on right)