cyco130 opened this issue 1 year ago
Which Deno version are you using?
1.35.1. Updated the comment.
Added system info.
Do you get the same results with v1.35.2? We recently fixed a bug in compression and I believe it was included in v1.35.2.
Sorry, I thought I was on the latest (1.35.2 isn't available on brew yet).
But no change on 1.35.2.
We did some digging on our side. In the case of gzip, our compressor buffers up to 64 kB of data, as that is a reasonable balance between the additional cost of compressing the data and the performance improvement from transmitting compressed data.
In the case of your reproduction, the overall time spent by the program is the same. But would you be able to verify that you experience the same problem if the chunked data you're sending is bigger?
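To see why a fixed 64 kB buffer matters for slow producers, here is a back-of-the-envelope calculation (an illustration using the numbers from this thread, not a measurement of Deno's internals):

```typescript
// Illustration only: with a 64 kB compression buffer and a producer that
// emits 1-byte chunks every 50 ms (as in the reproduction), the first
// compressed bytes would not be flushed until the buffer fills.
const bufferBytes = 64 * 1024; // compressor buffer size described above
const chunkBytes = 1;          // chunk size in the reproduction
const delayMs = 50;            // delay between chunks (?delay=50)
const chunksToFill = Math.ceil(bufferBytes / chunkBytes);
const msToFirstFlush = chunksToFill * delayMs;
console.log(msToFirstFlush); // 3276800 ms — nearly an hour before any output
```

With larger chunks the buffer fills sooner, which is why the question above about bigger chunked data changes the picture.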
I think setting a fixed compression buffer size defeats the purpose of streaming: We want the user to be immediately able to see what's available.
For example, if we're streaming text, the user should be able to start reading the first page while the server retrieves the next page from a slow source. For streaming SSR, we send essential data with a placeholder for non-essential data which is slower to retrieve.
A fixed buffer size doesn't respect the application's "chunking" of its data. What's written should be sent immediately, or at worst after a timeout. Of course my example code is contrived because the chunk size is one byte, but any chunk size that doesn't equal the buffer size would cause a problem.
I'm also maintaining a runtime-agnostic JavaScript server framework called HatTip. You can see that all other runtimes that support streaming are able to stream with the above code (which is part of the test suite):
# Cloudflare Workers
curl --compressed -ND - 'https://hattip-basic.rakkasjs.workers.dev/bin-stream?delay=50'
# Vercel Edge
curl --compressed -ND - 'https://hattip-basic-cyco130.vercel.app/bin-stream?delay=50'
# Deno Deploy (with std/http/serve, doesn't compress)
curl --compressed -ND - 'https://hattip-hello-hm6n9gcyd9z0.deno.dev/bin-stream?delay=50'
# Fastly Compute@Edge (doesn't compress, -k is needed because I'm lazy)
curl --compressed -kND - 'https://fastly.cyco130.com.global.prod.fastly.net/bin-stream?delay=50'
More info: Playing with the queueing strategy didn't help either.
Thanks for reporting this. If it is truly the case, it will make using Deno as a proxy (as in https://github.com/marc-barry/deno-spa-proxy) a very hard sell. My proxy uses https://hono.dev/, but nonetheless I expect the results to be the same. I converted to Deno.serve in https://github.com/marc-barry/deno-spa-proxy/pull/3 but then hit https://github.com/denoland/deno/issues/19690, although that one has since been addressed in https://github.com/denoland/deno/pull/19758.
Discussed with @mmastrac during today's CLI meeting. We're gonna add a timeout that flushes the compressed content after X ms if the buffer hasn't been filled up.
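The timeout-based flush described above can be sketched with a standard web `TransformStream`. This is a hypothetical illustration of the policy, not Deno's actual implementation; `batchingStream`, `maxBytes`, and `maxDelayMs` are made-up names:

```typescript
// Hypothetical sketch of a timeout-based flush policy (not Deno's code):
// batch incoming chunks up to `maxBytes`, but flush whatever has
// accumulated once `maxDelayMs` elapses, so slow producers still stream.
function batchingStream(
  maxBytes: number,
  maxDelayMs: number,
): TransformStream<Uint8Array, Uint8Array> {
  let pending: Uint8Array[] = [];
  let pendingBytes = 0;
  let timer: ReturnType<typeof setTimeout> | undefined;

  // Concatenate and emit everything buffered so far.
  const emit = (controller: TransformStreamDefaultController<Uint8Array>) => {
    if (pendingBytes === 0) return;
    const out = new Uint8Array(pendingBytes);
    let offset = 0;
    for (const c of pending) {
      out.set(c, offset);
      offset += c.length;
    }
    pending = [];
    pendingBytes = 0;
    controller.enqueue(out);
  };

  return new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
      pending.push(chunk);
      pendingBytes += chunk.length;
      if (pendingBytes >= maxBytes) {
        clearTimeout(timer);
        timer = undefined;
        emit(controller);
      } else if (timer === undefined) {
        // Don't let a partially filled buffer sit forever.
        timer = setTimeout(() => {
          timer = undefined;
          emit(controller);
        }, maxDelayMs);
      }
    },
    flush(controller) {
      clearTimeout(timer);
      emit(controller);
    },
  });
}
```

In a real compressing server the analogous flush would also have to tell the underlying compressor to emit a sync point, which plain web streams cannot express — hence the question below about a flush signal in the streams API.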
@bartlomieju What we're hitting is what Sebastian Markbåge is referring to here, right? What we really need is a way for a stream to ask the next stream in the pipeline to flush, but the web streams API doesn't provide a way to do that — do I understand correctly? Is there a proposal to that effect?
`Deno.serve` buffers all output when compression is used (instead of streaming it), while `serve` from `std/http/server.ts` always streams regardless of whether compression is used or not.

More specifically, when an `Accept-Encoding: gzip` header is sent, `Deno.serve` disables streaming and buffers the (whole?) response. This makes it unusable for streaming SSR applications (I'm the maintainer of Rakkas, a React SSR framework) because all modern browsers support compression, causing the response to be buffered and defeating the purpose of streaming SSR.

System info

Steps to reproduce

Save the script below in a file named `mod.ts` and compare the outputs of the following four combinations:

1. `deno run -A mod.ts` (it will be using `std/http/server.ts`):
   - `curl -ND - 'http://127.0.0.1:8000'` and observe the streaming response.
   - `curl --compressed -ND - 'http://127.0.0.1:8000'` and observe the streaming response.
2. `deno run -A mod.ts --deno-serve` (it will be using `Deno.serve` now):
   - `curl -ND - 'http://127.0.0.1:8000'` and observe the streaming response.
   - `curl --compressed -ND - 'http://127.0.0.1:8000'` and observe that the response is fully buffered.

The code
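The actual `mod.ts` script is not reproduced in this capture. As a hedged sketch only, the kind of response body such a repro streams (one byte at a time, with a delay between chunks) could be built with a web-standard `ReadableStream`; `oneBytePerTick` and its parameters are invented names, and a handler passed to `Deno.serve` or `serve` would return `new Response(oneBytePerTick(10, 50))`:

```typescript
// Hypothetical sketch, not the issue's actual mod.ts: a body that emits
// one byte every `delayMs` milliseconds, `count` times in total.
function oneBytePerTick(
  count: number,
  delayMs: number,
): ReadableStream<Uint8Array> {
  let sent = 0;
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      if (sent >= count) {
        controller.close();
        return;
      }
      // Simulate a slow producer (e.g. SSR waiting on a data source).
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      controller.enqueue(new Uint8Array([0x2a])); // a single "*" byte
      sent++;
    },
  });
}
```

With an uncompressed response each `*` arrives as it is enqueued; the report above is that `Deno.serve` with compression holds them back until the response completes.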