denoland / deno

A modern runtime for JavaScript and TypeScript.
https://deno.com

automatic body compression for text/event-stream #22562

Open mfulton26 opened 6 months ago

mfulton26 commented 6 months ago

I figured out that I can compress an event stream myself using CompressionStream, but I wonder whether event streams could be candidates for automatic body compression.

Currently Deno supports gzip and brotli compression. A body is automatically compressed if the following conditions are true:

  • The request has an Accept-Encoding header which indicates the requester supports br for Brotli or gzip. Deno will respect the preference of the quality value in the header.
  • The response includes a Content-Type which is considered compressible. (The list is derived from jshttp/mime-db with the actual list in the code.)
  • The response body is greater than 64 bytes.
Addressing each condition for event streams:

  1. gzip and deflate encodings could be supported for text/event-stream
  2. plain text is generally considered compressible
  3. a streamed response body's size is unknown up front, but it is very unlikely to be 64 bytes or fewer

I think it is fine to require stream authors to compress their streams themselves if automatic body compression isn't appropriate here. If that's the case, then it might be helpful to add a note to the manual's documentation on automatic body compression calling out that streams are not eligible for automatic compression but can easily be compressed manually:

import { acceptsEncodings } from "https://deno.land/std/http/negotiation.ts";

// Negotiate the best content encoding the client accepts, if any.
const encoding = acceptsEncodings(request, "gzip", "deflate");
// Only compress the stream when the client supports it.
const body = encoding
  ? stream.pipeThrough(new CompressionStream(encoding))
  : stream;
const headers = new Headers({ "content-type": "text/event-stream" });
if (encoding) headers.append("content-encoding", encoding);
const response = new Response(body, { headers });
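For context, here is a minimal sketch of that snippet inside a Deno.serve handler; the one-second timestamp stream is a made-up stand-in for a real event source:

import { acceptsEncodings } from "https://deno.land/std/http/negotiation.ts";

Deno.serve((request) => {
  // Hypothetical event source: emit a timestamp event every second.
  const encoder = new TextEncoder();
  let timer: number;
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      timer = setInterval(() => {
        controller.enqueue(encoder.encode(`data: ${Date.now()}\n\n`));
      }, 1000);
    },
    cancel() {
      clearInterval(timer);
    },
  });

  const encoding = acceptsEncodings(request, "gzip", "deflate");
  const body = encoding
    ? stream.pipeThrough(new CompressionStream(encoding))
    : stream;
  const headers = new Headers({ "content-type": "text/event-stream" });
  if (encoding) headers.append("content-encoding", encoding);
  return new Response(body, { headers });
});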
mmastrac commented 6 months ago

That would definitely make sense, but we currently don't have a good flushing strategy for streaming bodies. Ideally we'd like to ensure that buffered compressed data gets flushed after a short delay (or in the case of SSE, per-frame), but we cannot guarantee that compressed data will be flushed.
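To make the buffering concrete, here is a small sketch (how much, if anything, comes out before close depends on the implementation; the 100 ms timeout is arbitrary):

const cs = new CompressionStream("gzip");
const writer = cs.writable.getWriter();
const reader = cs.readable.getReader();

// Write one complete SSE event into the compressor.
await writer.write(new TextEncoder().encode("data: hello\n\n"));

// Race the first read against a timer. gzip/deflate may hold the event's
// bytes in an internal buffer hoping to compress more data, so a connected
// client could receive little or nothing at this point.
const timeout = new Promise((resolve) =>
  setTimeout(() => resolve("no compressed output yet"), 100)
);
console.log(await Promise.race([reader.read(), timeout]));

// Closing the writable flushes the remaining compressed bytes, but it also
// ends the stream, which is not an option for a long-lived event stream.
await writer.close();
console.log(await reader.read());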

mfulton26 commented 6 months ago

Is the flushing issue that the server has written an event terminated by two newlines, but the CompressionStream, due to the way the gzip and/or deflate algorithms work, is waiting for more bytes so it can better compress the outgoing data?

That makes more sense to me now that I type it out. 🤔

So, as things stand, compressing an event stream may not be a good idea for servers because it could delay events being delivered to clients… I guess events should be kept small, then, and link to larger resources where necessary rather than inlining them.
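For example (the event name, path, and JSON shape here are made up), the stream can carry a small pointer and the client can fetch the large resource as a regular response, where automatic body compression does apply:

// Server side: enqueue a small notification instead of inlining the report.
const event = `event: report-ready\ndata: {"url":"/reports/123"}\n\n`;

// Client side: react to the event and fetch the large resource separately.
const source = new EventSource("/events");
source.addEventListener("report-ready", async (e) => {
  const { url } = JSON.parse((e as MessageEvent).data);
  const report = await fetch(url); // a non-streaming, compressible response
  console.log(report.headers.get("content-encoding"));
});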