mfulton26 opened 6 months ago
That would definitely make sense, but we currently don't have a good flushing strategy for streaming bodies. Ideally we'd like to ensure that buffered compressed data gets flushed after a short delay (or in the case of SSE, per-frame), but we cannot guarantee that compressed data will be flushed.
Is the flushing issue that the server has written an event terminated by multiple newlines, but the `CompressionStream`, due to the way the gzip and/or deflate algorithms work, is awaiting more bytes to potentially better compress the outgoing data?
That makes more sense to me now that I type it out. 🤔
So, as it stands, maybe compressing an event stream isn't a good idea for servers because it could delay events being delivered to clients… I guess events should be small then, and link to larger resources where necessary rather than inlining them.
I figured out that I can compress an event stream myself using `CompressionStream`, but I wonder if event streams can be candidates for automatic body compression? `gzip` and `deflate` could be supported for `text/event-stream` via `CompressionStream` (https://github.com/jshttp/mime-db/pull/138).

I think it is fine to require stream authors to compress their streams themselves if automatic body compression isn't appropriate here. If that's the case, then I wonder if adding documentation/references to the manual about automatic body compression would be helpful, to call out that streams are not eligible for automatic compression but can easily be compressed:
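A minimal sketch of compressing a stream by hand with the standard `CompressionStream` (the frame contents and response wiring here are my own illustration):

```javascript
// Minimal sketch: manually gzip an event stream with the web-standard
// CompressionStream. The SSE frame contents here are illustrative.
const eventStream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("data: hello\n\n"));
    controller.close();
  },
});

// Pipe the SSE bytes through gzip; the server must also advertise the
// encoding so clients decompress transparently.
const compressed = eventStream.pipeThrough(new CompressionStream("gzip"));

const response = new Response(compressed, {
  headers: {
    "content-type": "text/event-stream",
    "content-encoding": "gzip",
  },
});
```

Note the buffering caveat from earlier in the thread still applies: `CompressionStream` exposes no per-frame flush, so events may be delayed until enough bytes accumulate or the stream closes.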