sysid / sse-starlette


Allow cache control header override #34

Closed gagantrivedi closed 2 years ago

gagantrivedi commented 2 years ago

Hi! Firstly, thanks for the amazing work. I am trying to use Fastly to fan out one stream to multiple clients, but for that to work the response must be cacheable.

From Fastly's write-up on serving server-sent events (https://www.fastly.com/blog/server-sent-events-fastly):

We do request collapsing automatically (unless you turn it off), but for requests to be collapsed, the origin response must be cacheable and still 'fresh' at the time of the new request. You don't actually want us to cache the event stream after it ends; if we did, future requests to join the stream would just get an instant response containing a batch of events that happened over some earlier period in time. But you do want us to buffer the response as well as streaming it out, so that a cache record exists for new clients to join onto. That means your time to live (TTL) for the stream response must be the same duration as you intend to stream for. Say your server is configured to serve streams in 30-second segments (the browser reconnects after each segment ends): the response TTL of the stream should be exactly 30 seconds (or 29, if you want to cover the possibility of clock mis-syncs).
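
To make that concrete, here is a rough sketch of what I would like to be able to write once the header can be overridden (the route name, segment length, and max-age value are only illustrative): a 30-second stream segment whose TTL is 29 seconds, so Fastly can collapse new clients onto the in-flight response.

    import asyncio

    from sse_starlette.sse import EventSourceResponse
    from starlette.applications import Starlette
    from starlette.routing import Route


    async def event_stream():
        # Emit one event per second for a 30-second segment, then end the
        # response; the browser's EventSource reconnects for the next segment.
        for i in range(30):
            yield {"event": "tick", "data": str(i)}
            await asyncio.sleep(1)


    async def stream_endpoint(request):
        # Desired behaviour: a TTL one second shorter than the segment length,
        # so Fastly keeps the in-flight response "fresh" for joining clients.
        # Today sse-starlette overwrites this header with "no-cache".
        return EventSourceResponse(
            event_stream(),
            headers={"Cache-Control": "public, max-age=29"},
        )


    app = Starlette(routes=[Route("/stream", stream_endpoint)])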

But it looks like sse-starlette doesn't allow certain headers to be overridden:


        # mandatory for server-sent events headers
        _headers["Cache-Control"] = "no-cache"
        _headers["Connection"] = "keep-alive"
        _headers["X-Accel-Buffering"] = "no"

I am more than happy to send over a pull request if that sounds good to you.

sysid commented 2 years ago

@gagantrivedi, thanks for this suggestion. I am happy to accept a merge request for this. However, please make sure that the README clearly documents this special use case for Cache-Control. For "simple" use cases no-cache is required, and most users might otherwise be confused.
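
A short README example contrasting the two cases would probably avoid that confusion, something roughly like this (only a sketch, feel free to word it differently):

    from sse_starlette.sse import EventSourceResponse

    async def events():
        yield {"data": "hello"}  # your own event generator goes here

    # Simple case (most users): keep the default. The response goes out with
    # "Cache-Control: no-cache" so intermediaries never replay stale events.
    async def plain_stream(request):
        return EventSourceResponse(events())

    # CDN fan-out case (e.g. Fastly request collapsing): override the header
    # with a TTL matching your segment length, e.g. 29s for 30-second segments.
    async def cacheable_stream(request):
        return EventSourceResponse(
            events(),
            headers={"Cache-Control": "public, max-age=29"},
        )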