Closed: phryneas closed this 1 year ago
@patrick91

what are the cases in which you'd use defer if doing SSR?

Generally, all your client components will run on the server in one SSR pass if you use the app folder. So if you use @defer and the app folder, you use "@defer with SSR".
So this tries to give a few different options for this to "make sense": either by removing all @defer-ed stuff during SSR and letting that happen on the client (where a fetchPolicy has to make sure that the requests will run again!), or by setting an "upper time bound". If the deferred fields return fast, they are already taken into account on the server; if not, they won't block the request forever, and the client can later take over.
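The "upper time bound" behavior can be sketched independently of Apollo's actual link implementation. The helper name and shape below are illustrative only, not the real API: deferred chunks that resolve before the cutoff are included in the SSR payload, and the rest are counted as still pending so the client can take over.

```typescript
// Illustrative sketch only: wait for deferred chunks up to an upper time
// bound. Chunks that resolve in time go into the SSR payload; the rest are
// left for the client to fetch again.
async function accumulateWithCutoff<T>(
  chunks: Promise<T>[],
  cutoffMs: number
): Promise<{ settled: T[]; pending: number }> {
  const settled: T[] = [];
  let pending = chunks.length;
  const cutoff = new Promise<void>((resolve) => setTimeout(resolve, cutoffMs));
  await Promise.all(
    chunks.map((chunk) =>
      Promise.race([
        chunk.then((value) => {
          settled.push(value);
          pending--;
        }),
        cutoff,
      ])
    )
  );
  // Snapshot, so chunks resolving after the cutoff don't mutate the result.
  return { settled: [...settled], pending };
}
```

If every deferred field resolves before the cutoff, this returns early with pending === 0; otherwise it returns at the cutoff and reports how many chunks the client still has to fetch.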
Ah yes, man I was still thinking about RSC :D
Yeah, RSCs really don't add the complexity we were thinking in the beginning - it's all in the streaming SSR now 😅
I have ported the "Hack the Supergraph" demo over to the next13 app directory and added that to this PR, so we can use that to experiment with @defer in SSR.
We discussed in person the expected behavior and the use case of the maxDelay option.
My initial understanding was stuck at "why would we want this in the first place, since streaming partial data feels incorrect?" Thus my gut reaction was to disallow @defer and @stream on the server.
After being shown a demo, I now see the value in this feature. The client will make a request if it determines it cannot fulfill the query's data requirements from the server render. This allows the component to ensure it has the full data set. This strikes a nice balance between the benefits of SSR (faster time to render) and the downsides of potentially slow fields/queries. The maxDelay allows a developer to execute as much of the query on the server as possible, while putting a limit on it so that the client still receives a fast render.
In our messaging I think we just need to describe the balance being struck with maxDelay and the use case it fulfills. It's also necessary to mention that the client will make an additional request to fulfill the rest of the query, so a user isn't just left with partial data and no way to fulfill the rest (it might also be good to mention that this only affects queries with @defer and @stream). Yes, this means the client will "refetch" some of the data that was already fetched on the server, but this is the tradeoff you make as a developer. In this way, I think of maxDelay as a lever. On one end, you have the guarantee of data completeness, albeit at the cost of a (potentially) slow query. On the other end, you can render faster at the cost of completeness and a (potential) extra client request. I believe framing it this way can help a developer decide which way to pull the lever.
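The two ends of that lever can be shown with a toy model (a hypothetical helper, not part of the actual link): given the arrival times of deferred chunks, the cutoff decides which ones make it into the server-rendered payload.

```typescript
// Toy model of the cutoff lever. Arrival times are in milliseconds.
function serverRendered(arrivalsMs: number[], cutoffMs: number): number[] {
  return arrivalsMs.filter((t) => t <= cutoffMs);
}

const arrivals = [5, 50, 800];
serverRendered(arrivals, 0);        // fastest render: client fetches everything
serverRendered(arrivals, 100);      // middle ground: [5, 50] come from the server
serverRendered(arrivals, Infinity); // complete data: slowest possible render
```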
We also discussed the possibility of renaming maxDelay to something more in line with its nature. This is really about the wait time between the first chunk of a deferred query (deferred encompassing @defer and @stream usage) and the time the data is flushed to the client. Perhaps we use a name with "defer" or "flush" in it to reflect this.
Some ideas:
flushDeferredTimeout
flushTimeout
flushWaitMs
deferredWaitTimeout
Note: I'm not in love with any of these. These are mostly to get the brain thinking about additional possibilities.
We could perhaps also consider adding an Ms suffix to make it clear that this accepts milliseconds as its unit (e.g. maxDelayMs).
I renamed the DebounceMultipartResponsesLink to AccumulateMultipartResponsesLink and renamed the maxDelay option to cutoffDelay. I also made it clear in the docblock that we are talking about milliseconds, so it should show up in IDEs.
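For reference, wiring the renamed link into a client could look roughly like this. This is a configuration sketch: the import path for AccumulateMultipartResponsesLink is a guess based on this PR rather than a published package name, the endpoint URL is a placeholder, and cutoffDelay: 100 is an arbitrary value.

```typescript
import { ApolloClient, InMemoryCache, HttpLink, from } from "@apollo/client";
// NOTE: hypothetical import path; check the package this PR lands in.
import { AccumulateMultipartResponsesLink } from "@apollo/experimental-nextjs-app-support";

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: from([
    // Accumulate multipart (@defer / @stream) chunks for up to 100 ms
    // during SSR before flushing; later chunks fall back to the client.
    new AccumulateMultipartResponsesLink({ cutoffDelay: 100 }),
    new HttpLink({ uri: "https://example.com/api/graphql" }),
  ]),
});
```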
@jerelmiller do you think that helps communicate the use case better?
I'm gonna merge this in as it is at this point. Further discussion is very welcome, but I'm trying to prevent a tangle of pull requests here, and I think this is at a pretty good stage.
This lays the foundation for a multipart request story that should also work in SSR scenarios.
Any comments are very welcome!
For nicer reading, see the rendered README