steve-marmalade opened this issue 4 months ago
@ztanner , tagging you in the hopes that I've misconfigured something in a way that will be easy for you to spot :pray:. This one is kind of a big deal for us because I believe it's increasing our server costs significantly (since we're paying to do the same work many times when a user navigates to a page).
Do you do any API fetch requests on the server side? I've noticed this behaviour on parallel routes where I rely on session and other fetched data: in dev it builds the endpoint, comes back with the data, and rebuilds the parallel page, over and over again.
Hi @steve-marmalade --
Thanks for the great repro and description, as always 😄
As a quick workaround for the issue you're describing here, there are a couple of options:

- add `prefetch={false}` to the links, or
- remove `app/[letter]/loading.tsx` and implement loading behavior with a regular `<Suspense>` boundary.

Now, for an explanation of why it's happening: prefetch requests return RSC data up to the nearest loading boundary. This is so that a prefetch doesn't lead to overfetching data, especially if you're linking to a page that has an expensive subtree. When a route is prefetched, the idea is that clicking the link will immediately show the prefetched loading boundary, while the part that suspended is streamed in. In this case, since there's a `loading.tsx` in the `[letter]` segment, upon navigation the router thinks that all of the parallel slots are missing data (because the server sent `null` data for them), triggering data fetches for them all.
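For reference, the second workaround might look like the sketch below (file paths follow the repro; `SlowGrid` is a hypothetical stand-in for whatever suspends in your page):

```tsx
// app/[letter]/page.tsx — after deleting app/[letter]/loading.tsx,
// wrap only the slow subtree in a regular <Suspense> boundary, so the
// prefetch isn't cut off at a segment-level loading boundary.
import { Suspense } from "react";
import SlowGrid from "./slow-grid"; // hypothetical async server component

export default function Page({ params }: { params: { letter: string } }) {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <SlowGrid letter={params.letter} />
    </Suspense>
  );
}
```

The first option is even smaller: `<Link prefetch={false} href="/a">a</Link>` opts the link out of prefetching entirely.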
I think I see a way to optimize this, and will work on a PR this coming week. Thanks for the heads up!
Thank you for the detailed description and the workaround (which does seem to alleviate the problem!)
Just want to clarify one thing for my own understanding:
> the router thinks that all of the parallel slots are missing data (because the server sent `null` data for them), triggering data fetches for them all.
Even in this case where the router triggers a data fetch for each slot, I still wouldn't expect those requests to take ~1 second each in my demo, because most of the slots are just returning `null` from `default.tsx`. Put another way, why does triggering a fetch for `/app/[letter]/@grid2/2/page.tsx` entail running `app/[letter]/page.tsx` and `app/[letter]/@grid/page.tsx`?
Right now data is fetched per-page, rather than having the ability to more granularly fetch per-segment/slot. This is something we're planning to address soon so that the Next.js server can be smarter about only requesting the data it needs, rather than returning more data than necessary.
At the moment, when the router sees missing data for the `@grid3/default` segment, it triggers a fetch for the page it's currently on to retrieve that data (i.e., `fetch("/a")`). The common layout won't be refetched, but everything in between will. This means that it'll need to wait on any slower requests that also get matched in that request, since it's not currently possible to say "only fetch me the data for `/app/[letter]/@grid3/default.tsx`, but not `/app/[letter]/page.tsx`".
Heya @ztanner , I know there's a ton going on -- any chance this fix makes it into Canary this week?
Hey @steve-marmalade -- I talked with some folks on the team and the concern is that, while many people use `default.tsx` to return `null`, you could just as easily have a `default.tsx` return an expensive subtree, which would go against the reason we only return up to the loading segment for prefetches in the first place.
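To make the concern concrete, here is a hedged sketch of a `default.tsx` that is not a cheap no-op (`fetchExpensiveStats` and `StatsPanel` are hypothetical names, not from the repro):

```tsx
// app/[letter]/@grid3/default.tsx — hypothetical: instead of returning
// null, this default renders an expensive subtree, so the router cannot
// safely assume defaults are free to fetch eagerly.
import { StatsPanel, fetchExpensiveStats } from "../stats"; // hypothetical

export default async function Default() {
  const stats = await fetchExpensiveStats(); // imagine a slow upstream call
  return <StatsPanel data={stats} />;
}
```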
I'll need to think through if there's an alternate approach we could do here, which may take a little bit more time. 🙏
Got it, thanks for the update.
Relatedly: would you expect that scrolling to an `id` on the current page would also trigger the fetch requests for each slot (regardless of whether `loading.tsx` is present)? E.g. `<Link href="#section-a">Section A</Link>`. I didn't expect a Link with only a hash ID to make any fetch requests to the server. For now, I can replace it with JavaScript `scrollIntoView`, but figured I'd raise it in case it's unexpected to you as well.
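For anyone else hitting this, the `scrollIntoView` replacement might look like the following sketch (the `section-a` id follows the example above; `SectionAnchor` is a name I made up, and this runs client-side only):

```tsx
"use client";

// Scroll to the section directly instead of routing through
// <Link href="#section-a">, which goes through the app router.
export function SectionAnchor() {
  return (
    <button
      onClick={() =>
        document
          .getElementById("section-a")
          ?.scrollIntoView({ behavior: "smooth" })
      }
    >
      Section A
    </button>
  );
}
```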
Link to the code that reproduces this issue
https://github.com/marmalade-labs/parallel-route-rerender
To Reproduce
Run `bun run build && bun start`, or navigate to https://parallel-route-rerender.vercel.app/a.
Current vs. Expected behavior
Current Behavior
If you open the Network tab in DevTools, when you click a card you will see 21 fetches for the current route (with different `?_rsc=` params), which I believe correspond to the 21 different parallel slots in the layout. Maybe this is intended, in the sense that each request might correspond to a specific slot. However, each request appears to be re-running the matched slots (which will always and only be `/app/[letter]/page.tsx` and `/app/[letter]/@grid/page.tsx`). I demonstrate this by adding a 1 second sleep in those pages, but not in any others (which should just be rendering a no-op `default.tsx`), and yet we can see that each of the 21 requests takes > 1 second.
Expected Behavior
Whether or not we expect 21 different HTTP requests, I would only expect each slot to be rendered once. So we should either see 1 request that takes ~1 second, or 21 requests where 2 of them take ~1 second and the rest take <100 ms.
Provide environment information
Which area(s) are affected? (Select all that apply)
Navigation, Parallel & Intercepting Routes
Which stage(s) are affected? (Select all that apply)
next start (local), Vercel (Deployed)
Additional context
No response