rijk opened 9 months ago
However, because `<Preview>` is a client component, everything in `<Content>` will be rendered on the client as well. This makes it impossible to use any server components inside your portable text.
This is unfortunately a known limitation of RSC as of its current design. The live preview lives in the browser and achieves low-latency previews by using `postMessage` to send updates from the Sanity Studio over to your app iframe. For this to work, all of your preview logic needs to be client components, and for portable text this means you'll need two separate component trees: the preview tree uses `useQuery` instead of the Suspense boundaries and `loadQuery` pairings used in production.

Once React v19 comes out we'll be able to give you suspense boundaries with `useQuery` and make it easier to reuse code in the two trees. But unless the design of RSC changes, there will be no way to magically grab a server component and render it client side, nor for a client component to import a server component and have it work seamlessly.
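Concretely, the two trees usually end up looking something like this sketch, assuming `@sanity/react-loader` for the data fetching (all component and query names here are hypothetical):

```tsx
// app/posts/[slug]/page.tsx — the server tree, used in production
import { draftMode } from 'next/headers'
import { loadQuery } from '@sanity/react-loader'
import { PostContent } from './PostContent' // server component, can nest other RSCs
import { PostPreview } from './PostPreview' // client component, see below

const POST_QUERY = `*[_type == "post" && slug.current == $slug][0]` // hypothetical

export default async function Page({ params }: { params: { slug: string } }) {
  const initial = await loadQuery(POST_QUERY, params)
  return draftMode().isEnabled ? (
    <PostPreview query={POST_QUERY} params={params} initial={initial} />
  ) : (
    <PostContent post={initial.data} />
  )
}
```

```tsx
// PostPreview.tsx — the client tree, used only for previews
'use client'
import { useQuery } from '@sanity/react-loader'
import { PostContentClient } from './PostContentClient' // client-only twin of PostContent

export function PostPreview({ query, params, initial }) {
  // re-renders in the browser as edits stream in from the Studio
  const { data } = useQuery(query, params, { initial })
  return <PostContentClient post={data} />
}
```

Note that `PostContent` and `PostContentClient` have to be maintained as two near-identical implementations, which is exactly the duplication described above.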
We've spent months of engineering trying to find alternative paths, but they all had worse drawbacks than maintaining two render trees, one for production and one for previews.
We're keeping a close eye on the Next v14 canaries and the upcoming release of React v19, hoping they'll give us better tools to build live previews.
Yeah, I thought it was a pretty fundamental issue. What about offering a slightly-higher-latency option that, instead of initiating a client-side listener + renderer, just revalidates some backend query, leading to a rerender of a server component (e.g. `<Page>` in the example above)? This would leave diffing + reconciliation to React, which would still prevent unnecessary rerenders, and since the whole RSC payload for the blocks is rendered on the server, you're free to e.g. load data right in your block components. I'm sure that is an option you considered, but I'm curious about the drawbacks you found.
We are indeed working on offering a higher-latency option. We've performance-profiled two main variants:

- `revalidateTag` from a server action
- `router.refresh()` and hitting our API directly with no cache layers in-between

So far it looks like `revalidateTag`-based approaches aren't able to ask just the RSCs that are affected by a change to send new payloads. What happens instead is that the entire tree is re-rendered, and the cache tags only affect whether data fetches read from the Vercel Data Cache, or do the round trip of hitting our API, then writing to the Vercel Data Cache, before returning the payload.
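To make that distinction concrete, here's a toy in-memory model (not Next.js internals) of the behavior described above: every render re-runs every fetch in the tree, and the tag only decides whether a fetch is a cache hit or a round trip to origin.

```typescript
// A toy model of tag-based data-cache revalidation. The whole "tree"
// re-renders on every pass; tags only control cache-hit vs origin.
type Entry = { value: string; stale: boolean; tags: string[] }

class ToyDataCache {
  private entries = new Map<string, Entry>()
  originHits = 0

  // Mark every cached entry carrying this tag as stale.
  revalidateTag(tag: string) {
    for (const entry of this.entries.values()) {
      if (entry.tags.includes(tag)) entry.stale = true
    }
  }

  // Called on every render; returns cached value unless stale/missing.
  fetch(key: string, tags: string[], origin: () => string): string {
    const hit = this.entries.get(key)
    if (hit && !hit.stale) return hit.value // served from cache
    this.originHits++ // round trip to origin
    const value = origin()
    this.entries.set(key, { value, stale: false, tags })
    return value
  }
}

// Simulate: every "render" re-runs both fetches (the entire tree re-renders).
const cache = new ToyDataCache()
const renderPage = () => [
  cache.fetch('post', ['previewDrafts'], () => 'post-from-origin'),
  cache.fetch('settings', ['settings'], () => 'settings-from-origin'),
]

renderPage() // first render: both fetches go to origin
renderPage() // second render: both served from cache
cache.revalidateTag('previewDrafts')
renderPage() // third render: only the tagged fetch goes back to origin
console.log(cache.originHits) // 3: two initial fetches + one revalidated
```

In this model, revalidating a tag never shrinks the amount of rendering work, only the amount of origin traffic, which matches what we measured.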
We expected `revalidateTag` to outperform `router.refresh()`, but our testing shows that's not the case as of today.
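For reference, the `router.refresh()` variant boils down to a small client component along these lines (the message shape shown here is hypothetical; in practice Presentation wires this up for you):

```tsx
'use client'
import { useEffect } from 'react'
import { useRouter } from 'next/navigation'

export function RefreshOnContentChange() {
  const router = useRouter()
  useEffect(() => {
    const onMessage = (event: MessageEvent) => {
      // hypothetical event shape: react to a "content changed" signal
      if (event.data?.type === 'sanity/changed') {
        // re-runs the server components and streams a fresh RSC payload,
        // while React reconciles and preserves client state
        router.refresh()
      }
    }
    window.addEventListener('message', onMessage)
    return () => window.removeEventListener('message', onMessage)
  }, [router])
  return null
}
```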
This will change in the future as Vercel starts to explore other avenues of optimising RSC use cases after they land Partial Pre-Render.
The above tests use `unstable_emitExperimentalRevalidateTagsLoaderEvent` to opt Presentation in to emitting the needed events. You can experiment with this yourself if you wish, but keep in mind that the stable API we're working on will look quite different, so don't rely on it in production :)
> So far it looks like revalidateTag based approaches isn't able to ask just the RSC's that are affected by a change to send new payloads.
Hmm, strange. I'm looking into the tag-based experiment because that is how I would expect it to work (and also know it to work from experience), so maybe something is going wrong there. I'm not sure what you mean by "the RSC's that are affected", though.
Looking at the code, I think the `revalidateTag` call will automatically trigger a refetch for any `loadQuery` that has this `previewDrafts:…` tag, which for example could be this `BlogPostPage`. That would indeed rerender that component, but that's fine IMO. Is this not working properly, or is this what you mean by "What happens instead is that the entire tree is re-rendered"?
> and the cache tags only affects what data fetches read from the Vercel Data Cache, or do the round trip of hitting our API, then writing to the Vercel Data Cache, before returning the payload.
I don't know, maybe there is some more internal caching of responses going on then. I just did a test in my app: I added an action that revalidates my `translations` tag:

```ts
'use server'
import { revalidateTag } from 'next/cache'

export async function revalidateTranslations() {
  revalidateTag('translations')
}
```
And when I call that from the client:

```tsx
<button onClick={() => revalidateTranslations()}>Revalidate</button>
```
What happens is really seamless and quite fast as well, with React handling all the complexity involved. So it still feels to me like it would be a great match for this functionality.
In the below screen recording you can see it in action:
https://github.com/sanity-io/visual-editing/assets/159500/d54b7a11-6d55-46ec-818f-45934f27be96
If you look at the RSC stream that sends the updated render tree, you'll see it sends the full tree every time. Whether you use a granular `revalidateTag` or a `revalidatePath('/', 'layout')` only affects the data revalidation and which fetches go all the way to origin. RSCs rerender every time; the only difference is whether they read from cache, or go to origin and then write to the cache.
That doesn't mean it's going to stay this way, it feels like it's just an opportunity Vercel has to optimize in the future.
In any case, we have an API you can use to test and find which strategy works best for your case.
```sh
npm install --save-exact next-sanity@canary @sanity/presentation@latest @sanity/visual-editing@latest
```
The new API lets you pass a server action to the new `refresh` prop:
```tsx
// app/layout.tsx
import { draftMode } from "next/headers"
import { VisualEditing } from "next-sanity"

export default function RootLayout(props) {
  return (
    <html>
      <body>
        {props.children}
        {draftMode().isEnabled && (
          <VisualEditing
            refresh={async (payload) => {
              'use server'
              // use the payload to call revalidatePath or revalidateTag as you like
            }}
          />
        )}
      </body>
    </html>
  )
}
```
If you give it a try please do share your experience with it and how it performs 😄
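If it helps, a first `refresh` handler could look something like this sketch; the payload fields accessed here are assumptions, so inspect what you actually receive before relying on them:

```tsx
import { revalidatePath, revalidateTag } from 'next/cache'

<VisualEditing
  refresh={async (payload) => {
    'use server'
    // Hypothetical: if the payload identifies the changed document's type,
    // revalidate just that tag; otherwise revalidate the whole layout.
    const type = (payload as { document?: { _type?: string } })?.document?._type
    if (type) {
      revalidateTag(type)
    } else {
      revalidatePath('/', 'layout')
    }
  }}
/>
```

This assumes your `fetch`/`loadQuery` calls are tagged with the document type; adjust the tagging scheme to whatever your queries actually use.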
Thanks, I'll give it a shot. Do you have tips for visualizing the RSC stream? The Chrome Network tab just shows an error for those fetches.
@rijk any update? We are facing the same issue.
@stipsan Has the official recommendation here changed at all? I think this might be part of what is causing my editor experience to be pretty sluggish. Trying to make sure my implementation is following best practices, but pretty hard to tell what those are at the moment.
The official recommendation is to use the simple setup when on RSC, for example like this official template: https://github.com/vercel/next.js/tree/canary/examples/cms-sanity A known trade-off is that you'll get seconds of latency between typing into a field and seeing the preview reflect the latest content you entered. This is the result of several round trips, ending with the server running `client.fetch` again, and we have to wait for it to resolve before the RSC is done.

It's possible to eliminate the latency introduced by these round trips by using `@sanity/react-loader`, and we do maintain an official RSC example that implements it: https://github.com/sanity-io/template-nextjs-personal-website

However, we don't currently recommend this pattern if it can be avoided, as it comes with a number of trade-offs, dramatically complicating userland architecture being the most pressing one. The TL;DR is that RSC was never designed to dynamically choose whether to do all its data fetching on the server (ideal for production), or, in a live preview context, to let components become client components that receive events with changed data and then rerender your app with low latency entirely in the browser, instead of needing long server-client round trips.
In other words, what we, as Sanity, want is an API for production that works like `next dev` does when you edit a component, only with data fetching instead. Until Next.js offers a first-class primitive for Hot Content Reloading, we're left with trying to find the least terrible option 😅
These are the two options we recommend; there are others, but they rely on hacks and undocumented behavior. We're currently exploring a third option, but it's still early and we don't yet know if it'll pan out.
React Server Components with component-level data fetching would be a great match for Portable Text modules with additional data requirements. For example:

- a `module.products` block that calls the Storefront API to fetch translations and additional product metadata
- a `module.reviews` block that takes in a Product ID and fetches reviews or a review score from an external source

By nature, it's impossible to know beforehand what data is needed to render a block of portable text. That's why component-level data fetching would work great, with RSC preventing client-server waterfalls.
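For example, a `module.products` block could be an async server component along these lines (the Storefront client and field names are made up for illustration):

```tsx
// ProductsBlock.tsx — a server component rendered for `module.products` blocks
import { storefront } from '@/lib/storefront' // hypothetical Storefront API client

export async function ProductsBlock({ value }: { value: { ids: string[] } }) {
  // Fetch metadata on the server, right where the block is rendered:
  // no client-server waterfall, no need to sync this data into Sanity.
  const products = await storefront.getProducts(value.ids)
  return (
    <ul>
      {products.map((p) => (
        <li key={p.id}>
          {p.title}: {p.price}
        </li>
      ))}
    </ul>
  )
}
```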
## The problem
Visual Editing requires your preview logic to run in client components.
The way you would implement it is to do a switch like this:
Where the `<Preview>` component is just a client component that wraps the same `<Content>` and re-loads the data when it changes:

Looks pretty innocent, right? However, because `<Preview>` is a client component, everything in `<Content>` will be rendered on the client as well. This makes it impossible to use any server components inside your portable text.

## Current workaround
The only current way to get the data in for a block of dynamic portable text is to make sure it's all synced into Sanity first (which is very laborious, involving webhooks, custom sync handlers, etc.), and then manually expanding all those references in your query (also tedious, and leading to other issues).
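For reference, "expanding the references in your query" means dereferencing everything up front in GROQ, along these lines (document and field names are hypothetical):

```groq
*[_type == "post" && slug.current == $slug][0]{
  ...,
  content[]{
    ...,
    _type == "module.products" => {
      // every field any block might need has to be expanded here, ahead of time
      products[]->{ title, price, "imageUrl": image.asset->url }
    }
  }
}
```

The query has to anticipate every block type's data needs, which is exactly what component-level data fetching would avoid.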
Sidenote: I found this PR but I couldn't figure out what it does.