facebook / react

The library for web and native user interfaces.
https://react.dev
MIT License

[Umbrella] Releasing Suspense #13206

Open acdlite opened 5 years ago

acdlite commented 5 years ago

Let's use this issue to track the remaining tasks for releasing Suspense to open source.

Last updated: March 24, 2022

Blog post: The Plan for React 18

Completed: React 16

Completed: React 18 Alpha

Completed: React 18

Features that may or may not appear in 18.x

React 18.x (post-18.0): Suspense for Data Fetching

All of the above changes are foundational architectural improvements to <Suspense>. They fill the gaps in the mechanism and make it deeply integrated with all parts of React (client and server). However, they don't prescribe a particular data fetching strategy. That will likely come after the 18.0 release, and we're hoping to have something during the next 18.x minor releases.

This work will include:

TejasQ commented 5 years ago

Yikes! I missed that. I tried. 😅 Let me take another look at it and get back to you. I must have missed something. 🤷‍♂️

~Update: I'm trying to figure it out here and we can collaborate if anyone's interested to do it together in realtime.~

Second update: @philipp-spiess and I have looked at it and I am genuinely stumped. I still don't understand it. At this point, I'm not sure if it's a bug since this is, in fact, an unstable_ and alpha feature, or if it's something that I am simply not seeing.

In either case, I feel like the core team will either have helpful answers, or be able to use your question to make React even better/more approachable.

Let's see what they have to say. 😄 Thanks for pointing this out, @nilshartmann!

Jarred-Sumner commented 5 years ago

Was this released as part of React v16.6? The blog post shows example code using Suspense:

import React, {lazy, Suspense} from 'react';
const OtherComponent = lazy(() => import('./OtherComponent'));

function MyComponent() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <OtherComponent />
    </Suspense>
  );
}
gaearon commented 5 years ago

Was this released as part of React v16.6?

Only the lazy loading use case, and only in sync mode. Concurrent mode is still WIP.

@nilshartmann

In the following example I would expect Title to be visible immediately, Spinner after 1000ms and UserData after ~2000ms (as "loading" the data for that component takes 2000ms).

I do think you're a bit confused about what maxDuration does. It's a new mental model but we haven't had time to document this yet. So it'll keep being confusing for a while until concurrent mode is in a stable release.

ghengeveld commented 5 years ago

Congrats on announcing the hooks proposal. I'd like to share something with the team. A while ago I released a component called React Async, which has features similar to Suspense. Essentially it handles Promise resolution, and provides metadata such as isLoading, startedAt and methods like reload and cancel, all with a declarative API (and a useAsync hook is on the way).

Now my main concern is how this will integrate with Suspense. For the most part I can probably use the Suspense APIs from React Async and provide users with the familiar and simple React Async API, while offering Suspense's scheduling features for free. For what I've seen, I genuinely think the React Async API is more sensible and approachable compared to the more abstract Suspense APIs. Essentially React Async tries to offer a more concrete API that works for a slightly smaller subset of use cases.

I was surprised to learn of the React Cache library. For React Async I deliberately did not include a cache mechanism but chose to deal with vanilla Promises. Adding caching on top of that is fairly easy.

Finally I'm concerned about accessing Suspense features from custom hooks. Suspense seems to rely heavily on several built-in components, which makes it impossible (I think?) to use these from a hook. Will there be Suspense hooks? Or is there some other way to integrate the two?

Dem0n13 commented 5 years ago

Hello. How can I test code with Suspense/Lazy? Currently renderer.create(...).toTree() throws "toTree() does not yet know how to handle nodes with tag=13".

MuYunyun commented 5 years ago

Why is the maxDuration prop on Suspense only used in Concurrent Mode, rather than in both sync and concurrent modes? Can anyone help explain?

Jessidhia commented 5 years ago

(Right now) it means how long Concurrent Mode is allowed to leave this tree pending before forcing it to commit -- it effectively controls the time slicing deadline, and time slicing doesn't exist in Sync mode. Waiting before committing the tree would necessarily make the commit... not Sync.

punmechanic commented 5 years ago

I've been using Suspense in an internal application for data fetching and very quickly came across the reason why it is not meant to be used for data fetching yet.

Eventually, though, it is meant to be used for fetching data. Given that it seems unlikely the API is going to change significantly except for perhaps the cache provider, how is Suspense meant to work if you need to modify the data after you've fetched it?

As an example, here is a really awful hook from my application.

import React from 'react'
import { Map } from 'immutable' // ImmutableJS Map, per step 2 below

function useComponentList(id) {
  const incomingComponents = useSuspenseFetch(
    React.useCallback(() => getComponentAPI().listComponents(id), [id])
  )

  const map = React.useMemo(
    () =>
      Map(
        (incomingComponents || []).map(component => [component.id, component])
      ),
    [incomingComponents]
  )

  return useCacheValue(map)
}

This hook:

  1. Fetches data using the given callback from the given endpoint
  2. Transforms that data into an ImmutableJS Map - Since this is potentially expensive, I memoize the operation.
  3. Returns the map wrapped in useCacheValue, which is the particularly awkward bit.

useCacheValue looks like this:

export default function useCacheValue(value) {
  const [state, setState] = React.useState(value)
  React.useEffect(() => {
    setState(value)
  }, [value])

  return [state, setState]
}

with the idea being that it is a hook that will respond to value changing (which indicates that the data was refetched) but allows the user to modify the React app's representation of that state. In a way, it acts like a very bad cache (hence the name).

I'm struggling to see how this pattern works well with Redux in its current state. Has there been any discovery into how this might look when written by a programmer that is not me and when suspense is 'ready' for data fetching? As it stands, this is much more laborious than using Redux on its own with imperative fetching flags.

This probably gets a lot simpler once Redux has its own hooks since the main difficulty in making the two play together is that Redux uses a HOC with a context that is not meant to be exposed, but I'd still like to see what the official answer is :)

gaearon commented 5 years ago

Suspense is meant to work with an external cache (not a Hook driven by state). We’ll provide a reference implementation of such a cache that works for simple use cases. Relay will implement its own caching mechanism. Any other tools (such as Apollo) will be able to also implement their own cache that’s compatible, possibly getting inspired by these two implementations.
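For context, the external-cache contract can be sketched in a few lines of plain JavaScript. This is an illustrative pattern only — the wrapPromise and read names are hypothetical, not a React API, and a real cache (like the planned reference implementation or Relay's) would be more involved:

```javascript
// Sketch of a Suspense-compatible cache entry (names are illustrative).
// The cache, not component state, owns the data and its lifecycle.
function wrapPromise(promise) {
  let status = 'pending';
  let result;
  const suspender = promise.then(
    (value) => { status = 'success'; result = value; },
    (error) => { status = 'error'; result = error; }
  );
  return {
    read() {
      if (status === 'pending') throw suspender; // React shows the nearest fallback
      if (status === 'error') throw result;      // surfaces to an error boundary
      return result;                             // ready: render proceeds synchronously
    },
  };
}
```

A component would call read() during render; because the data lives outside React state, mutation and invalidation become cache-level concerns rather than component-level ones.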

Mutation/invalidation isn’t the only question that needs answers. We also need to think about: when to show spinners, common patterns like “inline indicator” which may be outside the suspended tree, coordinating loading states (for things that need to unlock in a top-down order or come in as they’re ready), streaming rendering of lists, how this affects partial hydration, and so on. We’re working on these things but there’s no “official recommendation” on any of them yet. When there is, you’ll know it from the blog where we announce updates.

As a side note, Suspense for data fetching is a sufficiently different mental model than what people might be used to. I don’t think it’s fair to expect it’ll necessarily be as powerful when integrated with very unconstrained mechanisms like Redux. But we’ll see. It’s hard to say anything right now.

ntucker commented 5 years ago

@gaearon When you say "we're working on these things", is there an issue I can subscribe to or are these discussions happening in private?

punmechanic commented 5 years ago

Thanks, @gaearon :)

gaearon commented 5 years ago

@ntucker As always, you can watch ongoing activity as PRs. For example: https://github.com/facebook/react/pull/14717, https://github.com/facebook/react/pull/14884, https://github.com/facebook/react/pull/15061, https://github.com/facebook/react/pull/15151, https://github.com/facebook/react/pull/15272, https://github.com/facebook/react/pull/15358, https://github.com/facebook/react/pull/15367, and so on. We try to put some descriptive information into each PR, and you can see the behavior changes from tests. The documentation bar for experiments is low for several reasons.

We'll post more complete explanations about the chosen model after we're more confident that it actually works. I don't think it'll be productive either for us or for the community if we painstakingly describe every experiment in detail as it happens. Most of our first experiments fail, and documenting and explaining each and every one of them would slow our work to a crawl.

It is even worse that this often results in people building a mental model around something that we later realize doesn't work in the originally designed way. (Like it's happening with maxDuration which we just removed.) So we'd prefer to hold off sharing half-baked ideas until it's a good use of both your time and our time. This is consistent with how we developed React in the past too. When something is truly ready (even for a theoretical writeup of the mental model), we'll focus all our attention on documenting and explaining it.

dudo commented 5 years ago

As a side note, Suspense for data fetching is a sufficiently different mental model than what people might be used to. I don’t think it’s fair to expect it’ll necessarily be as powerful when integrated with very unconstrained mechanisms like Redux. But we’ll see. It’s hard to say anything right now.

@gaearon, fortunately, Suspense's mental model matches with my own perfectly. Very excited for that piece of the puzzle to fall into place. Thank you for all your hard work!

newtack commented 4 years ago

The roadmap announced last November (https://reactjs.org/blog/2018/11/27/react-16-roadmap.html) indicated that the "concurrent" version of Suspense was slated for Q2 2019. We are now well in Q3 2019. Is there an update we can get in terms of definitely not Q3, or maybe Q3, etc.?

fabb commented 4 years ago

This was the latest roadmap update I could find: https://reactjs.org/blog/2019/08/08/react-v16.9.0.html#an-update-to-the-roadmap

gaearon commented 4 years ago

We provided an experimental release back in October: https://reactjs.org/docs/concurrent-mode-intro.html. This is the same build we're running in production. There's still more work to do (both in terms of tweaking the API and building higher-level APIs) but you can start playing with it if you want.

iamstarkov commented 4 years ago

Suspense is killing me

mschipperheyn commented 4 years ago

@gaearon I understand that you use it in production. But I'm very reluctant to use "experimental" code in production. You guys are not being clear about the roadmap, status, progress, timings etc. Is this alpha, beta, RC quality? This term "experimental" says "don't touch this" to me.

We are all banking our businesses on this code and I'm sure we're all as swamped as you guys. A little clarity, a blog, SOMETHING would really help. It almost feels like "It's in production at Facebook, so we're done".

gaearon commented 4 years ago

@mschipperheyn

This is a multi-year project. The honest answer is that it spawned way more work than we thought when we started on it two years ago.

But the good news is that because we now heavily use it in production, the missing pieces are clear and we see the end of the tunnel. It's not theoretical — there is a finite set of things that we need to finish before we can comfortably say it's ready for broad adoption.

Here's a rough state of different workstreams today:

It almost feels like "It's in production at Facebook, so we're done".

I can see how it could look this way, although tbh this reading is a bit demoralizing. :-) We've been working non-stop on this for all the past months, and many of the technical aspects are either finished or close to being finished. Most of the remaining work falls into two categories:

In terms of "can you use it today"... Technically, you can use all of this today. We do. Specifically, we use Relay Hooks and Concurrent Mode. We still have significant planned changes and known issues, so the current state isn't meeting the bar where we would consider it ready for broad adoption. Of course, if you don't mind having bugs or APIs changing right under your hands, you're welcome to use the @experimental releases just like we do.

I wouldn't say Facebook is in a special position here in terms of us "being done". Quite the opposite, we're not done — but internally, we're willing to tolerate churn and build on top of a moving train because that's how we know what we're building is solid. Without this heavy dogfooding, the flaws we discovered in six months would take several years to discover and redesign around.

To sum it up: there's more work to do.

mschipperheyn commented 4 years ago

@gaearon Thank you for this update! And I apologize if my tone was wrong. I admit I was a bit frustrated scouring Twitter for months and finding nothing. This aspect, "Server renderer immediately flushes Suspense fallbacks (available in experimental releases)", looks like something I could allocate time to testing with Apollo GraphQL for our implementation.

gaearon commented 4 years ago

Yep, it should be ready for library authors and curious people to start playing with. The missing pieces are mostly about providing "happy paths" but most of the plumbing should be there.

CrocoDillon commented 4 years ago

Server renderer immediately flushes Suspense fallbacks (available in experimental releases)

Where can I read about this? I was hoping Concurrent Mode API Reference (Experimental) but no luck.

If anyone has a demo stitching Next.js, Relay Hooks and Concurrent Mode together (with SSR) that would be awesome. Otherwise I might just try my luck if I can find sufficient documentation.

gaearon commented 4 years ago

@CrocoDillon

There’s no extra docs about SSR but it’s mostly because it’s just a new default behavior.

If you have an experimental release, then any time a component suspends on the server, we flush the closest Suspense fallback instead. Then on the client you would use createRoot(node, { hydrate: true }).render(<App />).

Note this already enables all the new hydration features. So for example your Suspense boundaries will “attach” to the server generated fallback HTML and then attempt to client render.

Also note that you can start hydrating before your whole app loads. When <App> has loaded, you can hydrate. As long as components below suspend when their code isn’t ready (similar to lazy). What React would do in this case is keep the server HTML content but “attach” the Suspense boundary to it. When the child components load it will continue hydrating. The hydrated parts would become interactive and replay events.

gaearon commented 4 years ago

You can probably ask @devknoll for Next integration attempts/examples. He probably has some.

flybayer commented 4 years ago

You can enable concurrent mode in Next.js by installing react@experimental and react-dom@experimental and adding the following to next.config.js

// next.config.js
module.exports = {
  experimental: {
    reactMode: 'concurrent'
  }
}

Here's the Next.js discussion on this: https://github.com/zeit/next.js/discussions/10645

Alxandr commented 4 years ago

Is it possible to wait for any pending Suspense boundaries during server rendering (for cases such as static site generation, for instance)? I agree that flushing fallbacks is a good default, just wondering if it's overridable?

robrichard commented 4 years ago

Also note that you can start hydrating before your whole app loads. When <App> has loaded, you can hydrate. As long as components below suspend when their code isn’t ready (similar to lazy). What React would do in this case is keep the server HTML content but “attach” the Suspense boundary to it. When the child components load it will continue hydrating. The hydrated parts would become interactive and replay events.

@gaearon do you mean that you can render a component normally on the server, but use React.lazy on the client? Allowing you to return the full mark up from the server, but delay parsing and rendering of the component code on the client. The server rendered markup acts as the suspense fallback here?

gaearon commented 4 years ago

@robrichard I haven't actually tried it with React.lazy specifically (we use a different wrapper at FB and Next also has its own version) but I expect that this is how it would work. Worth verifying :-) With certain limitations — e.g. if you update its props and there's no memo bailout above, or if you update context above it, we'll have to remove it and show the fallback because we don't know what to do with it.

keshavmesta commented 3 years ago

@gaearon what's the current state of Partial Hydration? I know #14717 was merged but I doubt it made it into any release?

gaearon commented 3 years ago

It's been on in every @experimental release for a long time by now, as long as you use the unstable_createRoot API.

gaearon commented 3 years ago

Here's a demo: https://codesandbox.io/s/floral-worker-xwbwv?file=/src/index.js

maraisr commented 3 years ago

@gaearon are you able to please elaborate what you mean by "data driven dependencies"?

gaearon commented 3 years ago

@gaearon are you able to please elaborate what you mean by "data driven dependencies"?

Yes, of course. I must apologize for the brevity of the list above — it's very condensed and many of these are significant separate subprojects (which is why they're taking time).

Let me take a step back and give some broader context before answering your specific question. The broader context is that building a really good data fetching solution is really, really hard. Not just in the sense of the implementation but in the sense of the design. Typically, one would have to make a compromise between colocation (keeping the data requirements close to where the data is used) and efficiency (how early do we start loading the data, and can we avoid network waterfalls). This isn't as noticeable at a small scale, but as the number of components grows you really have to choose between great performance and easy-to-write code. In many cases, unfortunately, you get neither — which is why data fetching in general is such a hot topic.

We have a very high bar for what gets into "official" React. To be "Reacty", it has to compose as well as regular React components do. This means we can't in good faith recommend a solution that we don't believe would scale to thousands of components. We also can't recommend a solution that forces you to write code in a convoluted optimized way to keep it performant. At FB, we've learned a lot of lessons in this from the Relay team. We know not everybody can use GraphQL, or would want to use Relay (it's not by itself very popular and the team hasn't optimized it for external adoption). But we want to make sure that our data fetching solution for general React incorporates the hard-earned lessons from Relay and doesn't suffer from having to choose between performance and colocation.

I want to emphasize this isn't just about big apps. The problems are most noticeable in big apps, but small apps that import a bunch of components from npm also suffer from a subset of these inefficiencies. Such as shipping too much client-side code and loading it in a waterfall. Or loading too much upfront. Also, small apps don't stay small apps forever. We want a solution that works great for an app of any size, just like the React component model works the same way regardless of your app's complexity.

Now, to address your specific question. Relay has a feature called "data-driven dependencies". One way to think about it is an evolution of dynamic import. Dynamic import is not always efficient. If you want to load some code only when a condition is true (e.g. "is user logged in" or "does user have unread messages"), your only option is to trigger it lazily. But this means you are "kicking off" fetching (such as with React.lazy) only when something is used. That's actually too late. For example, if you have several levels of code-split components (or components waiting for data), the innermost one would only start loading after the one above it has loaded. This is inefficient and is a network "waterfall". Relay "data-driven dependencies" let you specify the modules you want to fetch as a part of the query. I.e. "if some condition is true, include this code chunk with the data response". This lets you load all the code-split chunks you're going to need as early as possible. You don't have to trade colocation for performance. This might not seem like a big deal, but it shaved off literal seconds in the product code.

Now, again, we're not putting Relay into React, and we don't want to force people to use GraphQL. But conceptually the feature itself is generic, and having a good open source solution for it would let people do a lot more code splitting than is done today (and thus ship a lot less client JS code!) That generic feature won't be called "data driven dependencies" — that's just a Relay name I referred to. This feature will be a part of a larger recommended solution that doesn't require GraphQL. I just referred to it by that name in the list because this was a lot to explain for a single bullet list point.

I hope this clarifies it.
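To make the waterfall concrete, here is a toy simulation in plain JavaScript. All names and timings are made up (loadChunk stands in for a dynamic import()); the point is only the shape of the two loading strategies, not any real Relay or React API:

```javascript
// Toy model of the code-splitting waterfall (names and timings illustrative).
const loadChunk = (name) =>
  new Promise((resolve) => setTimeout(() => resolve(name + '.js'), 10));

// Lazy waterfall: each chunk is only discovered after its parent renders,
// so the network requests run strictly one after another.
async function lazyWaterfall() {
  const outer = await loadChunk('Outer');
  const inner = await loadChunk('Inner'); // could not start until Outer arrived
  return [outer, inner];
}

// Data-driven: the data response names every module the result will need,
// so all chunks can be fetched in parallel, alongside the data itself.
async function dataDriven(response) {
  return Promise.all(response.modules.map(loadChunk));
}
```

With n levels of nesting, the lazy version pays n sequential round trips; the data-driven version pays one.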

gaearon commented 3 years ago

We've made more progress on this.

https://reactjs.org/blog/2020/12/21/data-fetching-with-react-server-components.html

gaearon commented 3 years ago

It’ll soon be a year after https://github.com/facebook/react/issues/13206#issuecomment-614082019 so I want to provide a small update.

  • Compatibility solution for Flux-like libraries. (in progress @bvaughn, reactjs/rfcs#147)

There’s an initial version of this but we haven’t tested it widely yet. We’ll focus on this more after the initial React 18 release candidate is out.

  • Changing the priority model to a more sensible one. (in progress @acdlite, #18612)

This is done and fixed a dozen bugs.

  • Only allow the last pending transition to finish. (in progress @acdlite)

The important part here is done. There’s a related fix for pending indicators that we’re going to punt on for the initial release because it’s less important but will fix in a follow-up.

  • Offscreen API (in progress @lunaruan)

This uncovered a need for a significant refactor that took us several months. The refactor is done now but we’ll not include the feature itself in the initial release.

  • Fire effects when hiding/showing content for Suspense

This is coming very soon. That’s one of the last significant changes to the model we want to make before stabilising. (It wouldn’t be possible to do without the above refactor.)

  • Show and hide Portal children when needed

Won’t do this. Instead the previous change lets you control it yourself.

  • Align with the ongoing standardization work for Scheduling (not started)

We’ve made a lot of progress here and the layering mostly makes sense now. Might be some other non-blocking work here.

  • Change event semantics. (in progress @sebmarkbage @trueadm)

This is mostly done.

  • Delegate to roots instead of document to enable more gradual adoption (in progress, @trueadm)

This was the sole focus of 17.

  • Flush discrete events in capture phase.

I’m not sure what this referred to but I think it’s either done or no longer needed.

  • Consider getting default priority from event to play nicer with imperative code.

We just did this.

  • Fix major known bugs (in progress @gaearon and @acdlite)

Changing the priority model mostly fixed those. Some of the remaining other ones disappeared after the refactor that was initially prompted by Offscreen work. Yet more disappeared after changing the event defaults. I’d say we’re good here.

  • Finalize other API semantics and defaults. (not started)

That’s one of the remaining areas to polish.

  • A solution for non-GraphQL use cases (in progress @sebmarkbage in collaboration with Next.js).

This is basically Server Components. Work in progress but doesn’t block the release.

  • A generic caching solution. (not started)

We have the first implementation but more API design and implementation tweaks might be needed. This is needed for Server Components too. But not the initial release necessarily.

Ecosystem compatibility and good defaults. It doesn't help if we release something that nobody can use today because it doesn't work with their libraries or existing approaches.

This is active ongoing work.

Implement a streaming server renderer like the one in @acdlite's ZEIT talk

The client hydration support for this was done well over a year ago. The actual streaming SSR development has now started and is in active development. https://github.com/facebook/react/pull/20970

We see the light at the end of the tunnel. This time it’s closer. 🙂 Hang on.

Andarist commented 3 years ago

Fire effects when hiding/showing content for Suspense

This is coming very soon. That’s one of the last significant changes to the model we want to make before stabilising. (It wouldn’t be possible to do without the above refactor.)

Just to confirm - is this the one about cleaning up effects in hidden trees and refiring them when the content gets shown again? Is there a specific plan for how you will roll out this? I suppose this will come behind a flag at first but what's next? Is there a plan to just enable this in React 18?

Do you have any plan/guidance for effects that should be tied to the overall lifecycle of the component? Like for example, it might not make sense to fire some network requests when showing a component again. I guess that the most common answer is to hoist such request but since the Suspense and "suspender" component can be far away from each other so the components in the middle might not easily know about this.

bvaughn commented 3 years ago

Just to confirm - is this the one about cleaning up effects in hidden trees and refiring them when the content gets shown again?

@Andarist Yes. I plan to post a PR with an initial implementation of this (for Suspense and layout effects only) today.

Do you have any plan/guidance for effects that should be tied to the overall lifecycle of the component?

We will provide guidance in the form of documentation and coding examples in the coming weeks. We began rolling out these new effects semantics internally within Facebook to test the concept several weeks ago.

Andarist commented 3 years ago

@bvaughn Great, gonna be looking forward to this PR and discussion/documentation 👍

gaearon commented 3 years ago

Ecosystem compatibility and good defaults. It doesn't help if we release something that nobody can use today because it doesn't work with their libraries or existing approaches.

This is active ongoing work.

I should probably expand a bit more on this.

We don't have large "unknown unknowns" anymore. So the remaining work is mostly about preparing the ecosystem for migration. You might have noticed that we pay attention to incremental migration on the React team. So we want to get this right. Incremental migration, which includes finding the right tradeoff, writing migration guides, and working with the ecosystem library community, is just as important as implementing new features.

Originally the work on Concurrent Mode started with theory. We knew that it makes sense in principle. For example, that you shouldn't continue doing a computation when it is no longer needed (https://github.com/facebook/react/issues/17185#issuecomment-805291391). We started with this one use case but it turns out the same constraints unlock a whole range of features. For example, we didn't plan Suspense at all in the beginning. Suspense came out of a streaming server renderer exploration. But it turned out that Concurrent Mode solves the issues with streaming server rendering too. (Such as letting us start hydration before all the client code is loaded.) We didn't plan Server Components back then, but it turns out Concurrent Mode solves issues there too. (Such as letting us render them while they stream in instead of waiting for the whole response.) This is what happens when you start with a solid theory. As much as this metaphor might seem preposterous in the context of front-end development, it's a lot like math. If the principle is right, concrete applications for that principle will keep coming up. Even the ones you didn't anticipate in the beginning.

Building it went in full swing around the same time as FB started a website rewrite. We latched onto the opportunity to dogfood it and stress-test it there. It was a wild ride, since React is the foundation on top of which a lot of other infra and product code is built. And of course our initial guess had many mistaken assumptions and flaws in it. Not many people realize that Concurrent Mode was controversial at FB. In the beginning of the website rewrite, many issues were attributed to it — some correctly and some not. We kept iterating and fixing both the API and the implementation. It was a lot of hard work. There was an inflection point about a year and a half ago when people no longer wanted to remove it. It started delivering wins. The flawed APIs were replaced, and the product engineers weren't confused anymore. Implementation quirks and bugs were squashed, so the stability was not a concern anymore. We added more missing puzzle pieces (and are still adding them), so the whole picture started to take shape and people could see why all of this effort was worth it. Sure, maybe there was a less risky way to prove out a technology than to rebuild a website on it before it's really "ready", but it helped us catch all the flaws that we'd have to spend many years catching if we were to make it stable in open source at that time.

We know that it works now. In fact, removing it would be a huge regression. But not everyone can rewrite their website. And we can't either — our website is only a small part of our web-based code. In addition to Ads (which drives the revenue), there's a myriad of internal tools that would never be rewritten. So over the past six months, we have shifted our focus from putting out fires and fixing flaws (which we're running out of) to working on the incremental migration strategy. An incremental migration strategy encompasses many aspects. It means removing implementation complexity where it wasn't warranted if it leads to simpler behavior. It means giving you the lever to opt into new behavior instead of turning it on by default. (For example, we are making most kinds of updates synchronous by default, negating many concerns about concurrency, unless you use a particular API that opts into concurrency where you want it.) It means writing migration guides. It means surveying the ecosystem libraries for patterns that might break, talking to their maintainers, and figuring out the adoption strategy. It means polishing the release candidate so that we can rotate fully to ecosystem support. All of this is important work that takes time.

That work is most of what we're busy with these days. Removing, simplifying, streamlining, and preparing.

gaearon commented 3 years ago

By the way. If someone discovers this thread from some Twitter argument, I want to be explicit that we've made a decision to not participate in those. These arguments are draining, don't lead to anything productive, tend to pop up on weekends, ruin our work-life balance, tend to be ill-informed, don't afford nuance or context, and are largely a waste of time for all parties involved. We've never shied away from explaining our position in a longer-form medium such as here on GitHub, and I hope this space stays healthy so that we can continue doing so.

There is definitely an appetite for more frequent updates on the state of our research before it hits stable. The Server Components announcement was a pilot, and given its warm reception, we have a few more things in the pipeline to address that appetite.

But if we have to choose between talking about our guesses and pontificating, versus working on things that actually bring the stable version closer (including all the ongoing work to enable incremental adoption), we've learned enough to choose the latter.

dfabulich commented 3 years ago

In the Server Components FAQ, https://github.com/reactjs/rfcs/blob/2b3ab544f46f74b9035d7768c143dc2efbacedb6/text/0000-server-components.md#why-not-use-asyncawait the React team said that they would prioritize an RFC for Suspense in early 2021, documenting why Suspense-compatible data-fetching APIs have to throw Promise objects in order to suspend, instead of using standard async/await patterns.

I continue to think that there's no way to write that RFC: you'll find that the argument for throwing Promises falls apart when you try to explain it.

This is the big remaining "known unknown" in Suspense's API and design. I'm especially worried to hear y'all say "We don't have large "unknown unknowns" anymore" when IMO it's inevitable that this design will require significant changes during the RFC process.

Andarist commented 3 years ago

I continue to think that there's no way to write that RFC: you'll find that the argument for throwing Promises falls apart when you try to explain it.

Given that this design is already years old, and that they must have discussed it extensively among themselves and with other people inside and outside of Facebook, I find such a statement rather ill-placed. Do you really believe that a team of engineers like this can't explain their choices? You/we/whoever don't have to agree with any particular choice, but to state that a choice is so bad that it can't even be explained is derogatory to the other party.

One of the reasons that immediately comes to my mind is that with async/await you have to opt into asynchronicity and you can't make a synchronous call, so scheduling things becomes much harder and often less predictable. For instance, redux-saga used to have a Promise-based scheduler, but it was so hard to maintain without bugs that it was rewritten to a simpler one using callback-based continuations, which allow for synchronous resuming. So I totally understand the design choice here - even if it looks a little bit funky at first.
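The scheduling difference described above can be shown without any library at all. In the sketch below (all names are illustrative, not from redux-saga or React), a callback-based continuation for an already-available value runs synchronously in the same tick, while an `await` on an already-resolved promise always defers the continuation to a microtask:

```javascript
const order = [];

// Callback style: a cached value is delivered in the same tick.
function readCached(value, cb) {
  cb(value);
}

readCached(1, (v) => order.push(`callback:${v}`));
order.push('after-callback-call');

// Async/await style: even an already-resolved promise resumes later.
async function readAsync(value) {
  const v = await Promise.resolve(value); // forces a tick boundary
  order.push(`await:${v}`);
}

const done = readAsync(2);
order.push('after-async-call');

// At this point the callback continuation has run, but the awaited one
// has not: order is ['callback:1', 'after-callback-call', 'after-async-call'].
```

This is why a scheduler built on callbacks can resume work synchronously, while one built on `await` cannot avoid at least one microtask hop even in the best case.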

gaearon commented 3 years ago

A brief answer is that we needed a pattern that can work both on the server and the client without writing code differently — so that it works e.g. in Shared Components. Async/await inside components gets pretty bad on the client for multiple reasons, including: (1) an unnecessary waterfall of ticks in the most common — already resolved — case; (2) it doesn't solve caching, so you have to add a caching layer anyway; (3) continuations would be obsolete by the time they run, since props/state would often have changed. I'd rather not derail this thread by going in depth on this topic yet (you're right we still need to write it up, although right now we're not yet focusing on this).
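For readers unfamiliar with the thrown-promise mechanism being debated, here is a minimal, framework-free sketch of the idea. The helper names (`createResource`, `renderWithSuspense`) are made up for illustration; React's real integration lives inside the renderer. Note how the already-resolved case in `read()` returns synchronously, with no extra tick:

```javascript
// Wraps a promise so its value can be read synchronously once resolved.
function createResource(promise) {
  let status = 'pending';
  let result;
  const suspender = promise.then(
    (value) => { status = 'resolved'; result = value; },
    (error) => { status = 'rejected'; result = error; }
  );
  return {
    // Synchronous read: returns immediately if cached; otherwise throws
    // the pending promise so the caller can retry after it settles.
    read() {
      if (status === 'pending') throw suspender;
      if (status === 'rejected') throw result;
      return result; // the common, already-resolved case: same tick
    },
  };
}

// A toy "renderer" that retries rendering when a thrown promise settles.
async function renderWithSuspense(render) {
  for (;;) {
    try {
      return render();
    } catch (thrown) {
      if (typeof thrown?.then !== 'function') throw thrown; // a real error
      await thrown; // wait for the data, then retry the render
    }
  }
}
```

A usage sketch: `renderWithSuspense(() => resource.read())` suspends on the first pass and produces the value once the underlying promise resolves, while a second render after resolution reads the cached value with no asynchrony at all.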

dfabulich commented 3 years ago

I certainly agree that this thread isn't the place to hash out the Suspense RFC and async/await issues, but I would like to constructively request that publishing and discussing the Suspense RFC be a priority.

This issue is about releasing Suspense, and the major prerequisites for releasing Suspense. Discussing Suspense with the public in an RFC, making sure you're actually shipping the right thing, is (or should be) a major prerequisite for releasing Suspense.

As it stands, the team is focused on releasing Suspense, but "not yet focusing" on discussing Suspense in an RFC, as if the RFC discussion has very little to do with releasing Suspense.

I'm very concerned that y'all think there's probably little to be gained from public feedback, so the RFC can be put off until the end, when Suspense is already basically done and ready to ship. But pushing the RFC off until later will just make it harder to change direction if/when that becomes necessary.

gaearon commented 3 years ago

I think it’s fair to say that having used this mechanism heavily in production for the last two years, we feel rather confident about it and understand its tradeoffs well. This doesn’t mean feedback won’t be welcome, but we need to be practical with sequencing how we spend our time.

The exact API isn’t central to the paradigm, but the constraints that led us to it are, and any alternative solution would have to satisfy the same constraints. In terms of a concrete solution, if there is a better one that satisfies them, we could replace it — this is the edge of the system so it’s not difficult, aside from figuring out a migration path for things already relying on it. However, if the constraints themselves are something we disagree on, or the desired feature set, that’s a difference an RFC would be unlikely to resolve.

If your concern is that the more time we spend missing some fundamental flaw, the harder it would be to turn around: now that we’re in the final stretch of work, I don't think a delay of a few months would make much difference. That said, I’ll discuss this with the team to see if we can prioritise writing it.

gaearon commented 3 years ago

Today, we're announcing React Labs — a new video series with technical deep dives with the React team members. Our first video is a Server Components Architecture Q&A deep dive. We hope you enjoy it!

gaearon commented 3 years ago

Fire effects when hiding/showing content for Suspense

This is coming very soon. That’s one of the last significant changes to the model we want to make before stabilising. (It wouldn’t be possible to do without the above refactor.)

This landed in https://github.com/facebook/react/pull/21079. We will be testing this internally in the coming weeks.

Compatibility solution for Flux-like libraries. (in progress @bvaughn, reactjs/rfcs#147)

There’s an initial version of this but we haven’t tested it widely yet. We’ll focus on this more after the initial React 18 release candidate is out.

The next item on the list makes it less urgent, by the way:

Ecosystem compatibility and good defaults. It doesn't help if we release something that nobody can use today because it doesn't work with their libraries or existing approaches.

This is active ongoing work.

https://github.com/facebook/react/pull/21072 makes Time Slicing opt-in, which means that if you don't use new features like useTransition or useDeferredValue or <Suspense> refetches, even mutable Flux-like solutions continue to work the way they're written today. We'd still want to offer more first-class support to state management libraries that want to take advantage of the new features, but this change should help resolve a lot of past concerns about the initial upgrade. This has not landed yet, but hopefully this week.
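As a rough illustration of what "opt-in" means here, the toy model below (not React's implementation; `startTransition` and `setState` are stand-ins with assumed semantics) keeps updates synchronous by default and only defers work that is explicitly wrapped in a transition:

```javascript
// Toy model of opt-in time slicing: updates apply synchronously by
// default; only work wrapped in a transition is deferred. This is an
// illustration of the idea, not how React schedules internally.
const applied = [];
let inTransition = false;

function startTransition(fn) {
  inTransition = true;
  try {
    fn();
  } finally {
    inTransition = false;
  }
}

function setState(update) {
  if (inTransition) {
    // Deferrable lane: yield first, so urgent work can land before it.
    queueMicrotask(() => applied.push(`transition:${update}`));
  } else {
    // Default lane: applied synchronously, like existing code expects.
    applied.push(`urgent:${update}`);
  }
}

setState('keystroke');
startTransition(() => setState('filtered-results'));
setState('another-keystroke');

// Synchronously, only the two urgent updates have been applied;
// the transition update lands after the current tick.
```

The relevant property for ecosystem compatibility is the default branch: code that never calls `startTransition` observes the same synchronous ordering it always did.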

Implement a streaming server renderer like the one in @acdlite's ZEIT talk

The client hydration support for this was done well over a year ago. The actual streaming SSR work is now in active development. #20970

We're hoping to reach feature parity with the existing server renderer soon. https://github.com/facebook/react/pull/21153 landed a few days ago, and there's more coming.

samcooke98 commented 3 years ago

With regard to:

Ecosystem compatibility and good defaults. It doesn't help if we release something that nobody can use today because it doesn't work with their libraries or existing approaches.

Is there anything the community can do to help?

At the company I work at, we're heavy users of MobX, and so we would be keen to help figure out how Concurrent Mode / Time Slicing can work with MobX. I imagine there are similar companies out there as well.

gaearon commented 3 years ago

MobX is a tricky case because it diverges from our direction. We want it to work but we can't make it benefit from features built on the opposing principle.

Like, there's no amount of magic that can make an approach fundamentally taking advantage of immutability ("we can render two versions of state independently") fully compatible with an approach fundamentally built on mutability ("changing this property immediately propagates"). I think the most we can count on here is not breaking the apps using it, which we'll be trying our best to do during the community rollout phase. But there's nothing we can do to make it benefit from new features that are built on (and enabled by!) the very opposite principle. That's the nature of picking a tradeoff.

That said, in practice it depends on how much mutation is happening and at what stage. E.g. Relay happens to rely on mutation in its implementation details today, and although technically it's breaking the rules, it works fine at scale because its updates aren't super common (mostly network responses), and we have some fallback mechanisms to recover from failures. We'd like to offer some APIs for mutable stores to integrate better with React, first with deopts like useMutableSource, but maybe later with some more first-class alternative mutable pattern (for example, databases solve this with MVCC) which could work just as well as immutability. I don't know at this point what it would look like, or whether MobX could take advantage of it to support new features well.
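To make the "render two versions of state independently" point concrete, here is a tiny sketch (illustrative names only; the "yield point" stands in for React pausing an interruptible render) of the consistency hazard a mutable store faces, versus rendering from an immutable snapshot:

```javascript
// Invariant the UI depends on: a === b at all times.
const mutableStore = { a: 1, b: 1 };

// A render that reads the store twice, with a pause in between.
function renderFromMutable(yieldPoint) {
  const first = mutableStore.a;
  yieldPoint(); // simulates React yielding; a mutation may happen here
  const second = mutableStore.b;
  return [first, second];
}

// A mutation lands while the render is "paused":
const torn = renderFromMutable(() => {
  mutableStore.a = 2;
  mutableStore.b = 2;
});
// torn mixes two versions of the store, breaking the invariant.

// With immutable updates, the paused render keeps reading version 1:
let immutableState = { a: 1, b: 1 };

function renderFromSnapshot(snapshot, yieldPoint) {
  const first = snapshot.a;
  yieldPoint();
  const second = snapshot.b;
  return [first, second];
}

const snapshot = immutableState;
const consistent = renderFromSnapshot(snapshot, () => {
  immutableState = { a: 2, b: 2 }; // new version; snapshot untouched
});
// consistent reads a single, internally consistent version.
```

This is the "tearing" that libraries built on direct mutation have to contend with once rendering can be interrupted, and why a snapshot/version mechanism (whether immutability or something MVCC-like) is needed to get the full benefit of the new features.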

gaearon commented 3 years ago

As for a broader question, there is definitely a lot the community can do to help once we figure out the full upgrade story. I realize there's been a lot of annoyance with us not publishing some kind of "upgrade guides" earlier, but the whole point is we don't know the full story yet and we're still changing it so that it makes more sense. We don't want to churn people without a really good reason.

I'm literally writing an internal guide now that we'll be using to migrate legacy surfaces to Concurrent roots, even the ones relying on Flux and mutation. We'll see if that goes well. Once we try those out, we'll use them as a basis for the open source guides. We hope to publish this information together with a release candidate, at which point we'll rotate fully to supporting open source libraries and helping them overcome any difficulties and common issues with migration.

At that stage, we expect to gather some common recipes, and we'll definitely welcome (and need!) the community's help to propagate the knowledge and common solutions through the ecosystem. Although, like I said, a bulk of our current work is to make as much as possible just work out of the box — at least until you start using new features. Then you get to decide whether these new features are worth it, both as an application user and as a library author who's considering whether to invest some time in following the rules better.