Closed by parisholley 3 years ago
Seems to work correctly: https://codesandbox.io/s/musing-raman-p5ep3?file=/src/components/Todos.js
Are you using the deprecated suspenseExchange by any chance? It would help us if you followed the issue template and included a repro, ...
Do you mean that render gets called twice? If so, that's expected. State can get updated due to both the suspense cache and the normal one.
This is on a fresh cache, not using any extra exchange, just suspense: true.
https://codesandbox.io/s/mystifying-thompson-1r7lc?file=/src/components/Todos.js
See where I added the console.log; I would not expect that to trigger twice.
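Roughly, the setup is just this (a stripped-down sketch; the query and component here are simplified stand-ins, not the exact sandbox code):

```jsx
import React, { Suspense } from "react";
import { createClient, Provider, useQuery } from "urql";

// suspense: true on the client, no extra exchanges
const client = createClient({ url: "/graphql", suspense: true });

const TodosQuery = `query { todos { id text } }`;

const Todos = () => {
  const [res] = useQuery({ query: TodosQuery });
  console.log("render", res); // fires twice on initial load
  return (
    <ul>
      {res.data.todos.map((todo) => (
        <li key={todo.id}>{todo.text}</li>
      ))}
    </ul>
  );
};

const App = () => (
  <Provider value={client}>
    <Suspense fallback={<p>Loading...</p>}>
      <Todos />
    </Suspense>
  </Provider>
);
```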
The "state" (res
) in this case, is not different between renders. My expectation is by using suspense mode, I eliminate having to watch for res.fetching, and have a single render cycle with res.data
or res.error
. If I have to wrap res in a memo, it defeats the purpose of suspense?
This has been answered before in https://github.com/FormidableLabs/urql/issues/1085
Another good explanation about this: https://twitter.com/tannerlinsley/status/1253000896776073221?s=20
Rerendering isn't taxing; it guarantees correctness.
I'm not arguing about the expense of re-rendering; I'm talking about the semantics of how suspense is supposed to behave. Furthermore, the memo example you gave is still broken: you can't memo off resp. Test it by moving my render console.log inside; it is still invoked twice.
That being said, res.data can be memo'd, but I'd still argue this isn't intuitive behavior. Other libs such as Apollo Client only re-render due to cache changes or initial fetches, cache-and-network, polls, manual refetches, etc...
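For example, memoising derived work off res.data would look something like this (just a sketch with a made-up data shape):

```jsx
import React, { useMemo } from "react";
import { useQuery } from "urql";

const Todos = ({ query }) => {
  const [res] = useQuery({ query });

  // Only recomputed when the res.data reference changes,
  // not on the extra render after the effect runs.
  const sortedTodos = useMemo(
    () =>
      res.data
        ? [...res.data.todos].sort((a, b) => a.text.localeCompare(b.text))
        : [],
    [res.data]
  );

  return (
    <ul>
      {sortedTodos.map((todo) => (
        <li key={todo.id}>{todo.text}</li>
      ))}
    </ul>
  );
};
```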
This is conforming to Suspense semantics: on the first render this source becomes a cached source that throws its promise upwards to your boundary. When that promise resolves, this subtree gets re-enabled and we rerender with the cached source; we then dispose of this cached source and subscribe to the event-stream.
The purpose of Suspense has been achieved: it has handled the loading state for you and we've rerendered with your expected UI. No one will actually feel a downside from this second render, which is the subscription to the event-stream, since it doesn't touch any DOM.
It would be a lot more expensive for libraries to deep-diff your data every time than to just let React do its thing.
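In code terms, that flow looks roughly like this (a heavily simplified sketch, not urql's actual source):

```jsx
// Simplified sketch of the Suspense flow described above (not urql's actual source).
function createCachedSource(fetchResult) {
  let status = "pending";
  let result;
  const promise = fetchResult().then(
    (value) => {
      status = "resolved";
      result = value;
    },
    (error) => {
      status = "rejected";
      result = error;
    }
  );
  return {
    read() {
      if (status === "pending") throw promise; // render #1: suspend to the boundary
      if (status === "rejected") throw result;
      return result; // render #2: the cached result is returned synchronously
    },
  };
}

// After render #2 commits, an effect disposes the cached source and subscribes
// to the live event-stream, which is where the follow-up render comes from.
```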
Maybe I'm not grokking something, but take the example React gives for the data fetch variation of suspense:
https://codesandbox.io/s/frosty-cdn-7vmvd?file=/src/index.js
What I'm describing is no different from how that code sandbox behaves... I only expect the console logs to fire once.
The reason why this is important is that while in most cases I can assume that resp.data will be a JSON object, I as the consumer don't actually know that (whether it is urql or any other library I do not control). What if it is a proxy? What if memo can't actually tell the object changed because the underlying data is hidden behind a proxy? Do I hope that the object returns some type of hash code I can use? This implementation assumes that I can effectively handle a diff or have access to a unique key for memo'ing, vs relying on common rendering semantics.
What you're seeing is an artifact of concurrent mode and reactive state. Essentially the mechanism is similar to useSubscription, which can be found in the react monorepo. What's more important is that, for any number of reasons, the mount and the first run of the effect (think of will-mount and did-mount) can be spread apart and don't have predictable timing.
This in essence comes down to how React schedules updates and renders: it may interrupt a render cycle, it may delay effects after the initial mount for an indefinite and unknown amount of time, or suspense may trigger from other parts of your subtree.
All of these lead to one outcome: mounts and effects are out of sync. An effect basically catches up to a render. For us this means that after mounting some state may have changed. Since in urql all results are computed, this leads to us just updating the state, since changes to it may have occurred in between mounting and the effect, in between a render and the effect, or for any of the other timing reasons.
So in some cases you may see an additional update after mounting, after suspense, or after another update.
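Reduced to plain hooks, that pattern looks roughly like this (a sketch in the spirit of useSubscription, not urql's exact implementation):

```jsx
import { useEffect, useState } from "react";

// `source` is assumed to expose getCurrentValue() and subscribe(cb) -> unsubscribe.
function useSource(source) {
  const [value, setValue] = useState(() => source.getCurrentValue());

  useEffect(() => {
    // The effect runs some unknown amount of time after the render that produced
    // `value`, so we re-read the source to "catch up". If the value moved on in
    // the meantime, this schedules the extra update you're observing.
    setValue(source.getCurrentValue());
    return source.subscribe(() => setValue(source.getCurrentValue()));
  }, [source]);

  return value;
}
```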
The reason why you shouldn't care is that this is exactly why effects exist and why renders aren't side-effect "carriers" in React. You should treat re-renders as necessary, frequent, and unpredictable (i.e. idempotent).
That's why other utilities like React.memo exist, to bail out of React updates that may update part of the v-dom when you know that the updates actually won't change any content.
If I have to wrap res in a memo, it defeats the purpose of suspense?
You won't wrap anything. Typically you'll just let things render, and if you run into a case where a large subtree is rerendering down the chain, you add a single React.memo if it's causing actual issues.
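For example (a rough sketch; the component and query names are made up):

```jsx
import React from "react";
import { useQuery } from "urql";

const TodosQuery = `query { todos { id text } }`;

// Bails out of re-rendering as long as the `todos` prop is referentially equal,
// so an extra parent render doesn't cascade into this subtree.
const TodoList = React.memo(({ todos }) => (
  <ul>
    {todos.map((todo) => (
      <li key={todo.id}>{todo.text}</li>
    ))}
  </ul>
));

const Todos = () => {
  const [res] = useQuery({ query: TodosQuery });
  return <TodoList todos={res.data.todos} />;
};
```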
but take the example React gives for the data fetch variation of suspense
This is a simplistic example of non-updating data unfortunately.
other libs such as Apollo Client only re-render due to cache changes or initial fetches, cache-and-network, polls, manual refetches, etc...
That's unfortunately inaccurate. If a hook wants to be concurrent-safe and not deep-diff its outputs (which is often more expensive to do by default; e.g. remember the old discussion of why shouldComponentUpdate should not deep-diff by default?), then it will trigger more updates rather than fewer, as a rule of thumb.
This is pretty typical when adding reactive data to React that is derived/computed: push rather than pull.
What if it is a proxy? What if memo can't actually tell the object changed because the underlying data is hidden behind a proxy?
I'm not sure what the point here is. There are pretty strong guarantees in any urql setup around what data is returned. There are also pretty strict guarantees around what React expects from hooks. But neither plays into this argument well, I'm afraid 😅
common rendering semantics.
So the tl;dr ends pretty well here: rendering should be treated as an idempotent, hence repeatable, process that is an implementation detail of your UI catching up to state. The component function is essentially how we get there, and React actually makes no guarantees these days on what memoised state is preserved, exactly because React wants to be able to re-render freely. In the future it may even start a re-render, interrupt it, then continue it later, or expand its scope if some state changed across multiple subtrees. This is part of concurrent mode and rather complex, but it's basically React's way of scheduling what it believes is best.
So focus on the existing tools that you have to make renders cheaper, if you need to, and not on reducing the amount of renders.
fair enough :)
Following the latest documentation, putting a useQuery inside a suspended component causes it to render twice on initial load. Still trying to dig through the project source to find the culprit... Using 1.11.3.