claudiopro / react-fiber-vs-stack-demo

⚡️ React Fiber vs Stack Demo
https://claudiopro.github.io/react-fiber-vs-stack-demo

Is Fiber actually fast in this demo? #4

Closed jsobell closed 5 years ago

jsobell commented 6 years ago

Here's the same demo using Aurelia: http://aurelia-thousand-nodes.bigopon.surge.sh/

I'm a bit surprised to see that the Aurelia version performs so much better than Fiber on CPU usage during our tests, and that its nodes also always update correctly under Chrome. Am I missing something here?

ryansolid commented 5 years ago

I think the point is less about raw performance and more about scheduling. Fiber is a huge improvement for React, but other approaches are certainly going to be faster for fine-grained nested data changes. What is unique about Fiber is how it can choose to do less work under load. One might consider this cheating from a benchmark perspective, but the question isn't how fast it is when it's already fast enough, it's how fast it is under more extreme load. It slows down more gracefully, or at least it does in theory.
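
Roughly, the time-slicing idea looks something like this (an illustrative sketch only, not React's actual scheduler; the work queue and `performUnitOfWork` here are stand-ins I'm making up):

```js
// Illustrative sketch only, not React's actual scheduler: rendering work is
// broken into small units and the loop yields whenever the frame's idle budget
// is spent, so the browser can keep animating in between chunks.
const units = Array.from({ length: 100_000 }, (_, i) => i); // pending work units
let next = 0;

function performUnitOfWork(unit) {
  // Stand-in for reconciling a single node; real work would build/diff a tree.
  return Math.sqrt(unit);
}

function workLoop(deadline) {
  while (next < units.length && deadline.timeRemaining() > 1) {
    performUnitOfWork(units[next++]);
  }
  if (next < units.length) {
    // Not finished: resume in the next idle period instead of blocking the frame.
    requestIdleCallback(workLoop);
  }
}

requestIdleCallback(workLoop);
```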

jsobell commented 5 years ago

The problem of maintaining smooth animation is a very real one, and if you are prepared to prioritise the scaling of an element over its content, you could adopt this approach. But if (like most people) the data itself is the most important aspect, and animation is visual candy, this approach needs re-thinking: DOM updates should be given higher priority than animation. I would much prefer slow animations to non-updated (or partially updated) content. High-load situations should sacrifice animation fluff, and data values should be the first priority. I can understand what Fiber is trying to do here, but to me it's akin to saying "We won't bother applying validation rules if the CPU is busy". I feel it's misleading and dangerous, because users make decisions based on the rendered content, and in this case I'm still seeing content from 10 seconds earlier.

ryansolid commented 5 years ago

It is an interesting choice they made in the implementation here. I do find it a bit alarming that it never updates under continuous load. The requestIdleCallback approach seems too naive. I could accept giving up content updates that were coming in too fast, as long as I was still getting them at some rate; 10 seconds out of date, for example, is way past what is acceptable. I guess in practice it wouldn't be the case, but it makes benchmarks pointless.

Perhaps this demo is just a poor example of a benchmark. I could wrap the state update in a requestIdleCallback in the library of my choice and get the same result without having to rewrite anything from scratch. I get the impression this is only the beginning, but so far it does no more than what one could achieve with a one-line change, something like the sketch below.
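
Something along these lines (a hedged sketch; the element id and function names are invented for illustration, not any library's API):

```js
// Defer the actual view update to an idle callback and coalesce bursts,
// rendering only the most recent value; the same "do less under load"
// behaviour, without rewriting the framework.
let latestValue = null;
let updateScheduled = false;

function onDataTick(value) {
  latestValue = value;
  if (!updateScheduled) {
    updateScheduled = true;
    requestIdleCallback(() => {
      updateScheduled = false;
      applyToView(latestValue); // only the newest value ever reaches the DOM
    });
  }
}

function applyToView(value) {
  document.getElementById('node-value').textContent = String(value);
}
```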

jsobell commented 5 years ago

Absolutely, throttling or debouncing bindings is a commonly desirable feature; in fact it's a standard feature of Angular and Aurelia bindings. I'm not sure this is a poor example so much as a demonstration of inappropriate buffering. If an optimisation technique is lossy, it must only lose information the developer considers unimportant (such as CSS animation). As you say, a forced data update every _X_ ms would suffice, although this may re-introduce the jerky motion, so performance once again becomes important. Perhaps the performance of the whole underlying render functionality is what needs a review, rather than trying to hide performance issues in this way.
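
A rough sketch of the forced-update idea (names like `createThrottledUpdater`, `applyToView` and `maxAgeMs` are ones I'm inventing here, not any framework's API):

```js
// Updates are throttled under load, but a trailing flush guarantees the view
// is never more than maxAgeMs out of date, so data is never dropped entirely.
function createThrottledUpdater(applyToView, maxAgeMs = 100) {
  let lastFlush = 0;
  let pending;
  let timer = null;

  function flush() {
    clearTimeout(timer);
    timer = null;
    lastFlush = performance.now();
    applyToView(pending);
  }

  return function update(value) {
    pending = value;
    const elapsed = performance.now() - lastFlush;
    if (elapsed >= maxAgeMs) {
      flush(); // enough time has passed: render immediately
    } else if (timer === null) {
      // Trailing flush so the last value in a burst is still rendered on time.
      timer = setTimeout(flush, maxAgeMs - elapsed);
    }
  };
}

// e.g. const update = createThrottledUpdater(v => { node.textContent = v; }, 100);
```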