yelouafi opened this issue 7 years ago
For info this use case is handled in most of the other virtual dom libs like preact, snabbdom. Inferno is a remarkable case as it will always infer the minimum number of operations (it uses an algorithm to find the longest increasing subsequence on an array containing the indexes of the old elements).
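To make the Inferno approach concrete, here is a sketch (not Inferno's actual code) of the core step: given the positions that the new children had in the old list, the longest increasing subsequence tells you which nodes can stay in place; everything outside it must be moved. The names are illustrative.

```javascript
// O(n log n) LIS via binary search over the tails of candidate subsequences.
// oldIndexes[i] = position that the i-th new child had in the old children.
function longestIncreasingSubsequence(oldIndexes) {
  const tails = []; // tails[k] = index into oldIndexes ending an LIS of length k+1
  const prev = new Array(oldIndexes.length).fill(-1);
  for (let i = 0; i < oldIndexes.length; i++) {
    const x = oldIndexes[i];
    // binary search for the first tail whose value is >= x
    let lo = 0, hi = tails.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (oldIndexes[tails[mid]] < x) lo = mid + 1; else hi = mid;
    }
    if (lo > 0) prev[i] = tails[lo - 1];
    tails[lo] = i;
  }
  // reconstruct the subsequence (as positions in the new children list)
  const result = [];
  let k = tails.length ? tails[tails.length - 1] : -1;
  while (k !== -1) { result.push(k); k = prev[k]; }
  return result.reverse();
}

// Old children: a b c d e  ->  new children: b c d e a
// Old indexes of the new children: [1, 2, 3, 4, 0]
// The LIS covers new positions 0..3 (b c d e), so only 'a' needs to move.
```

Everything whose new position is in the returned subsequence is left untouched; the remaining nodes are moved relative to those anchors.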
Worth noting, "Longest increasing subsequence" is O(n log n), which is theoretically slower than React's O(n). This means Inferno should be faster in the fast case, but slower in the slow case (which is bad)... although realistically you're unlikely to encounter that.
Anyway, this is known, I'm not sure if there are any internal discussions on this at the moment. From my perspective it's problematic; moves in HTML have side-effects and are expensive. Which means you want to make as few of them as possible, but you would also like to do it in a deterministic way so that the resulting behavior is dependable.
Ideally, adding and deleting should not move any existing nodes (React fails this). But there is no universally right approach to moving with the information currently available. You can pick a direction and run with it, that's deterministic (ReactDOM does this, moving stuff to the end always (!) behaves correctly), but it performs very poorly in the other direction. Or you can do something more intelligent like "Longest increasing subsequence", but that can be slower and it will not give you any guarantee as to which nodes will be moved (but should mostly make the right decision, but only mostly).
EDIT: So minimal moves is actually not the preferable metric to test for.
Note: moving audio/video/iframe will cause them to reset/pause and IIRC moving nodes which are currently animating clears the animation, etc. If you're expecting nodes with side-effects to move erratically, the best solution would probably be to use flexbox, as they can be reordered independently of the DOM order and should be cheaper.
Worth noting, "Longest increasing subsequence" is O(n log n), which is theoretically slower than React's O(n).
The case of simple moves like the one above can be handled in linear time, e.g. by trimming the common prefixes/suffixes of the old and new sequences and then handling inversions. There are also other preprocessing optimizations that could handle more cases (e.g. two simple edits) in linear time. Some of them are described here (also see this vdom impl); a good reference is https://neil.fraser.name/writing/diff/. In most cases the preprocessing suffices to find the minimum number of edits. Inferno uses some of these preprocessing steps, so I think it is also faster in the common case.
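The prefix/suffix trimming mentioned above is simple enough to sketch. This is an illustrative implementation (not any library's actual code): it strips the parts of the two key sequences that already agree, so the expensive diff only has to run on the (usually tiny) middle part.

```javascript
// Linear-time preprocessing: trim the common prefix and suffix of the
// old and new key sequences, returning only the differing middle parts.
function trimCommon(oldKeys, newKeys) {
  let start = 0;
  let oldEnd = oldKeys.length - 1;
  let newEnd = newKeys.length - 1;
  // advance past the common prefix
  while (start <= oldEnd && start <= newEnd && oldKeys[start] === newKeys[start]) {
    start++;
  }
  // retreat past the common suffix
  while (oldEnd >= start && newEnd >= start && oldKeys[oldEnd] === newKeys[newEnd]) {
    oldEnd--;
    newEnd--;
  }
  return {
    oldMiddle: oldKeys.slice(start, oldEnd + 1),
    newMiddle: newKeys.slice(start, newEnd + 1),
  };
}

// A simple append ('a b c' -> 'a b c d') reduces to inserting into an
// empty old middle, i.e. it is fully solved by the preprocessing alone.
```

For a pure append or a pure removal at either end, the trimming leaves one of the middles empty and the whole diff is done in linear time without ever building a keymap.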
Worth noting also that if keys are numeric then the internal keymap (used to track moves) will likely end up as an array in the VM, which is much faster. Also, big-O measures can be misleading here because the cost of computations on primitives (mostly arrays & numbers) is not of the same magnitude as DOM operations (esp. reflows and style recalculations).
Ideally, adding and deleting should not move any existing nodes (React fails this). But there is no universally right approach to moving with the information currently available. You can pick a direction and run with it, that's deterministic (ReactDOM does this, moving stuff to the end always (!) behaves correctly), but it performs very poorly in the other direction
The well-known approach in similar domains (text diffs, DNA sequence alignment) is to find the longest common subsequence. The longest increasing subsequence used by Inferno is just a variation. Finding the LCS is the same as finding the minimum edit script between two sequences.
Finding the LCS is known to have an O(MN) cost in the general case which is problematic. But preprocessing optimizations can help reduce the overall cost.
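For reference, here is the textbook O(M·N) dynamic-programming LCS length — this is the cost being discussed, not a suggested vdom implementation. From the LCS length, the minimum number of insertions + deletions is `a.length + b.length - 2 * lcsLength(a, b)`.

```javascript
// Classic DP: dp[i][j] = LCS length of a[0..i) and b[0..j).
// Only two rows are kept, so space is O(N) while time stays O(M*N).
function lcsLength(a, b) {
  const m = a.length, n = b.length;
  let prev = new Array(n + 1).fill(0);
  for (let i = 1; i <= m; i++) {
    const cur = new Array(n + 1).fill(0);
    for (let j = 1; j <= n; j++) {
      cur[j] = a[i - 1] === b[j - 1]
        ? prev[j - 1] + 1                 // extend a common subsequence
        : Math.max(prev[j], cur[j - 1]);  // drop from one side or the other
    }
    prev = cur;
  }
  return prev[n];
}

// For 'a b' -> 'b c': LCS is 'b' (length 1), so the minimum edit script
// has 2 + 2 - 2*1 = 2 operations (remove a, insert c) — no move at all.
```

Note how this ties back to the `a b => b c` example elsewhere in the thread: the LCS answer keeps `b` in place, whereas a prefix/suffix + ends-comparison diff moves it.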
As I said, it's worth trying the preprocessing optimizations first. They will likely reduce the size of the problem (or even solve it entirely, in which case there is no need to go with a costly/non-optimal hashmap-based diff).
There are other non-trivial solutions, although I'm not sure they are applicable in the case of React. For example, in the vdom implementation above I used another algorithm that I found very interesting. The algorithm is described in this paper and also in this excellent article. It has an O(ND) cost, D being the number of edits (insertions & deletions) needed to go from the old to the new sequence. The worst case is when the two sequences are totally different; however, in most cases there are few differences between the two sequences. In my implementation I fix a max value for D, and if the algorithm cannot find the LCS within that bound I fall back to a traditional hashmap approach.
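The bounded-D idea can be sketched like this (illustrative, not the implementation referenced above): run the greedy O(ND) diff of Myers' paper, but give up once the edit count exceeds a cap, so the caller can fall back to a hashmap-based diff.

```javascript
// Myers' greedy forward diff, capped at maxD edits. Returns the number
// of edits (insertions + deletions) between a and b, or -1 if it would
// take more than maxD — the signal to fall back to a hashmap diff.
function boundedEditDistance(a, b, maxD) {
  const n = a.length, m = b.length;
  const offset = maxD;
  // v[offset + k] = furthest x (index into a) reached on diagonal k
  const v = new Array(2 * maxD + 1).fill(0);
  for (let d = 0; d <= maxD; d++) {
    for (let k = -d; k <= d; k += 2) {
      // choose whether to step down (insertion) or right (deletion)
      let x = (k === -d || (k !== d && v[offset + k - 1] < v[offset + k + 1]))
        ? v[offset + k + 1]
        : v[offset + k - 1] + 1;
      let y = x - k;
      // follow the "snake" of matching elements for free
      while (x < n && y < m && a[x] === b[y]) { x++; y++; }
      v[offset + k] = x;
      if (x >= n && y >= m) return d; // reached the end: d edits suffice
    }
  }
  return -1; // more than maxD edits
}
```

The attraction is that the cost is proportional to how different the sequences actually are, which for UI updates is usually small, while the cap keeps the pathological totally-different case from blowing up.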
Note: moving audio/video/iframe will cause them to reset/pause and IIRC moving nodes which are currently animating clears the animation
I think React should at least handle the case of simple moves. Unnecessary moves are likely to trigger weird issues when one loses transient browser state (for an example see this issue https://github.com/hyperapp/hyperapp/issues/310).
Worth noting, "Longest increasing subsequence" is O(n log n), which is theoretically slower than React's O(n).
I don't think React's reconciliation is O(N), because it itself uses a hashmap for tracking moves.
Inferno is a remarkable case as it will always infer the minimum number of operations (it uses an algorithm to find the longest increasing subsequence on an array containing the indexes of the old elements).
It doesn't always produce a minimum number of ops. For example, a b => b c will move node b with a prefix/suffix step, then remove a and insert c.
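To make that trace reproducible, here is a sketch of the four-way ends comparison used by snabbdom-style diffs — just enough to show the `a b => b c` behavior (illustrative, not any library's actual code).

```javascript
// Two-pointer keyed diff: compare the four ends (old start/end vs new
// start/end); if nothing matches, fall through to removals + insertions
// (a real implementation would switch to a keymap-based diff there).
function diffEnds(oldKeys, newKeys) {
  const ops = [];
  let oldStart = 0, oldEnd = oldKeys.length - 1;
  let newStart = 0, newEnd = newKeys.length - 1;
  while (oldStart <= oldEnd && newStart <= newEnd) {
    if (oldKeys[oldStart] === newKeys[newStart]) {
      oldStart++; newStart++;                       // common prefix
    } else if (oldKeys[oldEnd] === newKeys[newEnd]) {
      oldEnd--; newEnd--;                           // common suffix
    } else if (oldKeys[oldStart] === newKeys[newEnd]) {
      ops.push(`move ${oldKeys[oldStart]} to end`); // node moved right
      oldStart++; newEnd--;
    } else if (oldKeys[oldEnd] === newKeys[newStart]) {
      ops.push(`move ${oldKeys[oldEnd]} to front`); // node moved left
      oldEnd--; newStart++;
    } else {
      break; // no end matches: a keymap diff would take over here
    }
  }
  while (oldStart <= oldEnd) ops.push(`remove ${oldKeys[oldStart++]}`);
  while (newStart <= newEnd) ops.push(`insert ${newKeys[newStart++]}`);
  return ops;
}

// diffEnds(['a', 'b'], ['b', 'c'])
// -> ['move b to front', 'remove a', 'insert c']
// Three ops where two (remove a, insert c) would have sufficed.
```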
Worth noting also that if keys are numeric then the internal keymap (used to track moves) will likely end up as an array in the VM, which is much faster. Also, big-O measures can be misleading here because the cost of computations on primitives (mostly arrays & numbers) is not of the same magnitude as DOM operations (esp. reflows and style recalculations).
React never uses numeric keys and will move to Map in the next version IIRC.
I don't think React's reconciliation is O(N), because it itself uses a hashmap for tracking moves.
Hashmaps are asymptotically O(1). Anyway, that was just a side-track.
The well-known approach in similar domains (text diffs, DNA sequence alignment) is to find the longest common subsequence.
Text diffs are often bad in Git, for example, frequently coming to the conclusion of illogical changes rather than what the human actually did or what makes logical sense. This is the same problem: minimal changes with an illogical conclusion are bad when there are side-effects.
The worst case is when the two sequences are totally different; however, in most cases there are few differences between the two sequences.
Also worth noting that React largely favors "fast enough if the worst case is faster". Because no one would really care all that much if it takes 9ms vs 10ms, but if it takes 10ms vs 100ms in the worst case then that is a huge problem.
I think React should at least handle the case of simple moves. Unnecessary moves are likely to trigger weird issues when one loses transient browser state (for an example see this issue hyperapp/hyperapp#310).
Well, it's a matter of perspective. If you have a list which you always append to or move items to the bottom of, e.g. some kind of activity tracker (which pushes updated articles to the bottom), then React actually always exhibits the correct behavior and all the others the wrong one (!): React will only move the elements which have actually moved in that direction, whereas the others will behave somewhat randomly.
However, since the direction/algorithm is not configurable the current state of the reconciliation in React is still poor in avoiding side-effects in general. Unfortunately there is no general solution to this other than making the algorithm configurable or optionally tagging elements with e.g. some kind of incrementing "move priority id" so that moved elements can be singled out when side-effects are important. However, for cases where side-effects are unimportant, then LCS or something similar is likely the best approach yes (but only because moves in the DOM are so unreasonably expensive).
The problem of diffing is greatly simplified in the case of virtual DOM since we assume that keys should be unique. But anyway, I'm not advocating that React should use LCS for reconciliation. That's just one way among many to solve the problem and for sure with its own tradeoffs.
What I'm saying is that React should support more cases of 'simple edits' like the one above for multiple reasons (perf. is just one). And that those cases could be supported efficiently in linear time/space.
Depending on the implementation details of cooperative scheduling in React, doing both a common prefix and suffix (----> <----) optimization to achieve this might be a hard sell if React is intentionally trying to maintain a heuristic of visiting nodes in one direction (---->) as opposed to both (----> <----).
Do you want to request a feature or report a bug?
Not sure if it's a bug or an 'accepted' behavior, but this can affect performance in some situations or even 'break the expectations' in others (e.g. animating moved elements [i.e. simple moves]).
What is the current behavior?
When a child element moves from the end of the list to the front React actually moves all the other elements after the moved/last element instead of simply inserting the moved element at the front of the list.
This can also be stated more generally for an element (or a block of elements) moving backward with a significant shift.
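This behavior follows from React's one-direction heuristic (the `lastPlacedIndex` idea, simplified here from the actual implementation — this is a sketch, not React's code): walking the new children left to right, any child whose old index is lower than the highest old index seen so far gets moved.

```javascript
// Simulate which keys a lastPlacedIndex-style reconciler would move.
function movedKeys(oldKeys, newKeys) {
  const oldIndex = new Map(oldKeys.map((k, i) => [k, i]));
  const moved = [];
  let lastPlacedIndex = -1;
  for (const key of newKeys) {
    const idx = oldIndex.get(key);
    if (idx === undefined) continue;      // new node: insertion, not a move
    if (idx < lastPlacedIndex) {
      moved.push(key);                    // appears "out of order": move it
    } else {
      lastPlacedIndex = idx;              // stays put, becomes the new anchor
    }
  }
  return moved;
}

// '0123456789x' -> 'x0123456789': 'x' (old index 10) is visited first,
// so every digit (old indexes 0..9) counts as moved — 10 moves instead of 1.
```

Conversely, moving an element toward the end ('x0123456789' -> '0123456789x') moves only 'x', which is why the heuristic is optimal in one direction and pathological in the other.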
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem via https://jsfiddle.net or similar (template: https://jsfiddle.net/84v837e9/).
Here is a demo that shows the DOM operations (moves & insertions) performed on DOM nodes during reconciliation. To reproduce the issue: type '0123456789x' in the input field, then click Patch!. Now type 'x0123456789' (moving the last 'x' to the front) and click Patch! again. Here's the output:

Instead of moving the 'x' to the front, React actually moves all the other elements after the 'x'.
Note: the demo uses the MutationObserver API to find out the operations, but you can also verify this behavior directly by commenting out the code that activates the DOM observer (in componentDidMount) and watching the DOM operations manually in the devtools element inspector.
What is the expected behavior?
React should perform the minimal number of operations. I know that the 'minimum' will vary for each situation and is not trivial to infer in the general case, but for some common cases like this one it should be feasible.
For info this use case is handled in most of the other virtual dom libs like preact, snabbdom. Inferno is a remarkable case as it will always infer the minimum number of operations (it uses an algorithm to find the longest increasing subsequence on an array containing the indexes of the old elements).
I found this behavior while working on a demo to find out how vdom libs rearrange nodes during children reconciliation. For example here is the same output for other libs (demo)
Which versions of React, and which browser / OS are affected by this issue? Did this work in previous versions of React?
The demo uses the 0.16 version, but I tried with 0.15 and it has the same behavior.