smaug---- opened this issue 4 years ago
One area of performance gain is the ability to clone a subtree with parts, where the cloned parts reference the cloned nodes in the new subtree. The use case was discussed in our virtual F2F meeting. After cloning a node tree, JS frameworks need to find references to the newly created nodes, usually through some attribute markup. With the new Node.cloneTree() API, parts are returned along with the cloned node tree, eliminating the need to look for these nodes in the new tree.
Another performance benefit that we didn't discuss could be eliminating mutation events during batch updates in the commit phase. Mutation events are deprecated, and this new API doesn't need to keep supporting them.
This is theoretical. There could be more areas of improvement. We'll need to do a prototype to accurately measure any performance benefits.
That cloning is indeed a good example, and there was a reasonable argument that such cloning should be available even for use cases not doing anything with parts.
Browsers can optimize out mutation events if there aren't any listeners anyhow.
One doesn't even need to do a prototype to see the most obvious performance gains.
there was a reasonable argument that such cloning should be available even for use cases not doing anything with parts.
What is this referring to here? What would be cloned along with Nodes if not for the parts?
Have a way to map an original node to its clone. One could imagine, for example, some callback taking the original node and the cloned node as params. That would let JS clone whatever needs to be cloned, update properties on nodes, and what not.
That would be interesting, but it seems like a customizable tree walk. As a library author I really wouldn't want that callback on every node in the tree, just the ones I'm interested in. I want to reduce tree walks and JS wrapper object generation during hot code paths.
That was just the concept. It could be optimized with some filters: element names, the existence of some attributes or properties, etc. Anyhow, that is a lower-level primitive which the parts proposal seems to need. If there were such an API in the platform, could parts utilize it? And if so, what else is there in parts that improves performance, and how?
I tend to think of Parts as the lower level API here. Instead of filters, which would require adding attributes and wouldn't be able to easily represent child node ranges in the first place, you programmatically mark the node you're interested in having references to post-clone, and then get those references.
The low-level API could be something like const { clonedTree, clonedReferences } = clone(subtree, { giveMeReferencesToClonesOfTheseNodes: [node1, node2] }).
That would work
The low-level API could be something like const { clonedTree, clonedReferences } = clone(subtree, { giveMeReferencesToClonesOfTheseNodes: [node1, node2] }).
I like this direction; I hope we can eventually expand this to see the whole frame.
One area where I hope a first-class Parts representation will help is with representing parts declared via a future syntax. This would be especially useful to template libraries that insert marker nodes into HTML before parsing by the browser, then walk the parsed tree to discover the markers.
Having the browser's parser do this would eliminate a lot of code and tree walks, and the logical data structure for this parse-with-parts operation is a fragment (or template) plus parts. A list of Nodes wouldn't be sufficient, as you want to represent a child range with a start/end pair, and attribute expressions with an element/name pair.
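For context, here is a rough sketch of what such a template library has to do today with standard DOM APIs; the marker format and the findMarkers helper are illustrative, not any particular library's implementation.

// Illustrative only: insert comment markers into the HTML, parse it via a
// <template>, then walk the parsed tree to rediscover the markers.
function findMarkers(templateHtml) {
  const template = document.createElement('template');
  template.innerHTML = templateHtml; // e.g. '<li><!--marker--></li>'
  const walker = document.createTreeWalker(template.content, NodeFilter.SHOW_COMMENT);
  const markers = [];
  let node;
  while ((node = walker.nextNode())) {
    if (node.data === 'marker') markers.push(node);
  }
  return { template, markers };
}

A parser-level parse-with-parts operation would hand back the fragment plus part objects directly, removing this extra walk.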
The low-level API could be something like const { clonedTree, clonedReferences } = clone(subtree, { giveMeReferencesToClonesOfTheseNodes: [node1, node2] }).
In addition to the points raised by @justinfagnani, I think this API isn't that ergonomic to use. The caller is required to provide a list of nodes in the subtree. This means the caller needs to know the structure of the subtree. The UA also needs to check each node in the subtree against this node list during the clone process.
Compare this to the proposed API, const { node, partGroup } = subtree.cloneTree(options). Once the subtree has been marked with parts, the caller can just clone the subtree and get back a list of parts that's ready for update.
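To make the comparison concrete, here is a hedged sketch of how the two shapes might be used; clone(), cloneTree() and its options, partGroup, and getParts() are placeholder names based on the snippets in this thread, not a settled API.

// Hypothetical usage of [1]: the caller must already hold references to the
// interesting nodes inside the subtree it is cloning.
function renderWithLowLevelClone(subtree, markedNodes, values) {
  const { clonedTree, clonedReferences } =
    clone(subtree, { giveMeReferencesToClonesOfTheseNodes: markedNodes });
  clonedReferences.forEach((node, i) => { node.textContent = values[i]; });
  return clonedTree;
}

// Hypothetical usage of [2]: the subtree was marked with parts beforehand, so
// the caller clones it and receives parts that are ready to update.
function renderWithParts(subtree, values) {
  const { node, partGroup } = subtree.cloneTree({ parts: true }); // placeholder option
  partGroup.getParts().forEach((part, i) => {
    part.value = values[i];
    part.commit();
  });
  return node;
}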
@yuzhe-han I feel like I'm missing something. With parts the caller also needs to know the structure; how would they have created the parts otherwise? And the UA would have to do something similar as well in order to clone the parts.
Yes, the developer marking up the DOM tree with parts has to know its structure. But someone else could be doing the cloning and updates.
For example: we can have a template that represents a row inside a <my-table> component. The table author can mark up this row template with parts. The user of this component can just clone this template with parts and update them. It allows better encapsulation because there's no need to expose a list of clonedReferences.
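A hedged sketch of that division of responsibilities, reusing the hypothetical cloneTree()/part shape from above; the NodePart constructor registering a part on the template is an assumption for illustration only.

// Hypothetical sketch of the <my-table> row example.
// Component author: knows the row structure and marks it up with parts once.
const rowTemplate = document.createElement('template');
rowTemplate.innerHTML = '<tr><td class="name"></td><td class="price"></td></tr>';
new NodePart(rowTemplate.content.querySelector('.name'));  // assumed to register a part (placeholder API)
new NodePart(rowTemplate.content.querySelector('.price')); // assumed to register a part (placeholder API)

// Component user: clones and updates without knowing the row structure and
// without being handed raw node references.
function renderRow(data) {
  const { node, partGroup } = rowTemplate.content.cloneTree({ parts: true });
  const [namePart, pricePart] = partGroup.getParts(); // placeholder accessor
  namePart.value = data.name;
  pricePart.value = data.price;
  partGroup.commit(); // placeholder batch commit
  return node;
}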
Coming back to the performance topic, here's a good use case for batching: issues/895.
The issue outlines use cases for updating multiple nodes and their attributes. Currently, there's a big performance cost because each update is a traversal from JS, through the binding layer, down into the browser's internals. Batching will enable staging these updates in JS and then committing them all inside the browser's internals in one go.
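A minimal sketch of that batching idea under the same assumptions as above: values are staged on parts purely in JS, and commits then apply everything inside the engine. partGroup, getParts(), and commit() remain placeholder names.

// Hypothetical batched update: stage first, commit afterwards.
function updateRows(rowPartGroups, rows) {
  rowPartGroups.forEach((group, i) => {
    const [namePart, pricePart] = group.getParts(); // placeholder accessor
    namePart.value = rows[i].name;   // staged in JS, no DOM mutation yet
    pricePart.value = rows[i].price; // staged in JS, no DOM mutation yet
  });
  rowPartGroups.forEach((group) => group.commit()); // apply all staged updates
}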
But with the parts proposal you still need to go from JS -> native parts impl -> (commit) -> DOM. It doesn't seem to optimize JS -> native out.
I'm not sure I see the encapsulation argument. Either way it seems you could only give the user of a component a clone of the tree and references into it. With parts the references might be abstract, but through value and commit() it's pretty trivial to make them concrete, no?
@smaug---- I think you are right. Optimizing the JS -> binding layer -> native path doesn't seem like it would result in a big win. However, there are still areas for optimization that can result in big wins. This includes minimizing the effect of attributes and properties that can cause reflow, e.g. class, style, classList, etc. The batch commit can prevent reflow until all DOM mutations have been completed.
Reflow cannot happen between tree mutations today, unless you do something that forces it, but mutating the tree wouldn't.
@annevk Thank you for the correction. You are right. I overlooked the fact that reflow is only triggered when certain properties, like clientTop and clientLeft, are read.
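To illustrate the corrected point with standard DOM only: writes alone don't force layout, but interleaving layout-dependent reads with writes does, so batching all writes before any reads avoids repeated synchronous layouts.

// Interleaved read/write: each clientHeight read after a style write can
// force a synchronous layout.
function interleaved(nodes, heights) {
  nodes.forEach((node, i) => {
    node.style.height = heights[i] + 'px'; // write
    console.log(node.clientHeight);        // read: forces layout
  });
}

// Write-then-read: all mutations first, then all reads, so layout runs once.
function batched(nodes, heights) {
  nodes.forEach((node, i) => { node.style.height = heights[i] + 'px'; });
  nodes.forEach((node) => console.log(node.clientHeight));
}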
Thanks, everyone, for contributing. Let me summarize and see if we can continue to make progress on this issue.
All comments indicate we agree that an API for returning a set of references to cloned nodes of a subtree will be beneficial, improving performance by saving additional tree walks for node lookups. We have two proposals on the shape of this API:
1: const { clonedTree, clonedReferences } = clone(subtree, { giveMeReferencesToClonesOfTheseNodes: [node1, node2] }).
2: Part objects: NodePart, AttrPart, PropertyPart.
The low-level API [1] returns a set of references in the cloned tree, but it's missing some key features that the Parts API provides.
Both API proposals have their benefits, and I think they can both exist. Having [1] doesn't eliminate the need for [2]. We can leave their usage to developers and framework authors.
Coming back to the Parts API, do we agree that the ergonomic benefits, positive developer reception, and performance gains are enough evidence to start working on these APIs?
@smaug---- @annevk @justinfagnani
I think the more relevant question (and the question this issue poses if I understand it correctly) is whether 2 could be done on top of 1 in userland or whether there are performance benefits to doing 2 natively that are not realized by 1 alone.
Certain benefits can't be realized when building [2] on top of [1] in userland. Let me illustrate with a couple of examples.
Here's a node tree. After cloning with [1], cloned references of the start and end nodes are returned.
container
├─ start
├─ A (to be replaced)
└─ end
A userland function that replaces all the nodes between start and end has to remove all the existing nodes, then insert new nodes in between. For removal, use a Range object:
const range = new Range();
range.setStartAfter(start);
range.setEndBefore(end);
range.deleteContents();
Then insert the list of replacement nodes, either with a loop calling container.insertBefore(), or by inserting a dummy child node and using childNode.replaceWith(...nodes).
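Putting those userland steps together (standard DOM only, keeping the start/end markers in place):

// Userland replacement of everything between the start and end markers.
function replaceBetween(container, start, end, nodes) {
  const range = new Range();
  range.setStartAfter(start);
  range.setEndBefore(end);
  range.deleteContents();              // remove the old nodes between the markers
  for (const node of nodes) {
    container.insertBefore(node, end); // insert each replacement before the end marker
  }
}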
However, with the Parts API, replacing the range of nodes is straightforward:
const childNodePart = new ChildNodePart(container, start, end);
childNodePart.values = nodes;
childNodePart.commit();
The userland solution requires more script. Thus, it's less performant due to additional trips through the binding layer, possible calls to synchronous mutation events, etc.
Parts batching. Ex: an element and its reactions to DOM mutations. The order of modifying DOM attributes can create side effects that [1] doesn't address. Changing an element's crossorigin or referrerpolicy attributes will cancel a pending request (step 15). Framework authors have to take special precautions to minimize their impact.
With the attribute Parts API, relevant mutations can be batched together to minimize side effects and improve performance. In addition, it reduces the developers' need for a deep understanding of all the potential side effects.
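A hedged sketch of what batched attribute parts could look like; AttrPart values, partGroup, getParts(), and a group-level commit() are assumptions drawn from this thread, not a settled API.

// Hypothetical batched attribute update: stage all values, then apply them
// together so the element observes one coherent change instead of a sequence
// of individually observable attribute mutations.
function updateImageAttributes(partGroup, config) {
  const [crossoriginPart, referrerPolicyPart, srcPart] = partGroup.getParts();
  crossoriginPart.value = config.crossorigin;       // staged only
  referrerPolicyPart.value = config.referrerPolicy; // staged only
  srcPart.value = config.src;                       // staged only
  partGroup.commit(); // placeholder batch commit
}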
Overall, I think the Parts API promotes better programming practices by streamlining DOM updates and reduces the likelihood of layout thrash, where property reads can intertwine with updates. I hope these examples show its benefits.
Couple thoughts: batching DOM mutations has come up before (e.g. DOMChangeList), though it remains to be seen how that works in practice, as developers might not consider running the href setter or some such to be a tree mutation.

WCCG had their spring F2F in which this was discussed. Present members of WCCG identified an action item to take the topic of DOM Parts and break it out into extended discussions. You can read the full notes of the discussion in which this was discussed (https://github.com/WICG/webcomponents/issues/978#issuecomment-1516897276), under the heading entitled "DOM Parts API".
As this issue pertains to DOM parts, I'd like to call out that https://github.com/WICG/webcomponents/issues/999 has been raised for extended discussions and this topic may be discussed during those sessions.
https://github.com/rniwa/webcomponents/blob/add-dom-parts-proposal/proposals/DOM-Parts.md, as an example, doesn't exactly explain how it achieves performance improvements. There is a risk that a proposal tries to improve performance only on one particular kind of DOM implementation when there might be other approaches which have already solved those issues in general.
@rniwa @annevk