Closed · adrian-wozniak closed this 3 years ago
Hi @jordanholt, I have another set of performance improvements. Could you take a look at it?
Sure! Thanks @adrian-wozniak, will have a look at this today.
What're your thoughts on extracting the pre-order DFS algorithm that's used several times in `src/index.ts` into a higher order function in `utils.ts`? The call site could then just look like:
```typescript
const initialStack = [ node ]
depthFirstSearch((traversedNode, stack) => {
  delete this.nodes[traversedNode.id]
  // Unregistering overrides
  this.unregisterOverride(traversedNode)
  // ...
  // push something else onto the stack
  stack.push(traversedNode.children[0])
}, initialStack)
```
I think the arrow function should have `this` and the other variables the callers operate on in lexical scope. It might seem trivial, but it keeps each caller focused on the unique logic it performs.
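A sketch of what such a `depthFirstSearch` helper in `utils.ts` might look like (the name, callback signature, and early-exit convention here are assumptions for illustration, not the library's actual API):

```typescript
// Hypothetical pre-order DFS helper: pops nodes from an explicit stack and
// hands each one to the visitor, which may push children to continue the
// traversal. Returning `true` from the visitor stops the search early.
type Visitor<T> = (node: T, stack: T[]) => boolean | void

function depthFirstSearch<T>(visit: Visitor<T>, initialStack: T[]): void {
  const stack = [...initialStack]
  while (stack.length > 0) {
    const node = stack.pop() as T
    if (visit(node, stack) === true) {
      return // early exit once the caller found what it needed
    }
  }
}
```

Callers then only supply the per-node logic, while `this` and any local variables stay in the arrow function's lexical scope.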
For the object creation code: https://github.com/bbc/lrud/blob/9199b63e1cbb9896e801b1ccd3b925f53af9b413/src/utils.ts#L257-L316
I understand the goal here.
Would it not be more memory efficient to initialise all objects to the same shape? i.e.
```typescript
const node: Node = {
  id: nodeId,
  parent: undefined,
  index: undefined,
  children: undefined,
  activeChild: undefined,
  orientation: nodeConfig.orientation,
  indexRange: nodeConfig.indexRange,
  // ... etc
}
```
That way they would all share a common backing HiddenClass, but with a lot of fields potentially being `undefined`. Or are the shape transitions lightweight enough that they are preferable to a single object shape with lots of `undefined` fields?
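For illustration, a small sketch of the difference (the factory names and fields are hypothetical): initialising every node with the full key set in the same order lets V8 reuse one hidden class, whereas adding keys conditionally creates a different shape per combination.

```typescript
// Uniform shape: every node gets the same keys in the same order, so all
// instances can share one hidden class (monomorphic property access).
function makeNodeUniform(id: string, orientation?: string) {
  return {
    id,
    parent: undefined,
    activeChild: undefined,
    orientation,
  }
}

// Divergent shapes: keys are added conditionally, so nodes with and without
// `orientation` end up with different hidden classes.
function makeNodeDivergent(id: string, orientation?: string) {
  const node: Record<string, unknown> = { id }
  if (orientation !== undefined) node.orientation = orientation
  return node
}
```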
Oh, and p.s. massive kudos for your continued contributions to LRUD 🎉
> What're your thoughts on extracting the pre-order DFS algorithm that's used several times in `src/index.ts` into a higher order function in `utils.ts`?
Yes, that's a good idea. Initially, when I started replacing the existing behaviour with DFS, it wasn't obvious that its pre-order version would fit every case we need. To make this method correctly defined in terms of types, I had to introduce a new type, `Tree<Node>`.
> Would it not be more memory efficient to initialise all objects to the same shape? That way they all share a common backing HiddenClass.
That's exactly what I wanted to do, but I thought that having objects with a lot of `undefined` properties might frustrate "the eye" :). For those "human feeling" reasons I tried to compromise and picked only the properties that are heavily used. I'm glad you pointed that out, and I'm more than happy to do it. :)
I added an additional test for `setNodeFocusable` to keep branch coverage.
Thanks, looks good! I'll version this as v8.0.0
Perfect! Thank you @jordanholt!
This PR improves LRUD performance and stability. It's a continuation of https://github.com/bbc/lrud/pull/92.
Description
The application where I'm using LRUD has a quite complex navigation tree. All in all, LRUD performs well, but on low-spec devices its computations are still noticeable.
It's not only about CPU usage but also memory consumption: with every key press, a memory spike can be observed. Such memory is easily collected, but that's exactly the point. On low-spec devices, where the app has limited memory resources, it matters: GC is triggered more often, and that causes performance drops.
The main goal of all the changes here is to create a node context. While processing a node we have to know its parent, active child, child at a given position, and overrides. With them we can easily decide which way to go to find the next node to focus. The idea is to keep references to those objects at hand so there's no need to search for them again and again using only their ids. From a memory perspective that's not a problem, as these are only references to objects.
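As a sketch of this "node context" idea (a deliberately simplified node shape, not the library's actual type), a node carries direct references to its relatives, so no repeated id-based lookups are needed:

```typescript
// Simplified illustration: the node keeps direct references to its relatives
// instead of ids, so walking to them is a pointer dereference, not a lookup.
interface CtxNode {
  id: string
  parent?: CtxNode
  activeChild?: CtxNode
  children: CtxNode[]
}

const root: CtxNode = { id: 'root', children: [] }
const child: CtxNode = { id: 'child', parent: root, children: [] }
root.children.push(child)
root.activeChild = child

// Climbing up needs no id-based search through a global nodes map:
const parentOfChild = child.parent
```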
1. Checking if a node is a focusable candidate (`isNodeFocusableCandidate`)

To check whether a node contains focusable children, its subtree used to be flattened and then searched for focusable nodes. Flattening made copies of the nodes, causing unneeded memory consumption, and the method recursively traversed the whole subtree. `isNodeFocusableCandidate` is called quite often while searching for the next node to focus, so `flattenNode` was creating a lot of unneeded objects. To speed things up, `flattenNode` was removed; wherever the subtree needs to be traversed, a pre-order depth-first search is used instead, and the traversal is interrupted as soon as the queried condition is met. Methods that now use pre-order DFS: `doesNodeHaveFocusableChildren`, `unregisterNode`, `registerTree`.
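A sketch of the early-exit traversal (simplified node shape and a hypothetical helper name, not the library's actual code): the search stops as soon as a focusable node is found, instead of flattening the subtree first.

```typescript
interface TNode {
  isFocusable?: boolean
  children?: TNode[]
}

// Pre-order DFS with early exit: returns true as soon as any node in the
// subtree is focusable, without copying or flattening anything.
function hasFocusableInSubtree(node: TNode): boolean {
  const stack: TNode[] = [node]
  while (stack.length > 0) {
    const current = stack.pop() as TNode
    if (current.isFocusable) return true // interrupt as fast as possible
    for (const child of current.children ?? []) stack.push(child)
  }
  return false
}
```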
2. Accessing the nodes (`getNode`)

This method was called many times without a reason. LRUD operates on nodes, but when calling some other method it had to pass a node id, after which `getNode` needed to be called again, and so on and so forth. With big trees, accessing the `nodes` object's properties became noticeable. To deal with that, a `Node` object now contains references to its `parent` and `activeChild` instead of their `id`s. Moreover, the globally stored `currentFocusNodeId` and `rootNodeId` were replaced with references to nodes: `currentFocusNode` and `rootNode`. Additionally, the methods `moveNode`, `unregisterNode`, `registerOverride`, `unregisterOverride`, `setActiveChild`, `setActiveChildRecursive`, `unsetActiveChild`, `assignFocus`, `isSameOrParentForChild` and `setNodeFocusable` can detect whether a `Node` object or a node id was passed. Thanks to that, `getNode` is called only when really needed.
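A sketch of the Node-or-id detection described above (simplified shapes and a hypothetical `resolveNode` helper, not the actual signatures):

```typescript
type NodeId = string

interface LNode {
  id: NodeId
}

// Stand-in for the library's internal nodes map.
const nodes: Record<NodeId, LNode> = {}

// Accepts either a node object or an id; the map lookup happens only when a
// string was passed, so getNode-style lookups occur only when really needed.
function resolveNode(nodeOrId: LNode | NodeId): LNode | undefined {
  return typeof nodeOrId === 'string' ? nodes[nodeOrId] : nodeOrId
}
```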
3. Iterating children, finding a child with a given index

A node's `children` were stored as an object. In most cases LRUD is not interested in a child's `id` but in its `index`, because the `index` matters when finding the next/previous focusable candidate. PR https://github.com/bbc/lrud/pull/85 introduced a coherent way of maintaining the `index` value. Thanks to that, `children` may be stored as an array instead of an object, and finding a node with a given index is just reading the corresponding array element. There's no need to iterate over `children` properties to find the node whose `index` equals the searched one.

4. Finding an override
Similar case as above, but here the iteration went through all overrides whenever LRUD tried to climb up. To speed things up, overrides are now stored at the node level: every node keeps its own overrides. Here, a map of type `{ [direction in Direction]?: Node }` is more efficient than an array, because we are always interested in one particular override and never have to iterate. This has one side effect: such an override may point to a node that has been unregistered (does not exist). To avoid that, and to do the appropriate override clean-up while unregistering a node, every node has an `overrideSources` array that keeps the overrides pointing to that node. Thanks to that, while unregistering a node, all obsolete overrides may be removed without additional overhead. Because an override id is not needed anymore, the signatures of `registerOverride` and `unregisterOverride` have changed. `RegisterOverrideOptions` can be passed to the `registerOverride` method to configure overwriting existing overrides.
5. Other minor changes connected to all of the above

- New `NodeConfig` type, which allows registering nodes and trees. This is mainly to allow setting the id of a parent node (as it is right now) and to exclude properties that the `Node` object contains only for internal LRUD calculations.
- Recursion removed from `climbUp`, `digDown` and `setActiveChildRecursive`; there's no need to put the same method on the stack.
- New `prepareNode` method to simplify `Node` object creation from a given `NodeConfig` object.
- `NodeTree` was renamed to `NodesBag`, as it describes the type's nature more accurately.

Motivation and Context
The main motivation was to make applications using LRUD run smoothly on low-spec devices. On modern laptops, the cost of LRUD's computations is barely noticeable, but the situation changes dramatically when the app is launched on low-spec set-top boxes: users experience app freezes caused by the underlying JavaScript computations and GC runs.
How Has This Been Tested?
Tested by unit tests and manually in an app using LRUD library where such issue was found.
Screenshots (if appropriate):
No screenshots.
Types of changes
Checklist: