I've been working on improving/fixing Maverick over the past week and thought I'd share some of the learnings/discoveries here.
I benched across cellx, reactively, and the JS framework bench to get a well-rounded view of what's happening both in terms of raw perf and mem. These are the best benchmarks I know of at the moment but would love to find more and build some if I have time.
Signals
I tried everything from various data structures (linked lists, k-ary graphs, arrays), primitives (strings/symbols/prototypes/functions), and heaps of other stuff. The solution I landed on was:
For scope tracking (called owners in Solid) I use a linked list where each new computation is a node in the list with owner/prev/next pointers. All child scopes appear directly after their parent. Only root and effect can create a new scope. What's nice about a linked list is it makes it super easy to dispose of all child computations inside an effect scope when it re-runs.
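The scope-tracking idea can be sketched roughly like this. Note this is a hypothetical illustration under my own names (`ScopeNode`, `createNode`, `disposeChildren`), not Maverick's actual internals:

```typescript
// Hypothetical sketch of linked-list scope tracking. Each computation is a
// node with owner/prev/next pointers, and children are spliced in directly
// after their owner so a scope's subtree stays contiguous in the list.
interface ScopeNode {
  owner: ScopeNode | null;
  prev: ScopeNode | null;
  next: ScopeNode | null;
  dispose?: () => void;
}

let currentOwner: ScopeNode | null = null;

function createNode(dispose?: () => void): ScopeNode {
  const node: ScopeNode = { owner: currentOwner, prev: currentOwner, next: null, dispose };
  if (currentOwner) {
    // Splice directly after the owner so its subtree stays contiguous.
    node.next = currentOwner.next;
    if (currentOwner.next) currentOwner.next.prev = node;
    currentOwner.next = node;
  }
  return node;
}

// Disposing walks forward from the owner until we leave its subtree — no
// arrays of children ever need to be allocated or tracked.
function disposeChildren(owner: ScopeNode): void {
  let node = owner.next;
  while (node && isDescendant(node, owner)) {
    node.dispose?.();
    node = node.next;
  }
  owner.next = node;
  if (node) node.prev = owner;
}

function isDescendant(node: ScopeNode, owner: ScopeNode): boolean {
  for (let o = node.owner; o; o = o.owner) if (o === owner) return true;
  return false;
}
```

Because children are contiguous, tearing down an effect scope on re-run is a single forward walk plus one pointer splice.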
I copied the same source/observer tracking scheme that Modderme used in Reactively.
I moved effects to run in their own queue and added something simple I call zombie detection so you can infinitely nest effects. Basically effects won't run if an owning effect scope is dirty.
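A rough sketch of how that zombie check could work — the flag and queue shapes here are my own assumptions for illustration, not Maverick's actual code:

```typescript
// Effects run from their own queue; a queued effect is skipped (a "zombie")
// when any owning effect scope is itself dirty, since that owner's re-run
// will dispose and recreate it anyway.
const DIRTY = 1 << 0;
const DISPOSED = 1 << 1;

interface Effect {
  flags: number;
  owner: Effect | null;
  run: () => void;
}

const queue: Effect[] = [];

function enqueue(effect: Effect): void {
  effect.flags |= DIRTY;
  queue.push(effect);
}

function hasDirtyOwner(effect: Effect): boolean {
  for (let o = effect.owner; o; o = o.owner) {
    if (o.flags & DIRTY) return true;
  }
  return false;
}

function flush(): void {
  for (const effect of queue) {
    if (effect.flags & DISPOSED) continue; // already torn down by an owner's re-run
    if (hasDirtyOwner(effect)) continue;   // zombie: the owner will recreate it
    effect.flags &= ~DIRTY;
    effect.run();
  }
  queue.length = 0;
}
```

This is what makes infinite nesting safe: a dirty parent always wins, and stale child effects never fire.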
I found using a symbol combined with bitwise ops for specifically tracking a limited set of state and scopes boosted performance like crazy especially on smaller data sets.
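For illustration, the pattern looks something like this (the flag names are made up):

```typescript
// State packed into a single integer stored under a symbol key — checks and
// updates become cheap bitwise ops instead of several separate properties.
const FLAGS = Symbol("flags");

const DIRTY = 1 << 0;
const SCOPED = 1 << 1;
const DISPOSED = 1 << 2;

interface Computation {
  [FLAGS]: number;
}

const node: Computation = { [FLAGS]: 0 };

node[FLAGS] |= DIRTY | SCOPED;           // set flags
const isDirty = !!(node[FLAGS] & DIRTY); // check a flag
node[FLAGS] &= ~DIRTY;                   // clear a flag
```

The symbol key keeps the internal state off the public property surface while the bit packing keeps reads/writes on one hidden class field.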
Maverick strikes a nice balance between sync/batched. Everything is batched by default on the microtask queue, but you can call tick() for a sync flush (thanks to a suggestion by Jin!). This gives the dev fine-grained control when needed. Complete solution can be viewed here.
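The batching model can be sketched like so (an assumed shape for illustration, not the actual implementation):

```typescript
// Writes schedule a single microtask flush; tick() lets callers force a
// synchronous flush when they need to read updated state immediately.
type Task = () => void;

let scheduled = false;
let tasks: Task[] = [];

function schedule(task: Task): void {
  tasks.push(task);
  if (!scheduled) {
    scheduled = true;
    queueMicrotask(flush); // batched by default: one flush per microtask
  }
}

function tick(): void {
  flush(); // opt-in synchronous flush
}

function flush(): void {
  scheduled = false;
  const pending = tasks;
  tasks = [];
  for (const task of pending) task();
}
```

Multiple signal writes in the same synchronous block collapse into one flush, and `tick()` is the escape hatch when you need fresh values right away.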
Added event delegation support (off by default to save on size).
Fixed non-hydration element selectors.
Compiler
The compiler was mostly in good shape.
Runtime
Replaced the insert expression implementation with something similar to the DOM Expressions approach Solid uses. We no longer rely on comment marker nodes for insertion. This also accounts for reconciling arrays using udomdiff, which our previous implementation didn't.
Lifted setting attributes/styles/classes out of an internal function and directly into the component declaration to avoid creating closures. This was the source of a massive memory leak.
SSR
No exciting changes here at the moment - I still need to find a good benchmark. I think Marko has one.
Benchmarks
Intro
I'm only comparing libraries that are apples of a similar color. Comparing to libraries such as Preact Signals or some of the others makes no sense. They're all amazing libs but there's no point disingenuously comparing them when they don't account for scopes, context, error handling, nested effects, large dynamic graphs, etc.
I had shared some results previously on Twitter which some of you may have seen. I found out later when running JS framework bench that it wasn't a complete solution since it had mem leaks and other major issues so I disregarded those numbers.
With these recent changes I was able to achieve similar perf results whilst removing all issues and supporting a slightly bigger feature set with things like proper effect scopes and deeply nesting them.
CellX Bench (Signals)
The Cellx benchmark saw a massive speed jump, but this bench is mostly useless since it doesn't account for scope disposal and other dynamic stuff. I found that it's essentially only testing two vectors in a single, very long compute subtree: how eagerly operations are queued and how aggressively they're cached. You can essentially BS this benchmark and learn nothing new about your lib.
Reactively Bench (Signals)
The Reactively bench saw a huge jump, and considering Maverick is well tested and now supports a slightly wider feature set with things like effect roots and disposal, I was really happy to see these numbers. However, these numbers don't translate into the DOM (more info in the next section). I'd also guess that most of the gains here were specifically from the source/observer tracking scheme created by Modderme.
I need to get a better picture of memory and GC times, but I can say for now they dropped by ~60% or so from the numbers I was scanning over time. You can ignore this until I have better proof.
Big wins here are amazing. It might not translate into improvements in our DOM runtime but it will for everything non-DOM related - raw number crunching and side-effects. This is something I personally need for the Vidstack Player analytics library I'll be creating.
JS Framework Bench (Signals + DOM)
Tested locally but I've submitted a PR.
This started out really bad and took me some time to weed the issues out. I was mostly concerned about memory as the rest is not super important just yet. Either way, everything has been resolved and we're in the green with Solid-like performance.
Review
This is where things get tricky because it's mostly looking at the DOM runtime. I tested with both the Maverick compiler and the Babel DOM Expressions compiler Solid uses and found results to be either the same or ever-so-slightly slower than Solid.
I think the problem here is that Solid is hyper-optimized for that benchmark :sweat_smile: I copied maybe 90% of the same optimizations but couldn't beat Solid in any meaningful way. I'll wait to see what the official results end up showing.
Memory was an area where it was especially difficult to beat Solid, which is weird because I thought the linked list solution for scope tracking, which doesn't require creating any arrays, would be a huge win. I think the answer here is that V8 can optimize arrays really well, and the Solid runtime is more efficient but slightly bigger in size.
Obvious finding: Improving performance across compute, memory, and library bundle size is very fucking hard. They're generally at odds with each other and I think Ryan has done a phenomenal job optimizing Solid. I think at this stage it would be about revising compilation and the DOM runtime if we want to see big wins. I think most micro-optimizations have been discovered and I can't see us going much further from here. I guess the last part would be finding more ways to do less or no work at all until absolutely required. Service workers could be an interesting exploration.
On a side note for anyone interested I weeded out a lot of memory issues by:
Using event delegation as event listeners are super expensive, especially if large DOM trees are repeatedly created/destroyed.
Disposing of scopes correctly which includes ensuring all pointers are nulled and sources/observers are cleaned up.
Avoiding deep closures that end up retaining pointless data. I found out I had created a shit ton of effect closures for things like setting attributes, properties, styles and so on. It gzipped more efficiently but was horrific with respect to performance.
Using a POJO and binding it to a global function seemed to be much more efficient than anything else I tried. V8 seems to love bindings these days. I'm pretty sure a while ago it hated them (i.e., they weren't optimized).
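My reading of that last point, sketched out — this is my interpretation only, not the actual code:

```typescript
// A plain object holds the per-binding data, and a single module-level
// function is bound to it as `this`. One shared function + POJO state
// replaces a bespoke closure per binding, and V8 optimizes bound
// functions well these days.
interface AttrBinding {
  attrs: Record<string, string>;
  name: string;
  value: string;
}

function applyAttr(this: AttrBinding): void {
  this.attrs[this.name] = this.value;
}

const binding: AttrBinding = { attrs: {}, name: "title", value: "hello" };

const update = applyAttr.bind(binding);
update();
```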
Final Notes
Part of this work was to envision what Solid 2.0 could look like on the signals side (excluding DOM, hydration, etc.). The important thing to remember is that you achieve these numbers more realistically with Maverick than Solid today since we batch out of the box. I'm not sure how much of a penalty handling nested effect scopes is since we have to dispose every run. Either way, I think there are some desirable changes or ideas here that I'd love to see make their way over. Maverick shows what's possible in terms of signals perf/mem whilst making certain trade-offs that I think are debated by the Solid community today. I think there's some DX wins here at the minimum.
Maverick is hugely inspired by Solid and it only exists because I wanted to veer the focus and attention towards UI libraries and Custom Elements. This meant batching out of the box, effect scopes, perf/size tradeoffs, and a unique take on how Custom Elements are handled. The compiler/runtime is mostly similar today which means the perf is mostly the same, but I expect it to change over time as I better understand what vectors to optimize UI libraries for. The interesting parts of Maverick are not in performance, but rather the DX when building libraries with Custom Elements. Maverick has an API analyzer and framework integrations that wouldn't make much sense in Solid today.
I'd love to see the continued exploration and growth of the Reactively bench. It's the best tool we have right now for benching dynamic graphs which are at the heart of what signals are built for. The problem is that I'd still consider it static with respect to a true application and I'd love better graphing features out of the box. I think a nice next step would also be porting the benchmark out of Reactively and using it as the grounds for an "official" signals benchmark (including a mix of memory and DOM-focused tests).
The JS framework benchmark is an awesome simple tool for reviewing the DOM runtime implementation and the most complex part of UI libraries (keyed lists). It helped me resolve all DOM list-related issues and various memory leaks. The problem is that it's only looking at UI frameworks through a narrow lens and most libraries have either done BS optimizations that don't accurately represent how users build in the wild, or some of the libraries listed there that are really fast are not usable from a DX perspective.