I want to comment quickly so there is something here, but I need to give a longer response that provides more context. The short answer is that transitions are designed to describe lazy operations on state. They provide a way to describe the result of an operation without performing the actual operation until the state is needed. This is different from the eager approach that other libraries use.
Briefly, you could use a map operation that would make your transition lazy. But I think we need to design a different benchmark that matches how Microstates is designed to be used. I will write more about all of this on Monday.
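In the meantime, here is a generic sketch of the eager vs. lazy distinction in plain JavaScript. This is only an illustration of the idea, not Microstates' actual internals; the function names are made up.

// Eager: the work happens at transition time.
function eagerSet(state, value) {
  return { ...state, todo: value };
}

// Lazy: the transition only records what should happen;
// the work happens later, when .value is read.
function lazySet(state, value) {
  let computed = false;
  let cached;
  return {
    get value() {
      if (!computed) {
        cached = { ...state, todo: value };
        computed = true;
      }
      return cached;
    }
  };
}

let next = lazySet({ todo: 'old' }, 'new'); // cheap: nothing is computed yet
console.log(next.value.todo);               // 'new': computed here, on read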
@taras can you also take a look at the memory tests? It looks like there is excessive memory consumption. https://github.com/zhDmitry/statem-perf
* microstates
  create: 4.8 ms
  update: 9.07 s
  heap:
    total: 879.8 Mb
    used: 826.5 Mb
  rss: 882.7 Mb
* mobx
  create: 197 ms
  update: 1.62 ms
  heap:
    total: 8.9 Mb
    used: 17.3 Mb
  rss: 8.6 Mb
Good one ☝️ thanks. I’ll review it tomorrow.
Hi @zhDmitry, thank you for taking the time to benchmark Microstates. It'll be important to have good benchmarks to measure Microstates' performance against, and this kind of work is a first step in that direction.
Most of the work that we've done so far has been API design and implementation to test the APIs. Our goal at this stage was to implement a solution that touches on what we believed to be important considerations. We wanted to create enough of a foundation to be able to have a tangible conversation about the concept of a Composable State Primitive for JavaScript.
The underlying assumption all along has been that we can optimize the internals once the external API is stable. This kind of benchmarking is the first step towards having a well-balanced performance profile that matches the use cases that Microstates is designed for. Microstates is a fairly young project, so there will be a lot of room for improvement.
One of the challenges of benchmarking Microstates is choosing the right benchmarks. Some benchmarks simply will not be appropriate. For example, if you were to create a benchmark that uses React components to concatenate strings, your results would make no sense relative to alternatives, because React components are designed to manage the DOM, not strings. Microstates are similar to React components, but for state.
Microstates is designed to surface significant states in the application. For example, when thinking about drag and drop, the change from not dragging to dragging is significant. A change from x=1000, y=1000 to x=1001, y=1000 is not significant. This distinction is important when humans think about organizing state. Microstates is primarily designed to serve the needs of humans designing complex applications.
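To sketch what that might look like, here is a hypothetical example; the DragDrop type and its fields are illustrative and not something from the benchmark, and the class-based type definition follows the same style as the Items type used below.

import { create } from 'microstates';

// Illustrative type: `dragging` captures the significant state,
// while x and y are incidental coordinate detail.
class DragDrop {
  dragging = Boolean;
  x = Number;
  y = Number;
}

let state = create(DragDrop, { dragging: false, x: 1000, y: 1000 });

// Significant transition: not dragging -> dragging
let started = state.dragging.set(true);

// Insignificant transition: the pointer moved by one pixel
let moved = state.x.set(1001);

The first transition is the kind of state a human designing the application actually cares about naming and reacting to.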
How does this consideration for the needs of developers translate to code? First, Microstates is designed to allow developers to express state transitions in a lazy way. Laziness bets that not all states will be needed all the time. This is different from an eager approach, which assumes that the result of the operation has to be available immediately.
We can see this in the existing benchmark,
update() {
  let update = this;
  for (let i = 0; i < MAX * MODIFY_FACTOR; i++) {
    update = this.items[i].todo.set(Math.random());
  }
  return update;
}
This transition is imperatively setting todo items. This will be slow in Microstates because there is a lot of work being done to perform this operation. I can get into the specifics of how transitions work internally, but I will skip that for now. Here is how you would do this the Microstates way,
update() {
  return this.items.map(todo => todo.set(Math.random()));
}
Then to create this data, you would do something like this,
let microstate = create(Items, { items: generateDraft() });
let next = microstate.update();
This should be very fast because there is very little work being done. The work is done when you start pulling data out of the component.
next.items[x].state
This is what I mean when I say that the transition represents a future state: the transition is not actually performed until you read it. This is designed to work especially well with async rendering frameworks or async libraries, which is basically any view layer or asynchronous JavaScript.
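As a rough sketch of where the cost shows up, reusing the create(Items, ...) setup and the map-based update() from above (the console timing calls here are just for illustration):

let microstate = create(Items, { items: generateDraft() });

console.time('transition');
let next = microstate.update();  // cheap: only describes the future state
console.timeEnd('transition');

console.time('read');
let value = next.items[0].state; // the actual work happens here, on read
console.timeEnd('read');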
I'll stop here. I'd like to hear your thoughts before we continue working on this.
Thank you for your contribution.
"Here is how you would do this in Microstate's way," @taras it's funny but approach you provided is the slowest. (Actually it does a bit different thing comparing to other frameworks). The fastest way to make required update is "store4.items.set(reducer(store3.items.state));" compute data before update and just make set but event with this optimisation it's 20x slower than mobx and 40x slower that redux. I can't be sure in this tests accuracy but them shows how state management works with large data set and multiple updates of this dataset. you can take a look at my implementation https://github.com/zhDmitry/statem-perf/blob/47283ea89ac9fd4bab08a8928efd320a9d09797b/suite.js#L130 So I guess for first time we need to find some workaround to make it performant as well as mobx (or 2x worse).
store4.items.set(reducer(store3.items.state));
This would be the fastest because your operation is done in regular JavaScript, in the same way that ['hello', 'world'].join(' ') is going to be faster than rendering ['hello', 'world'].map(item => <span>{item}</span>) with React and reading the result back out of the DOM as innerHTML.
Can you describe a use case that you're considering when thinking about these tools? It would be helpful to understand which lens you're looking at Microstates through.
Re: store4.items.set(reducer(store3.items.state)): the problem is that even with this optimization it works pretty slowly, just 13 ops/s compared to 2000 ops/s in MobX. @taras I was looking for a solution for composable state that works independently of React. So my use cases are:
That's cool because that's exactly what Microstates is designed for. Unfortunately, the benchmarks that you used do not really reflect any of these use cases.
We're aware of the performance pitfalls in the current version of Microstates. We're also aware of changes that we can make to improve things. We intentionally did not optimize because we were designing the APIs first. By focusing on the idea and by exploring the APIs we are able to surface people who are interested in this solution (such as yourself).
In the near future, we're going to be looking at performance improvements that we can make. I'm sure there are lots. If you're interested in having what you described above, then work with us to make it happen.
If you'd like to help, we can talk about different approaches that could be used to optimize Microstates' internals. Part of this process will be designing benchmarks that actually reflect real-life use cases. Doing direct comparisons with Redux and MobX will only take you so far because they're not the same kind of tools.
If you'd like to compare similar solutions, you could look at Immutable.js and mobx-state-tree. They have performance profiles closer to Microstates' than Redux and MobX do. There are also aspects of performance tuning that you're probably not considering; we could talk about those as well.
Regardless, thank you for taking the time to have this conversation.
As of version 0.11.0, microstates is fully lazy in its evaluation of the tree.
Hi guys, I tried to run some perf benchmarks with your lib and the results look weird: https://github.com/zhDmitry/statem-perf . Can you clarify why the update results look so bad?