knoopx closed this issue 6 years ago
We have not included many perf improvements yet; we're working mostly on stability right now. But are that many items in an array a real use case? :)
Not really. I'm dealing with ~15,000 items, but with far more properties than just a string. Here's the actual model: https://github.com/knoopx/plex-music/blob/mst/src/store/album.js
The app is an electron-based music player that holds, in memory, all my music on Plex Media server. I know the use case may not be realistic but performance is perfectly fine with mobx (just a few hundred megabytes of memory).
Uhm, can you please try with MST version 1.0.0? I changed the array reconciler, and I am curious to see whether there is any actual performance change.
Results running the above example under mst@1.0.0:
diff --git a/package.json b/package.json
index ef4d331..23fb1aa 100644
--- a/package.json
+++ b/package.json
@@ -55,7 +55,7 @@
"lodash": "^4.17.4",
"mobx": "^3.3.1",
"mobx-react": "^4.3.3",
- "mobx-state-tree": "^1.0.1",
+ "mobx-state-tree": "1.0.0",
development
mst: 3577.899169921875ms
mobx: 0.063720703125ms
production
mst: 3780.85986328125ms
mobx: 0.02099609375ms
I would say there's no actual perf difference between 1.0.0 and 1.0.1 when just replacing an empty array with 100,000 items.
1.0.0: development mst: 3577.899169921875ms mobx: 0.063720703125ms
1.0.1: development mst: 744.248291015625ms mobx: 0.052001953125ms
Uhm, so 1.0.1 is faster indeed
Anecdotally, we've been seeing initializing a store with about 800 models having about five or six simple fields (2 strings, a number, and an ID) taking around 200ms with MST 1.0.1. It's not terrible, but it's definitely chewing up some of our perf budget before we've really gotten to the complex stuff ;)
@adamsanderson Does it perform better with 1.0.0? :)
It's seemed pretty consistent for us between versions. I can grab a profile (dunno if that would be helpful), though I think my scenario is pretty similar to the example above.
We've run into this with our tree data as well. Currently on 1.0.0, so will see if the upgrade to 1.0.1 helps and report back.
Me too: it's crashing Chrome before I can load ~400 complex objects.
As said before, we haven't looked at performance improvements yet. If someone wants to profile it a bit to understand which functions are taking so long, we can improve accordingly :)
I've attached a profile, and it looks as if most of the time is spent in the nested object creation during the array replace, but I'm not that familiar with JS optimization.
Profile-20171011T202546.zip
I had to reduce it down to 10 objects to run the profile as otherwise it crashes the page with the profiler running.
We can help. This would be really critical for us.
^ I'm with my fellow React Native brethren, Sanket, here. I'd love to help out in any way.
One very corner-case thing I've noticed when using frozen: the act of freezing objects seems to incur a lot of overhead, especially on Android devices. I know of other libs that opt out on prod builds. I'd love to see some kind of types.yolo() that would be frozen minus the freeze. Or preferably, some kind of switch to turn off freeze/isFrozen just in prod?
Yeah, that was already a TODO in the code; that would be a starting point for performance improvements for sure. node.ts and mst-operations.ts contain a lot of those perf TODOs in comments :)
https://github.com/mobxjs/mobx-state-tree/blob/master/src/core/node.ts#L269
Yeah, I'd opt to not freeze at all in prod mode :)
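To illustrate the idea being discussed (freeze in dev for safety, skip it in prod for speed), here is a minimal sketch. `deepFreeze` is an illustrative helper for this thread, not part of the MST API:

```javascript
// Hypothetical env-gated deep freeze: recursively freeze in development
// so accidental mutations throw, but skip the costly Object.freeze walk
// entirely in production builds.
function deepFreeze(value) {
  if (process.env.NODE_ENV === "production") return value; // no-op in prod
  if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
    Object.freeze(value);
    // Freeze nested objects and array elements as well
    Object.keys(value).forEach(key => deepFreeze(value[key]));
  }
  return value;
}

const snapshot = deepFreeze({ items: [{ id: 1 }] });
```

In development the whole snapshot tree ends up frozen; in production the call returns the value untouched, avoiding the per-object overhead mentioned above.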
Performance PRs are welcome!
Note there are quite some // optimization comments in the code which indicate clear possibilities for optimizations. Without really testing, I think the following things might help big time:
- Node is quite a complicated and generic object, but for immutable values (primitives) we should probably create a cheaper and much simpler version of it (ImmutableNode) that doesn't support attaching onPatch / onSnapshot handlers etc. This one might not be trivial, btw.
- Memoize expensive derived values (@computed), like root, environment etc.
- Use null instead of [], which makes object creation cheaper but requires a bit more pre-condition checking in the code. Shouldn't be too complicated either.
Another possible improvement would be a computed value, based on the parent chain, signaling whether any parent has a listener for onPatch / onSnapshot / onAction, and not bubbling the event up at all if none is present! :)
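The "null instead of []" point can be sketched in plain JS (the names below are illustrative, not actual MST internals):

```javascript
// Sketch: allocate the listeners array lazily, only when the first
// listener registers, instead of creating an empty [] for every node.
class NodeSketch {
  constructor() {
    this.patchListeners = null; // cheaper than allocating [] per node
  }
  onPatch(listener) {
    if (this.patchListeners === null) this.patchListeners = [];
    this.patchListeners.push(listener);
  }
  emitPatch(patch) {
    if (this.patchListeners === null) return; // fast path: nobody listening
    this.patchListeners.forEach(l => l(patch));
  }
}
```

The trade-off is exactly the one noted above: every read site must handle the null case, but nodes that never get listeners pay nothing.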
So after some testing today, our issue seems to stem from how deeply nested our tree goes. We have a Stage model that gets children, and those children can have children. In practice, I don't think we go more than a couple of levels deep, but root creation seems to get exponentially worse with the depth of the data. I'm also not sure yet what impact the recursive types are having here.
const ChildTypes = () => {
return types.array(
types.union(
snapshot => {
const getModel = Type => {
const modelTypes = {
'Button': () => Button,
'Checkbox': () => Checkbox,
'Container': () => Container,
'Dropdown': () => Dropdown,
'Embed': () => Embed,
'Input': () => Input,
'List': () => List,
'ListItem': () => ListItem,
'Radio': () => Radio,
'Slider': () => Slider,
'Sprite': () => Sprite,
'Text': () => Text,
'Video': () => Video,
'default': () => null,
}
return (modelTypes[Type] || modelTypes['default'])()
}
return getModel(snapshot.Type)
},
Button,
Checkbox,
Container,
Dropdown,
Embed,
Input,
List,
ListItem,
Radio,
Slider,
Sprite,
Text,
Video
)
)
}
const stage = types.model('Stage', {
children: types.late(ChildTypes)
})
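One incidental micro-optimization for the snippet above: the dispatcher rebuilds its modelTypes lookup table on every snapshot it dispatches. Hoisting the table out avoids that per-snapshot allocation. A sketch (using string stand-ins for the real model types):

```javascript
// Stand-ins for the real MST model types from the example above
const Button = "ButtonModel";
const Checkbox = "CheckboxModel";

// Built once at module load instead of once per dispatched snapshot
const MODEL_TYPES = {
  Button: Button,
  Checkbox: Checkbox
};

function getModel(snapshot) {
  return MODEL_TYPES[snapshot.Type] || null; // single shared table
}

console.log(getModel({ Type: "Button" }));  // "ButtonModel"
console.log(getModel({ Type: "Unknown" })); // null
```

This won't change the asymptotics of deep-tree creation, but it removes one avoidable allocation from the hot path of union dispatch.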
@evanblack yeah, that might confirm my hypothesis about resources being spent bubbling up actions, snapshots and patches. I am also not sure whether avoiding that long call stack with a while loop at the root might improve performance.
Good to see you here too @skellock! 🙌 Let's see how we can optimize this.
I will get some time next week to look into this. Points noted @mweststrate @mattiamanzati
@sanketsahusoft @skellock Feel free to ping me over the mobx-state-tree gitter channel for any question! I will be a little busy next week, but I should have time to reply :)
Thanks for having a look at this, guys. For my use case, I got around the performance problems by managing a compact vs. expanded state (lazily setting deeply nested fields in MST). Obviously it won't work for everyone, but it let me continue on and get my app deployed 🎉
Version 1.3.1 should be a lot faster. Closing this issue for now, although I don't doubt that many more improvements are still possible :). But in that case more measuring tests would be appreciated, so that we optimize the right thing :)
@mweststrate @mattiamanzati
I've been toying around with lazy-initing MST nodes, as my app started to take over 10s to initialize my MST store of nested hell, and I came up with a hacky solution to my problem: don't instantiate a node until something actually cares about its value.
This is still all synchronous and brought me back to 100s-of-ms load times. It makes me wonder whether this would be a better default for MST.
An array of 50,000 items, each with a single string property:
(full) init: 2976.90185546875ms
(full) read all items: 76.2509765625ms
(lazy) init: 140.288818359375ms
(lazy) read all items: 1957.93408203125ms
Would love to hear from you as to how to achieve this properly! Thanks!!
Here's my latest implementation, now running in production. Everything seems to be working so far.
import { extras } from 'mobx'
import { types } from 'mobx-state-tree'
// Wraps a type so that real node instantiation is deferred until first read
export default subType => {
const BlankType = types.model({})
class LazyType extends BlankType.constructor {
constructor (opts) {
super(opts)
// Stash the raw snapshot instead of instantiating the real subtype
let instantiate = this.instantiate.bind(this)
this.instantiate = (parent, subpath, environment, value) => {
let node = instantiate(parent, subpath, environment, {})
node._value = value
return node
}
this.finalizeNewInstance = () => {}
this.createNewInstance = () => ({})
this.getChildNode = (node, key) => {
return this.initNode(node).getChildNode(key)
}
}
// Builds the real subtype node on first access
initNode (node) {
let instance
// Temporarily reset mobx's computationDepth so observables can be
// created from inside a derivation
let cd = extras.getGlobalState().computationDepth
extras.getGlobalState().computationDepth = 0
instance = subType.instantiate(
node.parent,
node.subpath,
node.environment,
node._value || {}
)
node.storedValue = instance.storedValue
node.type = subType
extras.getGlobalState().computationDepth = cd
return instance
}
getValue (node) {
return this.initNode(node).storedValue
}
getSnapshot (node) {
// Serve the stored snapshot without forcing instantiation
return node._value
}
isValidSnapshot (value, context) {
return subType.isValidSnapshot(value, context)
}
}
return new LazyType({ name: subType.name })
}
@dnakov so do you use this subtype to create your models?
@mshibl it creates a new type that is a lazy version of the given type.
const Store = types.model({
myActiveProp: MyActiveType,
myLazyProp: lazy(MyActiveType)
})
I am a little shocked to see the difference in performance between mobx and mobx-state-tree. I just transformed the models of a mobx application to mobx-state-tree, only to find out I can't actually use it: it has to load thousands of objects into memory, and I'm looking at > 10 sec boot time.
Mobx and mobx-state-tree shouldn't differ that much until you actually use mobx-state-tree's functionality (that's when it should kick in).
Would it make sense to opt out of some features of mobx-state-tree to make it more like basic mobx usage? For example, if a model is not likely to be changed, I don't need patches etc.
One could argue that you wouldn't need mobx-state-tree for these models, but I would like to define my models in a similar way, plus I really like the way mobx-state-tree handles deserialization of JSON input.
@dnakov made a good example of lazy loading. However, if one were to loop through the models for the initial view (for example, to do some filtering), then the lazy loading wouldn't make much of a difference.
If the performance penalty during initialization is because of support for the mutation functionality of the objects, then would there be a way to enhance the objects as soon as a first mutation is performed?
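The "enhance on first mutation" idea can be sketched in plain JS with a Proxy. This is a hedged sketch, not an MST feature: `makeExpensive` stands in for whatever heavy machinery (observables, patch support) would be built lazily.

```javascript
// Start with a cheap plain object; swap in the expensive representation
// only when the first write happens. Reads before any write stay cheap.
function lazyUpgrade(plain, makeExpensive) {
  let upgraded = null;
  return new Proxy(plain, {
    get(target, prop) {
      // Reads go to the cheap object until an upgrade has happened
      return upgraded ? upgraded[prop] : target[prop];
    },
    set(target, prop, value) {
      if (!upgraded) upgraded = makeExpensive(target); // first write: upgrade
      upgraded[prop] = value;
      return true;
    }
  });
}
```

As noted above, this only pays off for objects that are mostly read; a loop that mutates everything would still trigger every upgrade.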
@ConneXNL yeah, unfortunately MST in its current form doesn't scale past the simplest examples. I've been searching around for optimizations that could be made, but mostly came to the conclusion that I'd have to nix most of its features for it to be usable for any semi-complex/big models. The lazy model gives me some time to rip MST out of my app, but like you said, it won't work for other use cases.
Would be great to get some guidance from @mweststrate or @mattiamanzati
When working with large amounts of data, this is often an indication that you are working with data points (or just values), and not things that will modify themselves. In that case they are better modelled as frozen than with a Model, which will make them much faster. (Having typechecked frozen objects is something we want to add in the future!)
@mweststrate when using just frozen you will lose MST built-in functionality like references, views, and deserialization (in relation to references).
Also, I am not sure I understand what you mean by data points. Is a spreadsheet with cells something one would do with non-frozen types in MST?
Let's consider a simple 100x100 spreadsheet in MST and Mobx and compare the performance with arguably the exact same functionality:
Prepare some test data:
const data: any = {};
for (let x = 0; x < 100; x++) {
for (let y = 0; y < 100; y++) {
data[x + "," + y] = { x, y };
}
}
MST:
const MSTCell = types.model('Cell', {
x: types.number,
y: types.number,
value: types.optional(types.string, '')
}).views(self => ({
get cor() {
return self.x + "," + self.y;
},
get columnName() {
return self.x; // (for example, number to alphabetic name)
}
})).actions(self => ({
setValue(value: string) {
self.value = value;
}
}));
const MSTSpreadsheet = types.model('Spreadsheet', {
cells: types.optional(types.map(MSTCell), {})
});
console.time("Initializing MST spreadsheet...");
MSTSpreadsheet.create({
cells: data
});
console.timeEnd("Initializing MST spreadsheet...");
// Takes about 2 seconds
Mobx:
class MobxCell {
public x: number;
public y: number;
@observable public value = '';
@computed get cor() {
return this.x + "," + this.y;
}
get columnName() {
return this.x; // (for example number to alphabetic name)
}
@action setValue(value: string) {
this.value = value;
}
}
class MobxSpreadsheet {
@observable public cells: Map<string, MobxCell> = new Map();
public constructor(data: any) {
for (let i in data) {
if (data.hasOwnProperty(i))
this.cells.set(i, Object.assign(new MobxCell(), data[i]));
}
}
}
console.time("Initializing mobx spreadsheet...");
const spreadsheet = new MobxSpreadsheet(data);
console.timeEnd("Initializing mobx spreadsheet...");
// Takes around 0.24 seconds
The MST version above takes a whopping 2 seconds to initialize. Things can only get worse from there as you add more cells or make the Cell model more complex. While having almost the same functionality in place, we have to take an 80-90% performance loss when moving from mobx to MST.
Can this issue at least be reopened? There's certainly a lot more that could be done and I think it may help new MST adopters see that performance is not there yet before they jump into it.
@connexnl, thanks for the clear example. Did you also test with a production build? Could you put the above in a codesandbox? That would ease profiling!
@mweststrate another example is in yarn run speedtest:
-------------------------------------------
Large Model - 0 small & 100 medium children
-------------------------------------------
918ms | x 50 | 18.4ms avg
4,600ms | x 250 | 18.4ms avg
911ms | x 50 | 18.2ms avg
(I ran this on the latest maxed out iMac)
@mweststrate I also seem to be seeing quite a significant difference between creating a store in React Native and in the browser, with about 4,500 items:
React Native: mst: 4417.558ms
Browser: mst: 721.56ms
Actually, I was just about to migrate a spreadsheet from mobx to mobx-state-tree, so here is a sample codesandbox for the spreadsheet example (100x100).
@carlosagsmendes I just changed the codesandbox to test with MST in production mode, which is 6 times faster. Is that acceptable?
https://codesandbox.io/s/rw359zpmno
Initializing MST spreadsheet...: 743.399169921875ms
Initializing mobx spreadsheet...: 123.76220703125ms
Hi @mweststrate I've tried it out locally before creating the codesandbox in development and production mode (using create-react-app). Mobx gets faster in development mode. But I'm unable to see significant changes in Mobx-State-Tree performance between development and production.
@ConneXNL reported 2 secs for mobx-state-tree and 0.24 with Mobx. My times are faster but the proportion is similar so it may be due to different hardware.
Also, I was wondering if performance could be improved in this specific case by not creating all the observables upfront. Wouldn't it be possible to create a map with the cell values just for the cells that actually have data? I'll try this out and will share two examples with Mobx and Mobx-State-Tree tomorrow.
@carlosagsmendes did you check the above sandbox? I changed MST to production mode on the fly; if you can't get the same results (improvements) with a local project, it might be the case that there is a problem in the production mode setup (in either MST or your project).
@mweststrate how are you getting a 6x bump? I'm seeing 15-25% improvement
[Dev] Initializing MST spreadsheet...: 743.10107421875ms
[Prod] Initializing MST spreadsheet...: 645.926025390625ms
Sorry for not being clear @mweststrate, I can get the improvements with a local project with react create app. What I was trying to explain was that like @dnakov I wasn't able to get a 6x improvement.
@mweststrate if I am correct, one can trigger MST production mode by setting process.env.NODE_ENV='production' at the top of the sandbox file (after the imports, or in a separate file that requires the current index.js). With that, I am also seeing hardly any improvement between development and production for mst/mobx.
Is ~1s acceptable for a 100x100 cell grid? For my use case it's definitely not, as I need to support about 10k editable nodes and 90k read-only nodes (I'm not even sure I won't run out of memory using current MST). So if I were to use MST, it would have to be as lightweight as the mobx/class usage (which I implemented with success).
I can imagine that for my 90k read-only MST nodes I could have a light version of some MST-compatible type that only uses the views/reference functionality (which is kind of like the MobxCell minus the setValue action).
Because if every MST model type has to support every feature of MST, and those features are initialized when creating an instance of a type (without being able to opt out or lazy-load heavy features), then I doubt MST can be performant when loading tens of thousands of nodes into memory.
@ConneXNL I think your analysis is correct. 100K objects is definitely not what MST is designed for (see also: https://github.com/mobxjs/mobx-state-tree#when-not-to-use-mst). Even if we could fix performance, it would be very expensive memory-wise, because we bind all actions (the self thing). That is developer friendly, but means an additional 100K closures per action. The same holds for many other features in MST as well. For example, by default (at least in non-prod), it will typecheck every single instance to see if the JSON is valid, etc. The impact of that increases quickly with these amounts of data.
With plain MobX you can choose to store actions on the prototype instead (which is the default, unless you use action.bound), which is way more efficient. So for your use case, or at least for the data type of which you have 100K instances, MobX, maybe with something like serializr, sounds like the much more efficient solution indeed.
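The closure-count difference being described can be seen in a few lines of plain JS (a simplified sketch, not MST or MobX internals):

```javascript
// MST-style: actions close over `self`, so every instance allocates
// a fresh function object per action.
function makeBoundCell(x) {
  const self = { x };
  self.setX = value => { self.x = value; }; // new closure per instance
  return self;
}

// Plain-class style: one method on the prototype, shared by all instances.
class ProtoCell {
  constructor(x) { this.x = x; }
  setX(value) { this.x = value; }
}

const a = makeBoundCell(1);
const b = makeBoundCell(2);
const c = new ProtoCell(1);
const d = new ProtoCell(2);

console.log(a.setX === b.setX); // false: two distinct closures
console.log(c.setX === d.setX); // true: shared prototype method
```

With 100K instances and several actions each, the closure variant allocates hundreds of thousands of extra function objects, which is the memory cost described above.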
@mweststrate is there any plan to support storing actions on the prototype in MST in the future, or would this not be possible due to the way MST works? Alternatively, would you recommend just using a single action function that takes a sub-action name as an argument and calls other statically defined functions, to reduce the number of closures per object instance?
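The single-dispatcher idea from the question above could look something like this sketch (illustrative names, plain JS, no MST involved):

```javascript
// Statically defined handlers: created once, shared by all instances.
const handlers = {
  setValue(self, value) { self.value = value; },
  clear(self) { self.value = ""; }
};

function makeCell() {
  const self = { value: "" };
  // One bound entry point per instance instead of one closure per action
  self.dispatch = (name, ...args) => handlers[name](self, ...args);
  return self;
}

const cell = makeCell();
cell.dispatch("setValue", "hello");
```

Each instance then carries a single closure regardless of how many actions exist, at the cost of losing direct method calls and per-action typing.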
@mweststrate while wrapping all actions is helpful, some might easily do without it (if it affects performance dramatically). Maybe make this behaviour optional/configurable instead?
Performance is becoming an issue for me too, particularly with React Native on Android where a Nexus 7 (2013) takes close to 16 seconds to restore the tree from snapshots (written to disk) and become interactive during a cold-start. The snapshots contain ~400 models in total and are spread across 8 files - one for each domain store in my tree. It's not a large tree by any means.
@wbercx even in production mode?
@skellock Even in production mode. Even when I don't render a FlatList ;)
@robinfehr @mweststrate, any thoughts on how we would create a prototype object for actions and where it would be stored? If not, I can look at the code to see how mobx does it and propose a solution.
Hi I'm migrating an app from mobx to mobx-state-tree and I'm facing some performance issues.
Here is the simplest example case:
https://mattiamanzati.github.io/mobx-state-tree-playground/#src=import%20%7B%20observable%20%7D%20from%20'mobx'%0Aimport%20%7B%20types%20%7D%20from%20%22mobx-state-tree%22%0A%0Aconst%20MSTStore%20%3D%20types.model(%7B%0A%20%20%20%20items%3A%20types.optional(types.array(types.string)%2C%20%5B%5D)%0A%7D).actions(self%20%3D%3E%20(%7B%0A%20%20%20%20setItems(items)%7B%0A%20%20%20%20%20%20%20%20self.items%20%3D%20items%0A%20%20%20%20%7D%0A%7D))%0A%0Aclass%20MobxStore%20%7B%0A%20%20%20%20constructor()%7B%0A%20%20%20%20%20%20%20%20this.items%20%3D%20observable(%5B%5D)%20%0A%20%20%20%20%7D%0A%20%20%20%20setItems(items)%7B%0A%20%20%20%20%20%20%20%20this.items%20%3D%20items%0A%20%20%20%20%7D%0A%7D%20%0A%0Aconst%20items%20%3D%20Array.from(Array(100000).keys()).map(x%20%3D%3E%20%60Item%20%24%7Bx%7D%60)%0A%0Aconst%20mstStore%20%3D%20MSTStore.create()%0A%0Aconsole.time(%22mst%22)%0AmstStore.setItems(items)%0Aconsole.timeEnd(%22mst%22)%0A%0Aconst%20mobxStore%20%3D%20new%20MobxStore()%0A%0Aconsole.time(%22mobx%22)%0AmobxStore.setItems(items)%0Aconsole.timeEnd(%22mobx%22)
Replacing an observable array with 100,000 items takes ~5 sec when using MST and only ~0.02 ms when using mobx. The worst part is that it completely blocks the UI while doing so. Is there something I could do to improve speed?
EDIT: Running with NODE_ENV=production does not improve things much.