BrianHung closed this pull request 9 months ago.
Ooh, I love it @BrianHung - thank you!
I like the screenshot with that performance data. What did you use to generate that from? Do you have any reproduction steps? It would be awesome if we can publicly document the improvement.
> What did you use to generate that from?
I was performance profiling my own app, which has an action to delete several thousand cell nodes at once.
> Do you have any reproduction steps? It would be awesome if we can publicly document the improvement.
Not sure if I can think of a good representative performance benchmark for the `recordPatches` util.
All good @BrianHung! We do have some folders and test files that seem performance related, so I'm going to see if I can dig into what's going on there. I'd like to be able to document exactly what measured improvement, if any, we get out of this PR. Let me know if anything there jumps out at you - it would be great to include that kind of data with this PR.
If you have some extra time, it would help if you could add a test for this specifically. It looks like we mostly test this implicitly across the codebase by using `recordPatches` in a variety of places, but I'd love to have a test file for `mst-operations` on its own.
@BrianHung What profiler are you using?
@jamonholmgren Chrome DevTools.
I don't expect most people to run into performance problems with `unshift`, but we have a use case where we're trying to delete 20,000 nodes from a map, and there `push` is noticeably faster than `unshift`.
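The gap comes from asymptotics: `push` appends in amortized O(1), while `unshift` shifts every existing element on each call, so building an n-element array with `unshift` costs O(n²) overall. A minimal standalone sketch (not MST code) illustrating that a push-built array is just the mirror image of an unshift-built one:

```typescript
// Build [0, 1, ..., n-1] by appending: n pushes, amortized O(1) each.
function buildWithPush(n: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < n; i++) out.push(i);
  return out;
}

// Build the same items by prepending: each unshift shifts all prior
// elements, so n unshifts cost O(n^2) total.
function buildWithUnshift(n: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < n; i++) out.unshift(i);
  return out;
}

// The two orderings are mirror images, so a push-built array can stand in
// for an unshift-built one with a single O(n) reverse at the end.
console.log(buildWithPush(5));           // [0, 1, 2, 3, 4]
console.log(buildWithUnshift(5));        // [4, 3, 2, 1, 0]
console.log(buildWithPush(5).reverse()); // [4, 3, 2, 1, 0]
```

With 20,000 deletions, that is roughly 200 million element moves for `unshift` versus 20,000 appends for `push`, which matches the profiling difference described above.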
> If you have some extra time, it would help if you could add a test for this specifically. It looks like we mostly test this implicitly across the codebase by using `recordPatches` in a variety of places, but I'd love to have a test file for `mst-operations` on its own.
I think this util works pretty well for testing `recordPatches`. It'd be basically what I would've written if a patch test case didn't exist.
Thanks, @BrianHung! We haven't updated the actual document yet, but I think we are going to move towards requiring new tests for contributions, so it would be awesome if you could put together another test file to exercise this specifically - even if it's somewhat duplicative of the test you highlighted here.
The value of adding a duplicative test like that is more in the external communication of the API: "we expect this thing to behave in a certain way, here is how it could be called and how we might expect it to behave". It'll also give us more mechanisms to ask ourselves if any future changes are technically breaking or not.
So it turns out we have some existing benchmarks that you can run inside `packages/mobx-state-tree` with `yarn test:perf`.

Running that on `master` gives me these results:
```
$ jest --testPathPattern=/__tests__/perf/
-----------
Small Model
-----------
11ms | x 100 | 0.1ms avg
93ms | x 1,000 | 0.1ms avg
549ms | x 10,000 | 0.1ms avg
105ms | x 1,000 | 0.1ms avg
11ms | x 100 | 0.1ms avg
------------
Medium Model
------------
21ms | x 100 | 0.2ms avg
220ms | x 1,000 | 0.2ms avg
976ms | x 10,000 | 0.1ms avg
176ms | x 1,000 | 0.2ms avg
20ms | x 100 | 0.2ms avg
------------------------
Large Model - 0 children
------------------------
63ms | x 100 | 0.6ms avg
396ms | x 1,000 | 0.4ms avg
58ms | x 100 | 0.6ms avg
-------------------------------------------
Large Model - 10 small & 10 medium children
-------------------------------------------
61ms | x 50 | 1.2ms avg
275ms | x 250 | 1.1ms avg
63ms | x 50 | 1.3ms avg
-------------------------------------------
Large Model - 100 small & 0 medium children
-------------------------------------------
131ms | x 50 | 2.6ms avg
970ms | x 250 | 3.9ms avg
138ms | x 50 | 2.8ms avg
-------------------------------------------
Large Model - 0 small & 100 medium children
-------------------------------------------
257ms | x 50 | 5.1ms avg
1,162ms | x 250 | 4.6ms avg
263ms | x 50 | 5.3ms avg
Done in 20.94s.
```
On this branch, I get:
```
$ jest --testPathPattern=/__tests__/perf/
-----------
Small Model
-----------
11ms | x 100 | 0.1ms avg
93ms | x 1,000 | 0.1ms avg
466ms | x 10,000 | 0.0ms avg
85ms | x 1,000 | 0.1ms avg
10ms | x 100 | 0.1ms avg
------------
Medium Model
------------
17ms | x 100 | 0.2ms avg
153ms | x 1,000 | 0.2ms avg
808ms | x 10,000 | 0.1ms avg
151ms | x 1,000 | 0.2ms avg
16ms | x 100 | 0.2ms avg
------------------------
Large Model - 0 children
------------------------
50ms | x 100 | 0.5ms avg
337ms | x 1,000 | 0.3ms avg
44ms | x 100 | 0.4ms avg
-------------------------------------------
Large Model - 10 small & 10 medium children
-------------------------------------------
50ms | x 50 | 1.0ms avg
248ms | x 250 | 1.0ms avg
59ms | x 50 | 1.2ms avg
-------------------------------------------
Large Model - 100 small & 0 medium children
-------------------------------------------
118ms | x 50 | 2.4ms avg
582ms | x 250 | 2.3ms avg
124ms | x 50 | 2.5ms avg
-------------------------------------------
Large Model - 0 small & 100 medium children
-------------------------------------------
237ms | x 50 | 4.7ms avg
1,050ms | x 250 | 4.2ms avg
236ms | x 50 | 4.7ms avg
Done in 19.82s.
```
If you want to compare, I used Diff Checker to see what the diff looks like: `master` on the left, this branch on the right:
Pretty good gains there! Once we have some tests written up for this, I think we can merge and enjoy the improvements. Thanks so much @BrianHung.
Added some tests based on the JSON patches test.
What does this PR do and why?
A performance optimization that replaces `reversedInversePatches` and `unshift` with `inversePatches` and `push`. `unshift` is generally slower than `push`, since it has to shift every existing element on each call.
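The shape of the change can be sketched as follows. This is an illustrative recorder, not MST's actual implementation; the `Patch` shape and class names here are assumptions, with only the `reversedInversePatches` / `inversePatches` field names taken from the PR:

```typescript
// Hypothetical patch shape, loosely following JSON Patch.
interface Patch {
  op: string;
  path: string;
  value?: unknown;
}

// Before: each recorded inverse patch is unshifted so the array stays
// pre-reversed. Every unshift shifts all prior entries: O(n) per patch,
// O(n^2) over a recording session.
class RecorderBefore {
  reversedInversePatches: Patch[] = [];
  record(inverse: Patch): void {
    this.reversedInversePatches.unshift(inverse);
  }
}

// After: inverse patches are pushed in recording order (amortized O(1)),
// and the reversed view is produced once, on demand, in a single O(n) pass.
class RecorderAfter {
  inversePatches: Patch[] = [];
  record(inverse: Patch): void {
    this.inversePatches.push(inverse);
  }
  get reversedInversePatches(): Patch[] {
    return this.inversePatches.slice().reverse();
  }
}

const recorder = new RecorderAfter();
recorder.record({ op: "remove", path: "/0" });
recorder.record({ op: "remove", path: "/1" });
console.log(recorder.reversedInversePatches.map((p) => p.path)); // ["/1", "/0"]
```

Both recorders expose the same reversed ordering to callers; the "after" version just defers the reversal so the hot path is a plain append.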