Palindrom / JSONPatcherProxy

ES6 proxy powered JSON Object observer that emits JSON patches when changes occur to your object tree.

Reflect to proxy object, to observe changes made by setters (rebased) #44

Closed warpech closed 4 years ago

warpech commented 5 years ago

This is PR #41 rebased on current master.

warpech commented 5 years ago

It still needs remaining work as explained in https://github.com/Palindrom/JSONPatcherProxy/pull/41#issuecomment-505009229. I will do it and present here for review.

warpech commented 5 years ago

@tomalec please review the commit in which I make it configurable to use Reflect API or not.

I developed 3 modes of operation:

- true: always update values via the Reflect API (reflection)
- false: always update values via plain assignment
- "auto": use reflection only when a setter is detected for the property

From my benchmarks in Chrome and Firefox, "auto" is always faster than true:

Observe and generate, small object (JSONPatcherProxy - assignment)
 Ops/sec: 76168 ±3.96% Ran 5520 times in 0.072 seconds.
Observe and generate, small object (JSONPatcherProxy - reflection)
 Ops/sec: 66270 ±4.19% Ran 4989 times in 0.075 seconds.
Observe and generate, small object (JSONPatcherProxy - auto)
 Ops/sec: 73282 ±4.24% Ran 5366 times in 0.073 seconds.
Observe and generate (JSONPatcherProxy - assignment)
 Ops/sec: 1868 ±2.68% Ran 133 times in 0.071 seconds.
Observe and generate (JSONPatcherProxy - reflection)
 Ops/sec: 1878 ±2.59% Ran 129 times in 0.069 seconds.
Observe and generate (JSONPatcherProxy - auto)
 Ops/sec: 1886 ±1.71% Ran 203 times in 0.108 seconds.
Primitive mutation (JSONPatcherProxy - assignment)
 Ops/sec: 521795 ±0.95% Ran 26891 times in 0.052 seconds.
Primitive mutation (JSONPatcherProxy - reflection)
 Ops/sec: 435184 ±0.41% Ran 22308 times in 0.051 seconds.
Primitive mutation (JSONPatcherProxy - auto)
 Ops/sec: 482910 ±1.19% Ran 25780 times in 0.053 seconds.
Complex mutation (JSONPatcherProxy - assignment)
 Ops/sec: 13238 ±0.52% Ran 683 times in 0.052 seconds.
Complex mutation (JSONPatcherProxy - reflection)
 Ops/sec: 8932 ±1.00% Ran 461 times in 0.052 seconds.
Complex mutation (JSONPatcherProxy - auto)
 Ops/sec: 11083 ±1.66% Ran 583 times in 0.053 seconds.
Serialization (JSONPatcherProxy - assignment)
 Ops/sec: 2053 ±0.98% Ran 106 times in 0.052 seconds.
Serialization (JSONPatcherProxy - reflection)
 Ops/sec: 2060 ±0.86% Ran 106 times in 0.051 seconds.
Serialization (JSONPatcherProxy - auto)
 Ops/sec: 2080 ±0.31% Ran 106 times in 0.051 seconds.

In this case, the 3 modes of operation seem unnecessary. I could simplify the config and leave just 2 modes of operation:

WDYT?
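For illustration, the three modes under discussion can be sketched as a proxy set trap that switches between plain assignment and Reflect.set. The names and internals below are hypothetical, not the actual JSONPatcherProxy source:

```javascript
// Hypothetical sketch (not the actual JSONPatcherProxy internals) of a
// proxy `set` trap implementing the three modes:
//   useReflection = false  -> always plain assignment (fast path)
//   useReflection = true   -> always Reflect.set (respects setters)
//   useReflection = 'auto' -> Reflect.set only when an own setter exists
function makeSetTrap(useReflection) {
  return function set(target, key, value, receiver) {
    const descriptor = Object.getOwnPropertyDescriptor(target, key);
    const hasSetter = descriptor && typeof descriptor.set === 'function';
    if (useReflection === true || (useReflection === 'auto' && hasSetter)) {
      Reflect.set(target, key, value, receiver);
    } else {
      target[key] = value;
    }
    // ...the real observer would record a JSON Patch operation here...
    return true;
  };
}

const observed = new Proxy({}, { set: makeSetTrap('auto') });
observed.x = 42;
console.log(observed.x); // 42
```

Note that the own-descriptor check in "auto" mode misses setters inherited from a prototype, which is exactly the blind spot raised later in this thread.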

tomalec commented 5 years ago

😎 Looks good to me, true ;) Have you checked how this change degrades performance of no-reflection (before this PR) vs. no-reflection (after) — how does the additional function call in the no-setter scenario slow down the operations?

tomalec commented 5 years ago

Ok, I found a scenario where "auto" fails to detect setters:

class Foo {
  set prop(newVal) {
    console.log('setter called on', this, '.prop:', newVal);
  }
}
class Baz extends Foo {}
const obj = new Baz();

// The setter lives on Foo.prototype, not on the instance itself,
// so an own-property lookup does not see it:
Object.getOwnPropertyDescriptor(obj, 'prop'); // undefined

That's why I think we need three ways.

Also, I'm not so convinced by true, false, "auto" instead of true, false, undefined, nor by the cost of comparing on every operation (useReflection === "auto") instead of checking for a more primitive type.

warpech commented 5 years ago

😎 Looks good to me, true ;) Have you checked how this change degrades performance of no-reflection (before this PR) vs. no-reflection (after) — how does the additional function call in the no-setter scenario slow down the operations?

Do you mean checking this kind of micro-benchmark? http://jsbench.github.io/#0bd655b5edb3f5b2560c8031c21332c3

I am not sure this is a bottleneck at all. Of course, I could have a completely different trap for each of the 3 modes, but that would make the code harder to maintain.

Ok, I found a scenario where "auto" fails to detect setters:

Great catch!

I guess we would need getPropertyDescriptor, a method that is not part of the standard and looks quite slow: https://gist.github.com/WebReflection/3373484, my bench: http://jsbench.github.io/#08ad9b3ff64f2dd2bbbe045f82aaf21b

I guess that this undermines the reason for having the auto mode. Should I change the auto mode to be that slow or just remove the auto mode? I think I should remove it.
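Such a getPropertyDescriptor helper — absent from the standard — is commonly implemented by walking the prototype chain; a minimal sketch (following the idea in the linked gist, not its exact code):

```javascript
// Hypothetical getPropertyDescriptor: like Object.getOwnPropertyDescriptor,
// but walks the prototype chain so it also finds inherited setters.
function getPropertyDescriptor(obj, key) {
  let current = obj;
  while (current !== null) {
    const descriptor = Object.getOwnPropertyDescriptor(current, key);
    if (descriptor) {
      return descriptor;
    }
    current = Object.getPrototypeOf(current);
  }
  return undefined;
}

// With the Foo/Baz example from above, the inherited setter is now found:
class Foo { set prop(v) {} }
class Baz extends Foo {}
const obj = new Baz();
console.log(typeof getPropertyDescriptor(obj, 'prop').set); // "function"
```

The chain walk is what makes it slow relative to the single own-property lookup, which is the cost at issue here.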

warpech commented 5 years ago

Also, I'm not so convinced by true, false, "auto" instead of true, false, undefined, nor by the cost of comparing on every operation (useReflection === "auto") instead of checking for a more primitive type.

And I am not convinced that we should choose our API based on nanobenchmarking, even if there is a difference (but seems there isn't: http://jsbench.github.io/#b9f6574b6478e2a9a02ef917c6382ac1).

Choosing the API is one thing, optimizing the execution is another thing.

tomalec commented 5 years ago

Do you mean checking this kind of micro-benchmark?

I was thinking about just running our bench suite before and after this PR on the same env.

I agree we should not put micro-benchmarking over code maintenance, but if we will face more perf issues in real scenarios, that's a place to check and profile.

Should I change the auto mode to be that slow or just remove the auto mode? I think I should remove it.

On my machine, your perf for "getOwnPropertyDescriptor vs getPropertyDescriptor" looks better than "Direct assignment vs assignment in a function".

Thinking of YAGNI: as the application author, I usually know whether I used setters or not. Especially if, in the long term, we allow different strategies for different subtrees (or different Palindrom instances for different apps/partials), then autodiscovery seems like not-so-important sugar.

If (precise) autodiscovery is significantly slower than always reflecting, then we should definitely remove it.

warpech commented 5 years ago

Ok, I found a scenario where "auto" fails to detect setters ...

This is now fixed.

warpech commented 5 years ago

Please find the current benchmark below. There are 3 groups of benchmarks: _updateValueByAssignment (forced assignment), _updateValueByReflection (forced reflection) and "auto".

For each group, the base value is provided for the current master branch (ec7b9bf). What really matters is the % value in parentheses.

Based on the results in Node, I think that "auto" is still very promising and in fact could be made not only the default, but the only available option. In the worst case (complex mutation), it is only 9.5% slower than current master. @tomalec WDYT?

_updateValueByAssignment

Observe and generate, small object (JSONPatcherProxy - assignment)
 ec7b9bf: 136720 Ops/sec
 92da649: 135498 Ops/sec (0.9% worse)
Observe and generate (JSONPatcherProxy - assignment)
 ec7b9bf: 2762 Ops/sec
 92da649: 2777 Ops/sec (0.5% better)
Primitive mutation (JSONPatcherProxy - assignment)
 ec7b9bf: 781852 Ops/sec
 92da649: 793861 Ops/sec (1.5% better)
Complex mutation (JSONPatcherProxy - assignment)
 ec7b9bf: 11808 Ops/sec
 92da649: 12349 Ops/sec (4.6% better)
Serialization (JSONPatcherProxy - assignment)
 ec7b9bf: 1719 Ops/sec
 92da649: 1717 Ops/sec (0.1% worse)

_updateValueByReflection

Observe and generate, small object (JSONPatcherProxy - reflection)
 ec7b9bf: 136720 Ops/sec
 92da649: 115118 Ops/sec (15.8% worse)
Observe and generate (JSONPatcherProxy - reflection)
 ec7b9bf: 2762 Ops/sec
 92da649: 2797 Ops/sec (1.3% better)
Primitive mutation (JSONPatcherProxy - reflection)
 ec7b9bf: 781852 Ops/sec
 92da649: 597516 Ops/sec (23.6% worse)
Complex mutation (JSONPatcherProxy - reflection)
 ec7b9bf: 11808 Ops/sec
 92da649: 8165 Ops/sec (30.9% worse)
Serialization (JSONPatcherProxy - reflection)
 ec7b9bf: 1719 Ops/sec
 92da649: 1729 Ops/sec (0.6% better)

_updateValueByAuto

Observe and generate, small object (JSONPatcherProxy - auto)
 ec7b9bf: 136720 Ops/sec
 92da649: 136351 Ops/sec (0.3% worse)
Observe and generate (JSONPatcherProxy - auto)
 ec7b9bf: 2762 Ops/sec
 92da649: 2793 Ops/sec (1.1% better)
Primitive mutation (JSONPatcherProxy - auto)
 ec7b9bf: 781852 Ops/sec
 92da649: 781270 Ops/sec (0.1% worse)
Complex mutation (JSONPatcherProxy - auto)
 ec7b9bf: 11808 Ops/sec
 92da649: 10692 Ops/sec (9.5% worse)
Serialization (JSONPatcherProxy - auto)
 ec7b9bf: 1719 Ops/sec
 92da649: 1719 Ops/sec (no difference)

tomalec commented 5 years ago

It still confuses me how your "auto" is faster than Reflect, which should conceptually do the same thing but in native code. Could you add a setter to the objects we are using in benchmarks?

warpech commented 5 years ago

It still confuses me how your "auto" is faster than Reflect, which should conceptually do the same thing but in native code. Could you add a setter to the objects we are using in benchmarks?

Rather, I made a microbenchmark for the reflection method that I am using: http://jsbench.github.io/#974afc7ca9d3c7f6c79311a2221cf0ad

The bottom line is that Reflect.set is always slow. Due to some optimization, getPropertyDescriptor + Reflect.set (snippet 4) is faster than Reflect.set alone (snippet 3), but only in Chrome. Firefox does not have that optimization.
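A minimal sketch of the kind of microbenchmark being compared here — plain assignment vs. Reflect.set on the same object (the exact snippets live behind the jsbench links; this is only an illustrative stand-in):

```javascript
// Crude timing harness: run a mutation function many times and report
// the elapsed wall-clock time for each variant.
function bench(label, fn, iterations = 1e6) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn(i);
  console.log(label + ':', Date.now() - start, 'ms');
}

const target = {};
bench('direct assignment', i => { target.prop = i; });
bench('Reflect.set', i => { Reflect.set(target, 'prop', i); });
```

Absolute numbers from such a loop are noisy and engine-dependent; only the relative gap between the two variants in a single run is meaningful.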

[screenshots: jsbench results in Chrome and in Firefox]


The above screenshots show that Chrome is in any case much faster than Firefox. Maybe, then, we should remove the questionable optimizations and use only _updateValueByReflection, to make Firefox faster and the code simpler?

warpech commented 4 years ago

Closing because this PR turned out not to be needed.