theKashey / memoize-state

The magic memoization for the State management. ✨🧠

Should detect whole state spreading #5

theKashey opened 6 years ago

theKashey commented 6 years ago

Given the following code:

const mapState = ({ page, direction, ...state }) => ({
  page,
  direction,
  isLoading: isLoading(state)
})

memoize-state should not react to every state change here, but it will, because the rest spread reads all remaining keys of the state.
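
For reference, a user-side rewrite (illustrative only, not a library fix) avoids the rest spread altogether, so the proxy only records the keys that are actually read, assuming isLoading does not rely on page/direction being stripped out:

const mapState = (state) => ({
  page: state.page,
  direction: state.direction,
  // the proxy tracks only the keys isLoading actually touches
  isLoading: isLoading(state)
})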

Possible solutions:

  1. Currently, memoize-state reacts on key get, but it could react on key read. This would require all objects to be wrapped by something with valueOf/toString; the key would be added to the tracking only if someone "reads" it or "returns" it. Could affect performance. Unstable.
  2. Track such situations and encourage the user to change the code:
    • cast the function to a string and search for the spread operator (unstable);
    • detect when all keys of the state are used, and inform the user of the potential problem (see the sketch below). Easy to do, could be enabled by default in dev mode.
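
A rough sketch of how that dev-mode check might look (usedKeys and fn are hypothetical names, not part of the library's API; they stand for the keys the proxy recorded and the user's selector):

// hypothetical dev-mode check: after a selector run, compare the keys the
// proxy recorded against all keys of the state
const allKeysUsed = (state, usedKeys) =>
  Object.keys(state).every(key => usedKeys.indexOf(key) >= 0);

if (process.env.NODE_ENV !== 'production' && allKeysUsed(state, usedKeys)) {
  console.error(
    'memoize-state: the selector reads every key of the state; ' +
    'a rest spread probably defeats memoization: ' + fn.toString()
  );
}
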
faceyspacey commented 6 years ago

A warning in the console is really less than ideal. It's basically an issue similar to forgetting to do immutability, but maybe worse without the console warning.

I hope option 1 can be made stable and performant.

theKashey commented 6 years ago

Or an error in the console. In our test environment any error colours everything in red, so it would be as useful as propTypes, and those are useful.

theKashey commented 6 years ago

PS: Please bump the library version: 1.2.0 could be much more performant than 1.1.6 due to the new API from proxyequal, and 1.2.1 works on IE11/Android.

faceyspacey commented 6 years ago

I put it to 1.2.1, is that ok?

PS: I also decided to add a 2nd arg for mapDispatchToProps based on our discussion. It doesn't sound like we have a strong enough argument in favor of pursuing that route yet, so it's better not to get people focused on the wrong thing. In the current implementation, it just binds an action creators object. I haven't been using this arg, or mapDispatchToProps or mapMergeProps, for over a year now, since I only use redux-first-router's <Link /> component, which performs the dispatch. Correct me if I'm wrong, but mapDispatch and mapMergeProps don't offer any performance gains (in fact they may be worse); they just mean your component code has fewer arguments to pass (sometimes none)? I remember I used to juggle these additional arguments to connect in order to make component code as simple as possible. If there are no perf gains (since the component must re-render anyway with the new action creator functions), this seems like an unnecessary over-optimization to save a few characters and remember fewer arguments to pass (perhaps the only benefit is you can pass these functions down the component tree without having to pass the props they need as well [or pre-bind them in components]). What do you think? If so, then perhaps a modern react-redux API could just be 2 simple arguments. Here's the final implementation with 2 args:

https://codesandbox.io/s/64mzx7vjmz?module=%2Fsrc%2Fconnect%2Findex.js

theKashey commented 6 years ago

Correct me if I'm wrong, but mapDispatch and mapMergeProps doesn't offer any performance gains (in fact it may be worse)

mapDispatch will be executed once, to get the prop names, and absolutely does not affect performance. That's why you can always return "a new function" from mapDispatch without forcing the component to update.

mapMergeProps is rarely used; it's for when you have to dispatch something with the props you just mapped. I personally have 1-2 examples of this.

"Performance" here is simple: you have to re-render the component only if the first part (mapStateToProps) got updated; the rest you don't care about.

faceyspacey commented 6 years ago

Typically, a new doSomething reference will trigger an update every time props change:

const mapDispatch = (dispatch, props) => {
     const doSomething = (arg) => dispatch({ type: 'FOO', payload: { arg, bar: props.bar } })
     return { doSomething }
}

...unless they execute mapDispatch itself on every click or tap. And then call doSomething with the latest props? And, like you said, they get the names in the beginning once. Is that how it works?

something like the following:

props.doSomething = (...args) => mapDispatch(dispatch, this.props).doSomething(...args)
<button onClick={() => props.doSomething(123)}>

Is that what you mean by "get the names"? It simply needs to know the names to make the above assignment to props.doSomething.

If this is true, then the performance is very good for mapDispatch and far better than passing props to components to be used as arguments to handlers.

theKashey commented 6 years ago

Take a look here - https://github.com/reactjs/react-redux/blob/master/src/connect/selectorFactory.js#L74 - there are 3 different types of change:

    if (propsChanged && stateChanged) return handleNewPropsAndNewState()
    if (propsChanged) return handleNewProps()
    if (stateChanged) return handleNewState()

At the same time it tracks the "source" of the change: if (mapDispatchToProps.dependsOnOwnProps).

But!

  • mapStateToProps depends on state, and may depend on props;
  • mapDispatch may depend on props (but usually not);
  • mergeProps depends on changes in these two "sources", and will be executed each time, to merge even unchanged results :(

I might be wrong, but it looks like if your dispatches depend on props, any memoization is fucked, as it may always produce new functions.

This could be fixed:

  1. Wrap the dispatchers in "proxies" (not real Proxies):

    // https://github.com/reactjs/react-redux/blob/master/src/connect/mapDispatchToProps.js
    const wrapDispatchers = (mapDispatchToProps, dispatch) => {
      const factory = normalize(mapDispatchToProps);
      // ownProps here stands for the component's current props, kept up to date in the enclosing scope
      const dispatchers = factory(dispatch, ownProps);
      const names = Object.keys(dispatchers);

      // create a forward proxy
      return names.reduce((map, key) => {
        // a stable function which, on call, executes the factory once more
        // (picking up the current ownProps) and calls the real, already dispatch-bound, action
        map[key] = (...args) => factory(dispatch, ownProps)[key](...args);
        return map;
      }, {});
    };

  2. Still have to use mergeProps. It should be called if ownProps or the result of mapStateToProps changes.

But then one could remove "areStatesEqual" altogether.

theKashey commented 6 years ago

Just found that the internals of the memoization are discoverable in React DevTools:

[screenshot: React DevTools showing the memoization internals]

faceyspacey commented 6 years ago

The only issue I see is this: the child component will re-render no matter what if the props change, so it does not matter that we kept the reference returned from mapDispatchToProps equal. It perhaps only matters in terms of the implementation in this library, where quick shallow comparisons are iterated through.

I'm not seeing how this prevents re-renders in the immediate child; perhaps in the child's children it's nice that they receive the same function reference.

theKashey commented 6 years ago

Maybe I've lost the thread, but you have shouldComponentUpdate for that?

faceyspacey commented 6 years ago

...I mean we can do anything for our own libs. I'm just trying to figure out what the exact thinking of react-redux is.

It seems the "optimization" is simply that if ownProps is not used, it won't call mapDispatchToProps every time:

https://github.com/reactjs/react-redux/blob/master/src/connect/selectorFactory.js#L57 https://github.com/reactjs/react-redux/blob/master/src/connect/wrapMapToProps.js#L47

Note: I think these lines are unnecessary since the logic is handled above: https://github.com/reactjs/react-redux/blob/master/src/connect/wrapMapToProps.js#L41-L43


So this means: when there are new props, there will be new handlers. Plain and simple. There is no optimization for that. But it could be possible if, when the parent updated, this ran:

https://github.com/reactjs/react-redux/blob/master/src/connect/selectorFactory.js#L29

and the component did this: shouldComponentUpdate() => false. Then the existing handler reference would have access to the new props, while the child component did not re-render.
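
A minimal sketch of that idea (hypothetical, not actual react-redux code, assuming a Child component and a mappedState prop): the wrapper keeps receiving new props, the stable handler reads them through this.props, and the child is shielded from re-rendering by shouldComponentUpdate:

class Connected extends React.Component {
  // stable function reference across renders, yet it always sees fresh props
  doSomething = arg =>
    this.props.dispatch({ type: 'FOO', payload: { arg, bar: this.props.bar } })

  shouldComponentUpdate(nextProps) {
    // re-render the child only when the mapped state changed;
    // this.props is still updated even when we return false
    return nextProps.mappedState !== this.props.mappedState
  }

  render() {
    return <Child {...this.props.mappedState} doSomething={this.doSomething} />
  }
}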

They are not doing this though :)

Well, they/we couldn't do it generally anyway, as the child component needs those new props for its own purposes.

The react-redux library ultimately is doing very little with a whole lot of code. This is like the 4th time in several years that I've spent several hours analyzing it. It's not worth it. This can and should be far simpler. If your memoize-state library can work in my demo like that, we have a very powerful solution. Nobody needs mapDispatchToProps in function form--BECAUSE THERE IS NO PERFORMANCE OPTIMIZATION AS A RESULT OF IT! There's only worse performance from using a function here instead of an object of action creators. Even if it tries to maintain the same function references, it wouldn't matter, because the child component will need to re-render anyway with the new props. That's why my demo + memoize-state may in fact be a complete solution right here, right now. No more react-redux. Provided the new way of using the new context API to "broadcast" subscriptions is in fact high quality (and at least about the same speed as the old implementation).

faceyspacey commented 6 years ago

That's not to say memoizing action creators is a good idea; let's forget about that altogether. I'm just trying to figure out what we really need for a high-quality abstraction.

My question simply was this:

- can we prevent re-renders by binding some props/arguments in connect vs. passing all as args in component code?

I don't think there is any performance difference (in the general case). The wrapped component must re-render in both cases.

THEREFORE, who cares about this signature: connect(mapState: Function, mapDispatch: Function)

all that matters is this one: connect(mapState: Function, mapDispatch: Object)

just like in the demo. If the performance-gains hypothesis I just made can be disproved, we need to know. But if not, we can forget about mapDispatch in function form, and therefore we can forget about mapMergeProps, because that's only needed if you use mapDispatch in function form.
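
For reference, the object form in today's react-redux is just this (MyComponent is a placeholder); connect binds each action creator to dispatch once per component instance, so there is no per-render work:

import { connect } from 'react-redux'

const mapState = state => ({ page: state.page })
const actionCreators = { goForWalk: () => ({ type: 'GO_FOR_WALK' }) }

// every action creator gets bound to dispatch; no function-form mapDispatch needed
export default connect(mapState, actionCreators)(MyComponent)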

Then we have a simple formula for react-redux-simple.

theKashey commented 6 years ago

Keep in mind, there are 2 things to solve:

  1. Let the old code work. Too much code has already been written.
  2. Evolve.

This means one could try to make a better react-redux (just move your current experiment to the real repo), and also provide react-redux-compact to let the "old" users use the newish redux adapter.

This also means there must be some "migration stories" about how to do "something" with the old react-redux, and how to do the same with the new one. And why the new one is better. And it will take some time to make it better, as long as there are a lot of stories we don't cover and haven't discussed yet.

faceyspacey commented 6 years ago

I don't care about react-redux or the old ways. I'm building my own framework :)

The old ways are all garbage. If we followed the mainstream, we'd be composing everything until our apps collapse, instead of developing far more fluidly with Redux-First Router. You're a user of that lib, right?

Either way, the kung fu of Reactlandia is more "authentic" than what's mainstream :) and there's a lot of new things coming. I'm just trying to collate all the optimizations of react-redux so none are missing. Basically, it has no crazy optimizations, just support for stupid use cases that make for an unnecessarily complex API, and which can be solved in other ways.

Respond Framework will have components like this FYI:

const MyComponent = (props, state, actions) => ... aka const MyComponent = (props, state, events) => ...

No HoCs necessary. A lazily memoized "UI database" based on your reducers + all selectors: this is what's in the state arg above. events is the mapDispatchToProps arg in object form, but containing all the action creators of the whole app. Forget all the HoC shit. state will use characteristics of what you're doing with memoize-state + Immer.

No more language about "action creators." Same redux functionality. The overall concept: "what if we built React with a global state store in mind from the start?"

PS: RFR is about to have a big release that has been 6+ months in the making. It will be the foundation of Respond Framework.

"Don't just React, Respond" :)

Stage 2 is making Remixx the Redux-compatible replacement that provides the above simplified component API, plus some stuff from MobX, plus "redux modules", etc. That's why I'm doing all this research. It may be something that makes sense for us to collaborate on. You have a lot of experience now with the proxy-based "access" strategies; that will be the next area I move into. Currently, I'm no expert in that area, but will need to be to make the above happen.

theKashey commented 6 years ago

So, then make the first step: extract the code from the sandbox and found a new project, next to RFR.

faceyspacey commented 6 years ago

We still want to help the mainstream react-redux people. We can have both. It also can help find flaws in the approach, like we've been doing. We simply keep both goals in mind.

theKashey commented 6 years ago

So you can create something more perfect for yourself, while they have to maintain some compatibility. Keeping in mind that they are just upgrading to React 16 and the context API, the public API should stay absolutely the same.

But I am always keen to taste something new, something different. And, yep, to try to fix the imperfect things.

faceyspacey commented 6 years ago

...so basically we just gotta make sure memoize-state is extremely powerful for mapStateToProps in the general sense, i.e. solving issues like this one for detecting "whole state spreading." I'll look out for other edge cases.

The UI database concept for "remixx" (which is our own library you mentioned) is this:

const reducers = combineReducers(allReducers) // we like normalized primary state, unlike MobX
const selectors = combineSelectors({
   derivedState1: state => state.foo[state.bar],
   derivedState2: arg => state => state.foo[arg],
})
const uiDatabase = createRemixxStore(reducers, selectors, enhancer)

const router = createRouter(routesMap, options)

const { firstRoute } = createRespondApp(uiDatabase, router)

await firstRoute()
Respond.render(App, document.getElementById('root'))

or:

const { firstRoute } = createRespondApp(reducers, selectors, routes, options)
await firstRoute()
Respond.render(App, document.getElementById('root'))

and then your components look like this:

const MyComponent = (props, state, events) =>
   <div>
       <span>{state.derivedState1}</span>
       <span>{state.derivedState2(props.arg)}</span>
       {/* action creators generated from routesMap in RFR/Rudy :) */}
       <Link to={events.list({ category: 'respond-framework' })} />
   </div>

FYI, the signature of MyComponent is enabled by a simple babel plugin that does this:

remixxConnect()(props => MyComponent(props.props, props.state, props.events))

And FYI, here are some capabilities of the routes map now:

const routesMap = {
   LIST: {
      path: '/list/:category',
      beforeThunk: ...,
      saga: ...,
      thunk: ...,
      observable: ...,
      graphql: gql`
        query Videos($category: String!) {
            videosByCategory(category: $category) {
               name
               youtubeId
           }
      }`
   }
}

So above are several concepts. The different keys on a route, such as saga and graphql, are all stuff that's already built into Rudy/RFR. There is now an async middleware pipeline and you can define any callbacks you want. It's trivial to build sagas/observable/apolloGraphql/codeSplitting middlewares. All routes will have these callbacks if you provide the given middleware. So will the global options arg. And now there is also the concept of nested routes/scenes, so they will have those on enter of a given scene too. That allows for easy sharing of queries/data fetching.

But in terms of our shared focus, derivedState1/2 is the focal point (I just wanted you to take a quick look at how the whole system is supposed to work; basically the main "work" is now in your routes map, and in all the "responses" it gives, which is why it will be called "The Respond Framework").

So basically, we want to set up our UI database once and only once, and in our components never worry about it again. It's SQL for your UI database, where normalization is the name of the game. Selectors are like MySQL views, or more commonly how rows are merged/combined through foreign keys. So, similar to a database, it's a structure/schema you set up once at the beginning.

Sure, there may be times where you add a selector just to use it in one component, but that's fine. The benefit is super clean components. Less composition, fewer HoCs. None of this crap:

compose(withRouter, connect(mapState), graphql(etc), graphql(etc))(MyComponent), which gets very complicated very quickly.

You may be wondering how the Apollo/GraphQL aspect works. Simple: we copy over the denormalized queries that your components receive as props into reducer state. Then we watch the query, and make sure references stay the same to optimize re-rendering when mutations occur. So essentially you have state.apollo.videos, or perhaps just state.videos if you choose not to nest apollo, i.e.: rootReducer = (state, action) => ({ ...combinedReducers(state, action), ...apolloReducer(state, action) }). And that's it. Storing apollo props in redux state is obvious. Why don't we do it? Because nobody has a good routing mechanism for redux outside of RFR. You can manually apply this in your thunks today without the new version of RFR/Rudy. You just need the algorithm to maintain references in the reducer, as apollo queries always return new objects.

The benefits are immense: time travel, which apollo was never good at and basically does not have; you can now select a combination of standard redux UI state + apollo domain state via standard redux techniques; again, no HoCs, cleaner component code; and other reducers can listen to new data from apollo in simple denormalized action objects.
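
A hedged sketch of that reference-preserving step (the action type, payload shape, and the crude deep-equal are made up for illustration):

// reuse the previous item object when nothing changed, so memoized selectors
// and connected components don't see a "new" reference on every query result
const preserve = (prev, next) =>
  prev && JSON.stringify(prev) === JSON.stringify(next) ? prev : next

const apolloReducer = (state = {}, action) => {
  if (action.type !== 'APOLLO_QUERY_RESULT') return state
  const next = { ...state }
  Object.keys(action.payload).forEach(id => {
    next[id] = preserve(state[id], action.payload[id])
  })
  return next
}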


So back to the proxy stuff that this lib does well. Basically, the goal is to make a lazy UI database in terms of your selectors. Only when they are accessed are they run, and watched. By "watched" I mean the component will now re-render when the result changes, similar to how selectors today are only run when attached to a live component instance. By default these selectors need to be instance-level memoized, or ideally be able to be shared, as you mentioned in your "cascading" comment.

A unique aspect is this one: state.derivedState2(props.arg). This is a core idea in making this a complete solution for a UI database that can be created once at the beginning, and not via HoCs. Obviously reselect and react-redux's mapStateToProps also allow you to combine ownProps. So according to traditional logic, that means you have to create a selector for each component so that it can be aware of its props requirements. My current solution (possibly not the best/final solution) is that components can pass arguments to the selectors at "render time." Ideally, these selectors are all cached via the same mechanisms as memoize-state. What I imagine is essentially something like:

derivedState2: arg => state => state.foo[arg]

becomes this:

derivedState2a: state => state.foo[arg1] // arg in closure, now we memoize based on just `state`
derivedState2b: state => state.foo[arg2]

I'm not exactly sure of the implementation, but the point is we need memoize-state to memoize a function that returns a function, tracking arguments on both the original and the 2nd function. If one function can work, we can definitely somehow start watching arguments in both.

But, do we start watching for all these additional arguments forever? Or only when the corresponding component instances are alive? Of course the latter.
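
A rough sketch of per-argument memoization for such a curried selector (memoizeByArg is an invented helper, not a proposed API); the outer call is cached per argument, and the inner selector is then memoized on the state keys it reads, e.g. with memoize-state:

import memoizeState from 'memoize-state'

// cache one memoized selector per argument value
const memoizeByArg = selectorFactory => {
  const cache = new Map()
  return arg => {
    if (!cache.has(arg)) cache.set(arg, memoizeState(selectorFactory(arg)))
    return cache.get(arg)
  }
}

// derivedState2: arg => state => state.foo[arg]
const derivedState2 = memoizeByArg(arg => state => state.foo[arg])

const state = { foo: { a: 1 } }
derivedState2('a')(state) // memoized per `arg` and per accessed state keys
// note: this naive cache never evicts, which is exactly the "forever" concern above;
// tying entries to live component instances would address it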


Here's the subscription-mechanism implementation (pre the new context API):

So it's an inverse of how React/Redux used to work, where each component instance listened to every store update. Instead, component instances notify the store what they want to listen to (using the same techniques as memoize-state/immer), and they only get messages to update when they should. No more comparisons in the HOC.
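
A minimal sketch of that inverse subscription (the names are invented for illustration): each connected instance registers the state keys its selector touched, and the store only pings the instances whose keys changed.

const listeners = new Set()

// a connected instance registers the keys it read, plus a callback
const subscribe = (keys, forceUpdate) => {
  const listener = { keys, forceUpdate }
  listeners.add(listener)
  return () => listeners.delete(listener)
}

// the store calls this with the keys changed by the last action
const notify = changedKeys => {
  listeners.forEach(({ keys, forceUpdate }) => {
    if (keys.some(key => changedKeys.indexOf(key) >= 0)) forceUpdate()
  })
}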


Now, post the new context API, this concept must be a bit different. We could still do the above implementation, but apparently that would lead to "tearing" in async React due to time slicing, i.e. components might try to access dead state and error on undefined.

Here's a quick way to do it off the top of my head:

Basically our top level provider will broadcast to all instances. Then each instance reads the broadcasted message, and if the keys they need are in fact changed, then it updates, otherwise shouldComponentUpdate returns false.

That's a bit more "work" that could affect perf but not much. Perhaps it has some other benefits.


Another option of course is to always run all selectors on each state change. That would be unnecessary work if only some of them are used. By taking this concept, we could simplify the above algorithm so that component instances never need to notify the store of what keys they're watching. Rather, we keep that logic all in the babel-transpiled component HoC (remixxConnect), and since it receives all broadcasts, it simply detects if the changes include what it needs.

The one downside that makes this impossible is that some of our selectors now take arguments. It's a dynamic database, unfortunately.


One of the overall concepts of this approach is that there are likely fewer comparisons, as instead of doing it on every nested component (perhaps redundantly), we do it once at the top level. If "tearing" wasn't an issue, it would be great, because then the store directly tells individual components to update with no more comparisons at the component level. I really hope Fiber is able to broadcast these messages and let components short-circuit via shouldComponentUpdate quickly. It's not the biggest part of the concept here; it just would be nice to have some extreme performance gains in this area, since the "object access" strategy has a few perf costs of its own.

Combining similar/redundant work of both techniques is not what we want.


So anyway, that's the overall concept of Remixx: a normalized (and parameterized) React database you set up once, separately from your components, with a streamlined, obvious component API that allows you to code most of your app via component functions, and with no/minimal HoCs.

The net result is our components become templates like in the old days, except with a modern twist: they are reactive client-side beyond a single render. It's all just React. All of React's capabilities are still available to you. But for the majority of your app you just emit events and respond with data (from your routesMap), and render from these params/props/vars in your components.

Another core notion: it's "evented React." These templates do nothing but standard design logic + emit events. This is different from what you will be doing with Apollo and what we see in a lot of apps, where you're defining queries right alongside your components. In this system, these are dumb templates, and the smartest thing they do is emit/dispatch a simple event object from time to time. The "Respond Way" is to do no more than this; leave it to your routesMap (+ UI database) to handle all the real work.

It's going back to the good old days of server-side MVC. It's an architecture, whereas now we have none (just infinite composability). The Achilles heel of this system is "Redux Modules"; we'll supply this feature as well via a babel plugin that essentially prefixes reducer + actionType names. You'll essentially create a folder /modules/MyReduxModule that exports a component, its reducers + routes (which includes its actions, thunks, sagas, etc). It can be imported into other apps, used in multiple places in the same app, or in multiple platforms you're targeting. So in "Respond Framework" world we have the concept of shareable global-state-driven components. That issue no longer exists. It's no longer a question of "you may not need redux." This is the most powerful way to develop, period, and it's completely modular just like regular components. You plug these "remixx modules" into your app and, boom, you see their state appear in the Redux DevTools. Here's what one key might be called:

state = { category, pages, 'moduleName-09egui5fdg094': {...} }

That key is guaranteed to be unique, like mongo IDs (one in a gazillion). You don't wanna see that long key postfix in your devtools? Well, our business model is we're making our own devtools, and selling those, FYI. Monthly fee of $15/month, and our devtools will be 1000X better than the free one in the Chrome store. ...we'll hide the postfix key if you don't want to see it :) ...among many, many other features I have in mind.


RFR is already the most fluid way to develop React/Redux apps, though there are many use cases it doesn't readily solve. The next version, which will be out in 1-2 weeks, has 75X more capabilities. It's 51% of the work that makes up the Respond Framework. Remixx is just sugar.


The final sugar cube is "react-modern" which is this:

const MyComponent = (props, state, events) => 
  div([
      MyComponent2({ foo: 123 }),
      MyComponent3({ baz: 22 }, span('hello')) 
   ])

No more JSX. No more HTML-like crap. But not via the old way of createElement(MyComponent2). Now you can just use them directly as the functions they are named as. The babel plugin will handle this. Along with Glamorous, that means nothing but JS.

The way you teach students [from scratch] is:

"Code is just a list of what you want done, like a recipe:

- buy eggs
- buy milk
- mix

In code that would be:

buyEggs()
buyMilk()
mix()

But what does "mix" do? There are multiple ways to make an egg breakfast. Well, picture a multi-tiered to-do list:

- buy eggs
- buy milk
- mix
    - beat eggs
    - pour milk
    - fry in pan

How would this look in code:

buyEggs()
buyMilk()
mix()

function mix() {
   beatEggs()
   pourMilk()
   fry()
}

So functions can simply be made up of other functions. Your to-do list becomes a Christmas tree of function calls. As it turns out, this happens to be very similar to how web pages, and essentially anything visual, are displayed on screens. It's just "boxes within boxes." Function calls within function calls. This is exactly what React is made of. Therefore, by learning this technique you can go from knowing zero JavaScript to knowing React in record time.

Let's pretend our multi-tiered to-do list is in fact a plan for a painting:

- Frame
   - Canvas
        - Horses (2 stallions)
        - Barn
            - Door
            - Window
        - Grass

In React it looks almost identical:

Frame(
   Canvas([
       Horses({ kind: 'stallions', number: 2 }),
       Barn([ Door(), Window() ]),
       Grass()
   ])
)

As you can see, it's just specialized syntax to represent the same thing you would write on a yellow post-it note if you were trying to be extra consistent in your formatting.

But more importantly, the design of apps and websites is nothing more than planning out all the tiers (called "nesting") of a painting!

With one difference: digital paintings are interactive and change over time. How might we address that here? Let's imagine the horses are either in the painting or not in the painting (depending on whether their owner took them out for a walk). And to simplify, let's zoom in on the "nesting" and pretend we can only see the "Canvas" level. This brings us to the "Canvas function":

function Canvas(props) {
   if (props.ownerOnWalk) {
     return [
       Barn([ Door(), Window() ]),
       Grass()
    ]
   }

   return [
        Horses({ kind: 'stallions', number: 2 }),
       Barn([ Door(), Window() ]),
       Grass()
   ]
}

And we use it like so:

Frame(
   Canvas({ ownerOnWalk: true })
)

But how do we trigger this event?

function Frame(props, state) {
   return [
     Canvas({ ownerOnWalk: state.onWalk }),
     button({ onClick: goForWalk })
   ]
}

Where did "state" come from?

State is a 2nd argument that "Respond" automatically passes for you. You only have to ever concern yourself with a single props argument.

If you're an advanced user, you've probably recognized there are advanced forms with arrays for children, and that props can in fact be anything other than a plain object and become the value of children, etc.

Ok, but what about "goForWalk" and how did state.onWalk come into existence? Here's the more realistic implementation:

function Button(props, state, events) {
   return button({ onClick: events.goForWalk })
}

When you create your Respond app, it creates both the state and events objects for you and passes them to all component functions in addition to props!

This means your component code can be very simple, yet very capable at the same time. If you can paint a painting, you can paint a Respond/React app. Just emit events in a few places. It's as if you pointed to a part of your painting (like the barn door) and gave it magic capabilities.


Anyway, I could go on and on, and start explaining reducers. This is a concept a long time in the making. It's not Hyperapp. It's not Elm. It's React with an MVC architecture, and some "sugar" to make it extremely accessible, while undoing the "component everything" obsession that no longer serves us.

One last thing: you can do state.onWalk = true, like MobX. We won't have MobX State Tree, just a basic "atom" you can alter, so learners don't have to choose between MobX and Redux. I personally like Redux because of normalization and because it forces you to keep a high-quality structure (as I think you do too). But it should let you use a mutation API if you choose. In fact, your routes essentially become your reducers for these users:

const routes = {
   LIST: { 
      thunk: ({ state, action }) => {
            state.onWalk = true
            state.category = action.category
       }
    }
}

Lastly, we will also have the alternate style of reducer:

const routes = {
   LIST: { 
       reducer: (state, action) => ({ onWalk: true, category: action.category })
    }
}

You can mix these with regular reducers that can only touch one slice of state. If you have both, the regular reducer "wins" because it has a stronger focus on the given state key.
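
A hedged sketch of how that mixing could work (the combination order is an assumption, not a spec): run the route reducer first, then let the regular slice reducers overwrite the keys they own:

const rootReducer = (state, action) => {
  const route = routes[action.type]
  const fromRoute = route && route.reducer ? route.reducer(state, action) : {}
  // regular reducers run last, so they "win" for the slices they control
  return { ...state, ...fromRoute, ...reducers(state, action) }
}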

So that's why Remixx has 2 Xs: RemiXX

- Redux + MobX
- 2 types of reducers
- combineReducers + combineSelectors
- modules
- etc.

:)

theKashey commented 6 years ago

It took some time to read and understand all of this. From one point of view it looks promising, from another a bit extreme. I need to see it in action. And I know a lot of people are JSX haters and they will all fall for the new syntax.

As for the issue itself: I've found a way to solve it, but it will take some time to make it work on systems without Proxies.

faceyspacey commented 6 years ago

Is the Proxy polyfill not a full solution? I thought we could just paste the JS and it would work like the Promise or fetch polyfills. To be honest, I was surprised it existed; I thought proxies were something that couldn't be polyfilled. Enlighten me.

theKashey commented 6 years ago

There are 2 polyfills. One is simple: it can set a trap on set/get/call, and can be done via Object.defineProperty. The second is much more complex, and also patches Object methods to work with the "fake" polyfill.

Unfortunately, I've failed to solve this task (for now; I'm looking at other ways).

babel-plugin-transform-object-rest-spread destroys the proxy. It just gets all the props outside "the state" and stores them in another object that is no longer controlled by me. I could count all keys as used, or all as not used. But I could also trigger on "enumeration"/spreading.
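
Roughly what the transpiled destructuring of the mapState above looks like (simplified): the _objectWithoutProperties helper copies every remaining key into a plain object, so the proxy sees every key as "read".

var page = _ref.page,
    direction = _ref.direction,
    state = _objectWithoutProperties(_ref, ['page', 'direction']);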

theKashey commented 6 years ago

As long as there is no way to distinguish an object spread from a deep-equal, there's really no way to solve this properly. Solution implemented:

  1. proxyequal adds a hook to ownKeys, injecting the "enumeration trap" when someone is going to enumerate the object. It does not affect "normal" objects, or production. When such an object gets accessed, proxyequal will report the enumeration, and memoize-state will provide a link to the offending function (see the sketch at the end of this comment).

Solutions tried:

  1. Via ownKeys, add "enum-start" and "enum-end" guard keys. On accessing a key during the enumeration, return something like {[Symbol.toPrimitive]: ...}, deferring the key-access recording. But if no one casts the "result" to a primitive type, it will not work. And nobody will.
  2. During the spread, record all primitive values, but not objects, as long as access into objects is still tracked. This will fail in situations like state.values ? state.oneKey : state.anotherKey, or deep/shallow compare.

Follow-up: let the user disable this notification, or enable it only for top-level keys.
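
A hedged sketch of the ownKeys hook described above (not the actual proxyequal internals):

// ownKeys fires for spread, Object.keys, for...in, etc., so it is a reliable
// place to notice that someone is enumerating the tracked object
const trackEnumeration = (target, onEnumerate) =>
  new Proxy(target, {
    get: (obj, key) => obj[key],
    ownKeys: obj => {
      onEnumerate(obj)
      return Reflect.ownKeys(obj)
    }
  })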