Closed: migueloller closed this issue 5 years ago.
If you're using `produce` inside a `produce` call, you don't need purity. The outer `produce` call guarantees purity to Redux. Am I missing something?
@aleclarson, we still want to keep the reducers exported from files pure. This is actually recommended in the README at the end of the Redux example.
The issue happens when we try to wrap `counter` with `redux-undo` like this:
```diff
// counter.js
...
-export default produce(reducer, initialState)
+export default undoable(produce(reducer, initialState))
```
`undoable` assumes purity, but `produce` will not "purify" it if you happened to compose this reducer in the body of another reducer's `produce`.
Thoughts on a `produce.pure` function that guarantees purity no matter where it's called from?
Just know that you take a performance hit (the effect depends on the context). If you're passing the same draft to 3 nested producers, the draft will be re-created 3 extra times. I assume you're aware of that, though.
```ts
// curried pure producer
export default produce.pure(reducer, initialState)

// inline pure producer
export default function (state: any) {
  return produce.pure(reducer, state)
}
```
@aleclarson, I like the `produce.pure` idea!
And yes, but for our case, if we implement batching it reduces the number of `produce` calls from *n* to *m*, where *n* is the number of action batches and *m* is the depth of reducer composition (i.e., the number of nested producers), which I think will be negligible in most applications.
The main motivation for this whole thing is that we wanted to batch expensive and unnecessary `produce` calls. We dispatch a very large number of actions (i.e., thousands in a single event handler that should run in at most 16ms), each calling `produce` when received by a reducer. If we batch all the actions and fire them at once as the payload of a single action, then we can avoid all of those expensive `produce` calls. We want to do this:
```diff
// counter.js
...
-export default produce(reducer, initialState)
+export default undoable(produce(enableBatching(reducer), initialState))
```
As you can see, if we put `enableBatching` outside, like this: `undoable(enableBatching(produce(reducer, initialState)))`, then when `enableBatching` loops through the batched array of actions and calls `reducer`, it will call `produce` `actions.length` times.
EDIT: `enableBatching` doesn't assume purity; it works with pure and impure reducers. All it does is handle the batching action, loop through `action.payload.actions`, and call the reducer that was passed in.
```js
import { pure as produce } from 'immer'
// vs
import { produce } from 'immer'
produce.pure()
```

I personally prefer `import { pure as produce } from 'immer'`. Which one do you like best?
`producePure` is probably the most correct.
@aleclarson, that would work too! Would you want me to submit a PR?
Yeah, give it a go. I've no need for this feature, currently.
@aleclarson,
So... in my search for finding the right way to do this I found clarity, and a potential bug in immer.
After much thought, it dawned on me that `producePure` isn't necessary. Using the new `original` feature, it should be possible to accomplish what I wanted.
```diff
// counter.js
...
-export default produce(reducer, initialState)
+export default (state, action) => produce(reducer, initialState)(original(state) || state, action)
```
I could even make my own `producePure`.
```js
// producePure.js
// NOTE: This is a naive and incomplete implementation that just serves as an example.
const producePure = (state, recipe) => produce(original(state) || state, recipe)
```
If it can be done in userland, then I think it's not needed as a feature.
But... unfortunately, `redux-undo` still wasn't working. And after digging in deep, I realized that the reason it wasn't working was not that it assumed immutability. Or maybe that was just half the story. The other half is what I believe to be a bug in immer.
`redux-undo` does a comparison by reference on the `history` state it keeps. It looks something like this: `history._lastUnfiltered === history.present`. Now, I checked to see if immer kept referential integrity of proxies, and indeed, `state.present === state.present` when `state` is a proxy. For different fields, though, this wasn't the case. Interestingly enough, the following happens:
```js
history._lastUnfiltered === history.present // false
original(history._lastUnfiltered) === original(history.present) // true
```
If the targets of the proxies are the same, the proxies themselves should be equal by reference, at least if immer is properly keeping track of its proxies. In this case it seems like it's not.
Am I missing something here? Is this a bug? Would it be possible to use a `WeakMap` to make sure that this is an invariant? It would seem that this type of guarantee is essential, i.e., if `original(a) === original(b)` then `a === b`.
EDIT: I looked more into it, and it seems that a plain object is being used to keep track of proxies. This seems to be a bug and could be fixed by using a `WeakMap` where the key is the actual object?
You want `produce` to track which objects have been proxied, to ensure two proxies are never created for the same object? Seems reasonable.
We'll need to reuse the "draft identity" between proxies, but not the "draft state". When reusing a draft identity, we'll need to remember the previous draft state, so it can be used again when the newer draft is finalized.
A `Map` should be used, because it can be shimmed in ES5.
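A minimal sketch of the kind of per-base draft cache being proposed, so that two requests for a draft of the same base object yield the same proxy. Names (`getDraft`, `createDraft`, `draftCache`) are illustrative, not Immer internals, and a plain copy stands in for real proxy creation.

```javascript
// Sketch: ensure at most one draft per base object by caching drafts.
// Names are illustrative, not Immer internals.
const draftCache = new Map()

function getDraft(base, createDraft) {
  let draft = draftCache.get(base)
  if (draft === undefined) {
    draft = createDraft(base)
    draftCache.set(base, draft)
  }
  return draft
}

// With this in place, original(a) === original(b) implies a === b.
const base = { x: 1 }
const makeDraft = (obj) => ({ ...obj }) // stand-in for proxy creation
const a = getDraft(base, makeDraft)
const b = getDraft(base, makeDraft)
console.log(a === b) // true: one draft per base object
```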
That's a nice `producePure` implementation. It's so tiny, I don't see why it shouldn't be in Immer by default.
@migueloller this issue is becoming hard to follow, with several ideas being floated and retracted. I suggest to:

- open a separate issue for `producePure`, which looks like a good idea, although the immediate need seems to be gone, correct?
- clarify your `original` solution, which I couldn't really follow either; does that mean you would be applying the producer to the original state, so discarding any changes that have been made before?
Oh, that's a good point. Looks like `producePure` is harder than just combining `original` with `produce`. You'll need to proxy the draft copy.
@migueloller Argh, wrote a reply, but github seems really unreliable at the moment. So, shortened summary:
I don't get the `original` implementation entirely, as it discards any changes already made to the draft when invoking the subproducer?

`producePure` sounds like a neat idea, but the immediate need has gone if I understand it correctly?

@aleclarson
You want produce to track which objects have been proxied to ensure two proxies are never created for the same object? Seems reasonable.
That is what Immer should do already (unless the state is not a tree)
This sandbox shows the issue nice and clearly:
https://codesandbox.io/s/xj8rvlxo34
I don't get the `original` implementation entirely, as it discards any changes already made to the draft when invoking the subproducer?
This is a good point. Here's my solution:
For the inner `produce` call, we would re-use the `draft` instance for referential identity, but replace the base state with `draft.copy` (to preserve any changes made to `draft.base` by the outer `produce` call). Once the inner `produce` call finalizes, `draft.base` is reverted and `draft.copy` is not.
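The base-swap idea can be sketched in plain JavaScript. This is a hedged illustration: `withSwappedBase` and the `{ base, copy }` draft-state shape are assumptions for the sketch, not Immer's actual internal API.

```javascript
// Hedged sketch of the base-swap: re-use the draft identity but let the
// inner call see the outer call's in-progress copy as its base, then revert.
// withSwappedBase and the { base, copy } shape are illustrative only.
function withSwappedBase(draftState, run) {
  const savedBase = draftState.base
  // The inner call's base becomes the outer call's copy, preserving changes.
  draftState.base = draftState.copy || savedBase
  try {
    return run(draftState)
  } finally {
    // Revert base once the inner call finalizes; the copy is kept as-is.
    draftState.base = savedBase
  }
}

const draftState = { base: { x: 1 }, copy: { x: 2 } }
const seen = withSwappedBase(draftState, (s) => s.base.x)
console.log(seen) // 2: the inner call sees the outer call's change
```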
`producePure` sounds like a neat idea, but the immediate need has gone if I understand it correctly?

He still needs purity for `redux-undo`, I think.
@aleclarson I am not sure whether your example is a good one; I would rather have expected:
```js
produce(base, draft => {
  assert(original(draft) == base);
  produce(draft, draft2 => { // pass draft, not base!
    assert(original(draft2) == base);
    assert(draft == draft2); // succeeds!
  });
});
```
Your example just starts another producer on the same original object, and that draft is not the same, which looks consistent; any other behavior would be more surprising. Take for example the following:
```js
const base = { x: 3 }
produce(base, draft => {
  draft.x = 2
  produce(base, draft2 => {
    console.log(draft2.x) // would expect 3 here, base didn't change after all!
  });
});
```
Yes, but when the inner `produce` call is `producePure`, the `draft` loses strict equality with `draft2` (which seems to be the problem). https://codesandbox.io/s/6xk0jxmw7r

Because we can't treat a nested `producePure` as an IIFE like we can `produce`.
@aleclarson,
You want produce to track which objects have been proxied to ensure two proxies are never created for the same object? Seems reasonable.
Yes!
That's a nice producePure implementation. It's so tiny, I don't see why it shouldn't be in Immer by default.
That's fair. Once the issue with references is fixed, it should be pretty straightforward to add `producePure`.
@mweststrate,
I don't get the original implementation entirely, as it discards any changes already made to the draft when invoking the subproducer?
That seems to be right! It works in this case because, when composing reducers, the slice of state that is passed down wouldn't have been modified, so `original(state.slice)` is fine to pass down. If that slice of state had been modified, the reducer would receive a stale value.
Can you open a separate issue for the reference question in your last comment? I don't get why that condition should hold. If either of those fields was changed in the draft, it is correct that they are not equal (anymore). It looks orthogonal to this issue. Please include an example with what you observe versus what you would expect.
Yes I can! As soon as I get the chance I'll set up an issue that explains the desired behavior together with a reproduction.
producePure sounds like a neat idea, but the immediate need has gone if I understand it correctly?
Yes, the immediate need is gone since I can implement `producePure` (at least one that works with composing reducers) in userland using `original`.
Re https://github.com/mweststrate/immer/issues/225#issuecomment-431822765:
Maybe this?
```js
const base = { x: 3 }
produce(base, draft => {
  draft.x = 2
  produce(copy(draft), draft2 => { // copy(draft).x is indeed 2, but original(draft).x is still 3
    console.log(draft2.x) // would expect 2 here, since we have the underlying copy
    assert(copy(draft) === copy(draft2)) // succeeds since we haven't mutated draft2 yet
  });
});
```
This would be useful as then we could do:
```js
const purify = (impureReducer, initialState) => (maybeDraft, action) =>
  produce(impureReducer, initialState)(copy(maybeDraft), action)

const pureReducer = purify(impureReducer, initialState)
```
Does this make sense?
@migueloller ehh no. What is `copy`, where does it come from, and why would `copy(base).x` be 2? Do you mean `copy(draft)`?
@mweststrate, yes, I mean `copy(draft)`. I've updated the comment to reflect that! 😅
And `copy` is like `original`, but it instead gives the copy tracked by the proxy if the draft has been modified. Just an idea.
How is that different from passing just the `draft`, @migueloller? Or do you mean a prematurely finalized draft?
I have the impression that I kinda lost track of the actual issue, but the root problem seems to be that you use `produce` outside the `undoable`? It makes sense that it doesn't match, as I assume the latter either wants to diff with the 'final' state or keeps track of the original state. A producer is local, temporal mutability, so it is fundamentally in conflict with concepts that want to reason about time; it sounds like just pushing producers inward should fix the issue?
@mweststrate: producePure sounds like a neat idea, but the immediate need has gone if I understand it correctly?
@migueloller: Yes, the immediate need is gone since I can implement producePure (at least one that works with composing reducers) in userland using original.
Isn't this only safe in Redux when producers avoid deriving from state mutated by other producers?
```js
let base = {}
produce(base, draft => {
  draft.foo = producePure(draft, reduceFoo)
  draft.bar = producePure(draft, reduceBar)
})

function producePure(state, reducer) {
  return produce(original(state) || state, reducer)
}
```
In the above example, the `reduceBar` function must not access `draft.foo`, or it will be using an outdated value.
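The hazard can be illustrated without Immer at all, simulating `original` as a function that always returns the pristine base object (`originalOf`, `reduceFoo`, and `reduceBar` here are illustrative stand-ins):

```javascript
// Plain-JS illustration of the staleness hazard (no Immer involved):
// producePure derives from the pristine base, so a sibling producer's
// change to foo is invisible when bar is computed.
const base = { foo: 1, bar: 1 }

// Stand-in for Immer's original(): always returns the untouched base.
const originalOf = () => base

const reduceFoo = (state) => state.foo + 1          // 1 -> 2
const reduceBar = (state) => state.bar + state.foo  // reads foo!

const draft = { ...base }
draft.foo = reduceFoo(originalOf(draft))  // draft.foo === 2
draft.bar = reduceBar(originalOf(draft))  // sees base.foo === 1, not 2

console.log(draft) // { foo: 2, bar: 2 }; bar was derived from the stale foo
```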
This all assumes that `draft` has its base state re-used for `reduceFoo` and `reduceBar`, instead of re-using the in-progress copy that's been mutated by the outer `produce` call (via `reduceFoo`).
@mweststrate,
How is that different than passing just the draft @migueloller? Or you mean a prematurely finalized draft?
If the draft is passed in, `produce` will be a no-op, so yes, one could call it a prematurely finalized draft.
@aleclarson,
Isn't this only safe in Redux when producers avoid deriving from state mutated by other producers?
That is correct.
@mweststrate,
I have the impression that I kinda lost track on the actual issue, but I have the impression that the root problem is that you use produce outside the undoable? Which makes sense that it doesn't match as I assume the last one either wants to diff with the 'final' state, or keeps track of the original state? A producer is local, temporal mutability, so it is fundamentally conflicting with concepts that want to reason about time, and sounds like just pushing producers inward should fix the issue?
Unfortunately, pushing the producers inward greatly slows down our application since `produce` calls are expensive. We batch our actions, thousands per frame, and if we call our reducer 1,000 times, that's 1,000 `produce` calls.

For this reason, we want to do this: `produce(enableBatching(reducer), initialState)`.
If we did `enableBatching(produce(reducer))`, then it's the same as pitfall (7) shown in the README.
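To make the cost concrete, here is a sketch with a stubbed `produce` that counts invocations. The stub uses a shallow copy as a stand-in for Immer's draft mechanics, and the batching wrapper is an illustrative stand-in as well, not a real library:

```javascript
// Stubbed produce that counts calls, to show why
// enableBatching(produce(reducer)) pays one produce call per batched action.
let produceCalls = 0
const produce = (recipe) => (state, action) => {
  produceCalls++
  const draft = { ...state } // shallow copy stands in for Immer's draft
  recipe(draft, action)
  return draft
}

const reducer = (draft, action) => {
  if (action.type === 'INC') draft.n += 1
}

// enableBatching stand-in: applies the wrapped reducer per batched action.
const enableBatching = (inner) => (state, action) =>
  action.type === 'BATCH' ? action.payload.reduce(inner, state) : inner(state, action)

const wrongOrder = enableBatching(produce(reducer))
const result = wrongOrder({ n: 0 }, {
  type: 'BATCH',
  payload: [{ type: 'INC' }, { type: 'INC' }, { type: 'INC' }],
})
console.log(produceCalls, result.n) // 3 produce calls for 3 actions
```

With the order flipped, `produce(enableBatching(reducer))` would pay a single `produce` call for the whole batch.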
EDIT: In our scenario we're doing `undoable(produce(enableBatching(reducer), initialState))`. `enableBatching` loops through an array of actions and calls the `reducer` that's passed in. It doesn't assume the reducer is pure.
Unfortunately, pushing the producers inward greatly slows down our application since produce calls are expensive. We batch our actions, thousands per frame, and if we call our reducer 1,000 times, that's 1,000 produce calls.
Yeah, so it doesn't have to be entirely pushed inward, indeed; just inside the `undoable`, for which hopefully the same concept holds: that it is only applied after all the batched actions have been executed.

Just double-checking: does `undoable(produce(enableBatching(reducer), initialState))` now work as a solid solution?
It does! I just have to make sure I do this in my root reducer:
```ts
const reducer = (s: State, a: Action): State => {
  s = enableBatching((state, action) => {
    // Handle actions.
  })(s, a)
  s.entities = entities(original(s.entities) || s.entities, a)
  return s
}
```
And in `entities.js`, `export default undoable(produce(enableBatching(reducer), initialState))`.
The reason I changed the topic of this issue to the referential integrity stuff is because I would like to do this:
```diff
// root.js
const reducer = (s: State, a: Action): State => {
  s = enableBatching((state, action) => {
    // Handle actions.
  })(s, a)
- s.entities = entities(original(s.entities) || s.entities, a)
+ s.entities = entities(s, a)
  return s
}
```
And then in `entities.js` do `undoable(originalize(produce(enableBatching(reducer), initialState)))`, where `originalize` is implemented like this:

```js
const originalize = r => (state, action) => r(original(state) || state, action)
```
I could then just do this:

```js
const purifyReducer = compose(originalize, produce)
```
Unfortunately, because of the referential issue, I would have to do `originalize(undoable(produce(enableBatching(reducer), initialState)))`, meaning I can't do `compose(originalize, produce)` 😢
Closing as there is a workable solution
@mweststrate, it would be nice to be able to do this without having to use `original`. Nevertheless, as this issue got a bit all over the place, I'll open a new one once I have some time to put together a minimal repro and propose a better solution.
@migueloller: I just read through this thread. I'm curious what your use case is for dispatching that many actions at once. Could you contact me (@acemarke) on Reactiflux at some point, or maybe file a Redux issue for discussion?
@markerikson, happy to talk! I just sent you a message on Reactiflux.
I believe this plan of action will fix this issue (though I'm not certain yet), which I plan to carry out once #258 is merged.
Basically, we can do some internal trickery to ensure purity without losing `produce`'s copy-on-write behavior (even when using nested producers).
@migueloller If you have time to reproduce the issue in a minimal test case, that would be super helpful.
@aleclarson, that would be great! I'll try to get a repro as soon as I have some time, something I find quite hard to find these days 😅
This is a feature request and I'm more than happy to implement it myself if accepted.
Background
After v1.3.0, recursive `produce` calls result in a no-op. @mweststrate explained the pros and cons of this behavior here. We're building a project with Redux and have been taking advantage of immer to reduce boilerplate in our reducers. While optimizing our use of Redux by batching actions to avoid excessive calls to `produce`, we've run into an issue with this behavior.

Issue
To reduce boilerplate in our Redux reducers, we implement all of our reducers as impure reducers and make them pure by calling `produce(reducer, initialState)`. Some of our reducers use higher-order reducers (e.g., `redux-undo`). Because of Redux's contract, higher-order reducers will often make the assumption that reducers are pure. Unfortunately, after v1.3.0, `produce(reducer, initialState)` will only "purify" your reducer if you're not inside a `produce` call.

Our current solution is to avoid calling `produce` recursively, which removes the ability to compose our reducers. @mweststrate's point (ii) in the comment is a great observation, but for us it's not an issue when composing reducers since Redux reducers are pure.

Here's a small example showing the desired behavior:
This is a sub-reducer that holds a slice of the entire Redux state. One might want to wrap it in a higher-order reducer like `redux-undo`.

This is the root reducer, which has a slice of state handled by `counter.js`. The `counter` reducer should be pure, but when it's called within a `produce` call, which it is, it loses its purity.

Instead, `reducer.js` has to be modified to avoid recursive `produce` calls:

Proposal
Enable recursive calls of `produce` by default, as pre-v1.3.0. Add a new method `setRecursiveProduce` (for lack of a better name) that would allow configuring this behavior to match post-v1.3.0.

Would love to hear some thoughts on our approach and whether other people would find something like this useful. If this is something the maintainers would accept, I would gladly submit a PR.