Closed kumar303 closed 5 years ago
Immutable.JS works best for collections of data, and the larger the better. It can be slow when your data comprises lots of small, simple JavaScript objects, each holding a few keys of primitive values.
🤕 it does not look like a good fit for us...
If the issue is about removing `delete` calls in the reducers (which is clearly a programming mistake, partly due to JavaScript being a bad programming language), we could probably use ESLint instead (or as well)?
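A sketch of what that lint rule could look like. ESLint's built-in `no-restricted-syntax` rule can flag any use of the `delete` operator by matching its AST node; the selector syntax is standard ESLint, but this exact config fragment is an assumption, not something the project ships:

```javascript
// .eslintrc.js (hypothetical fragment): ban the delete operator in reducers.
// A delete expression parses as a UnaryExpression with operator 'delete',
// so no-restricted-syntax can match it with an esquery selector.
module.exports = {
  rules: {
    'no-restricted-syntax': [
      'error',
      {
        selector: "UnaryExpression[operator='delete']",
        message:
          'Do not mutate state with delete; build a new object without the key instead.',
      },
    ],
  },
};
```

This would catch `delete state.foo` at lint time, though as noted below it would not catch other kinds of accidental mutation.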
Yes, I think we could use a lint rule to discourage `delete` usage, but there are other referencing issues that can pop up, too. If you copy an object and re-assign one of its properties, you might still be editing the old state tree. Maybe we just need to get better about reviewing reducer code, but I worry that it will be hard to catch all mutation mistakes.
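The referencing issue above can be made concrete. A spread copy is shallow, so nested objects are still shared with the previous state; the state shape here is made up for illustration:

```javascript
// Hypothetical reducer state illustrating the shallow-copy pitfall:
// the spread operator copies only the top level, so nested objects are
// still shared between the old and "new" state trees.
const oldState = { user: { name: 'alice', visits: 1 } };

// Looks like a safe copy, but `user` is the same object in both trees.
const newState = { ...oldState };
newState.user.visits = 2; // edits oldState.user as well!

console.log(oldState.user.visits); // 2 — the old state tree was mutated

// The fix: copy every level you change.
const fixedState = { ...oldState, user: { ...oldState.user, visits: 3 } };
console.log(oldState.user.visits); // still 2; the old tree is untouched
```

This is exactly the class of mistake that is easy to miss in code review, because the copy *looks* correct.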
While this might be a good idea, I don't think we'll realistically implement this, so I am going to close this issue for now. We can still reopen it if needed.
For QA: please check for regressions on the following pages: TBA
The Redux pattern only fully works if the state object is 100% immutable. Usually we are pretty good about this and can catch mistakes in code review but mistakes still creep in (see https://github.com/mozilla/addons/issues/12278).
Let's remove the cognitive burden of looking for mistakes and just use ImmutableJS, which will throw an error for any accidental mutation.
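The throw-on-mutation idea can be illustrated without Immutable.JS itself. A minimal sketch using plain `Object.freeze` (in strict mode, writing to a frozen object throws a `TypeError`); Immutable.JS takes a different approach, returning new collections from updates rather than freezing:

```javascript
'use strict';

// Minimal sketch of surfacing accidental mutation, using Object.freeze
// rather than Immutable.JS: in strict mode a write to a frozen object throws.
const state = Object.freeze({ count: 1 });

let threw = false;
try {
  state.count = 2; // accidental mutation
} catch (e) {
  threw = e instanceof TypeError; // true in strict mode
}

console.log(threw); // true
console.log(state.count); // 1, the state was never changed

// With Immutable.JS the equivalent update would instead return a new
// collection, e.g. map.set('count', 2), leaving the original untouched.
```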
https://redux.js.org/recipes/usingimmutablejs
There are some downsides that we should consider -- outlined in the article above.