With the schema you suggested, there is duplication of data. If the same post appears in several reddits, there is no canonical representation of it in the state.
If there is no canonical representation, implementing something like EDIT_POST
becomes problematic because copies of its data can be anywhere in the state, and you'd have to update them all. This is why I don't see it as a viable solution. There should be only one copy of any model, and it should be keyed by an ID.
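A minimal sketch of that shape, assuming a hypothetical `EDIT_POST` action (the names below are illustrative, not from the docs):

```js
// One canonical copy of each post, keyed by its ID; listings store only IDs.
const state = {
  postsById: {
    '3gu9b0': { id: '3gu9b0', title: 'Lenovo crams unremovable crapware...', score: 1024 }
  },
  postIdsBySubreddit: {
    technology: ['3gu9b0'],
    all: ['3gu9b0'] // the same post can appear in many listings without being copied
  }
};

// EDIT_POST only ever has to touch the single canonical copy.
function postsById(state = {}, action) {
  switch (action.type) {
    case 'EDIT_POST':
      return {
        ...state,
        [action.id]: { ...state[action.id], ...action.changes }
      };
    default:
      return state;
  }
}
```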
Note one nice thing: if you fetch a different API endpoint that still happens to include some post, it will be merged into the entity cache. This means all API endpoints “contribute to” the same local cache. With the approach you suggested, however, there is no such benefit: data is stored separately, so you can't reuse already available information if it is requested in a different context.
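For instance, with a normalizr-style payload (the `action.entities` shape here is an assumption, not something the docs mandate), every endpoint's response can be merged into the same table:

```js
// Whichever endpoint the posts came from, they land in the same cache.
function postsById(state = {}, action) {
  if (action.entities && action.entities.posts) {
    return { ...state, ...action.entities.posts };
  }
  return state;
}
```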
As for your original question, some approaches to deleting entities from an entity cache were discussed here: https://github.com/rackt/redux/issues/386#issuecomment-127051008.
In the case where elements can be present in several containers, sure. But for comments on a post, for example (comments belong to one post and one post only), would you have each comment id as a key in the comments reducer? That seems a bit wasteful to duplicate those ids potentially thousands of times.
> But for comments on a post, for example (comments belong to one post and one post only), would you have each comment id as a key in the comments reducer?
Yes, it's still easier to access a comment in the state just by its ID, without thinking about which post it's attached to, and without an inefficient array lookup every time we need to find it (for example, to update it). Things like optimistic updates are also easier when you separate the ID list from the entities.
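A sketch of what that buys you, with a made-up `UPVOTE_COMMENT` action standing in for an optimistic update:

```js
// O(1) lookup and update by comment ID; no scanning of posts or arrays.
function commentsById(state = {}, action) {
  switch (action.type) {
    case 'UPVOTE_COMMENT': // hypothetical optimistic update
      return {
        ...state,
        [action.id]: {
          ...state[action.id],
          score: state[action.id].score + 1
        }
      };
    default:
      return state;
  }
}
```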
> That seems a bit wasteful to duplicate those ids potentially thousands of times.
I don't understand what duplication you are referring to. Please show an example.
Let's take a random thread as an example: https://www.reddit.com/r/technology/comments/3gu9b0/lenovo_crams_unremovable_crapware_into_windows/
There are 2000+ comments on that thread at the moment. Going by the approach in the docs, you would have an array of 2k+ ids in the posts reducer under the id 3gu9b0, and then 2k+ new keys in the comments reducer (this is the id duplication I was referring to). Now, in your connect you would need to load all the comments and then filter the ones belonging to that thread in the merge function, so if you have an object you'll need to `Object.keys` it and then filter on that, which is going to be slower.
In that case it seems like a trade-off between faster loading and faster editing.
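To make the shape under discussion concrete, the normalized state for that thread would look roughly like this (a sketch, with made-up comment ids):

```js
const state = {
  posts: {
    '3gu9b0': {
      id: '3gu9b0',
      title: 'Lenovo crams unremovable crapware into Windows...',
      commentsIds: ['c1', 'c2' /* ...2000+ comment ids */]
    }
  },
  comments: {
    c1: { id: 'c1', body: '...' },
    c2: { id: 'c2', body: '...' }
    // ...the same 2000+ ids again, this time as keys
  }
};
```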
> Going by the approach in the docs, you would have an array of 2k+ ids in the posts reducer under the id 3gu9b0, and then 2k+ new keys in the comments reducer (this is the id duplication I was referring to).
Why is this “duplication” harmful in any way? It's negligible in terms of memory usage. (Two 2k-keyed objects instead of one).
> Now, in your connect you would need to load all the comments and then filter the ones belonging to that thread in the merge function, so if you have an object you'll need to `Object.keys` it and then filter on that, which is going to be slower.
Um, why? You'd need `postsIds.map(id => state.posts[id])`. It's pretty efficient. Worst case, put memoization like reselect on it.
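A memoized version with reselect could look like this (a sketch; the input selectors assume the posts/comments shape discussed above):

```js
import { createSelector } from 'reselect';

// Recomputes only when the post's comment ids or the comments table change.
const selectPostComments = createSelector(
  (state, props) => state.posts[props.postId].commentsIds,
  state => state.comments,
  (commentsIds, commentsById) => commentsIds.map(id => commentsById[id])
);
```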
> Why is this “duplication” harmful in any way? It's negligible in terms of memory usage. (Two 2k-keyed objects instead of one).
If you don't clean them up, and use uuids for example, it can add up quickly. But fair enough, it's still a fairly low amount.
> Um, why? You'd need `postsIds.map(id => state.posts[id])`. It's pretty efficient. Worst case, put memoization like reselect on it.
Sorry if I wasn't clear enough. Getting the post is easy, but getting its comments would be harder, something like:
```js
function merge(state, dispatchActions, props) {
  const postId = props.post.id;
  const comments = Object.keys(state.comments).map((commentId) => {
    if (props.post.commentsIds.indexOf(commentId) > -1) {
      return state.comments[commentId];
    }
  });
  // ...
}
```
But after a good night's sleep I realised I would probably have the comments store emptied on transition to a new post, so it only ever contains one post's comments.
This:
```js
const postId = props.post.id;
const comments = Object.keys(state.comments).map((commentId) => {
  if (props.post.commentsIds.indexOf(commentId) > -1) {
    return state.comments[commentId];
  }
});
```
is a rather expensive way to do this:
```js
const postId = props.post.id;
const post = state.posts[postId];
const comments = post.comments.map(commentId => state.comments[commentId]);
```
Am I missing your point?
> But after a good night's sleep I realised I would probably have the comments store emptied on transition to a new post, so it only ever contains one post's comments.
It depends on the app. If there are thousands of comments under every post, maybe. If it's a thousand or less, I don't see much point. You can also schedule manual garbage collection (i.e. deleting entities that aren't referenced from anywhere) once in a while.
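Such manual garbage collection could be as simple as this sketch (run occasionally, e.g. on route change; the names are illustrative):

```js
// Drop comment entities that no post references any more.
function collectGarbage(state) {
  const referenced = new Set();
  Object.keys(state.posts).forEach(postId => {
    state.posts[postId].commentsIds.forEach(id => referenced.add(id));
  });

  const comments = {};
  Object.keys(state.comments).forEach(id => {
    if (referenced.has(id)) {
      comments[id] = state.comments[id];
    }
  });
  return { ...state, comments };
}
```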
First comment: doh, I'm stupid, you're right.
Cheers for the replies
The docs (very good, by the way!) recommend using the following store structure in the reddit example:
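That structure is, approximately (reproducing the docs' async example from memory):

```js
{
  selectedSubreddit: 'frontend',
  postsBySubreddit: {
    frontend: {
      isFetching: true,
      didInvalidate: false,
      items: []
    },
    reactjs: {
      isFetching: false,
      didInvalidate: false,
      lastUpdated: 1439478405547, // timestamp of the last fetch
      items: [{ id: 42, title: 'Confusion about Flux and Relay' }]
    }
  }
}
```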
Looking at that schema, it seems that if you're a moderator and delete a thread, for example, you might need to dispatch 2 actions or have 2 reducers respond to the same action.
Option 1: 2 actions
Option 2: 1 action & 2 reducers. For that you would need to add the subreddit id to the payload, despite probably not needing it to delete the thread itself, and have one action affecting 2 reducers, which doesn't look very neat.
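Option 2 would look something like this (a sketch; the action and reducer names are made up):

```js
// One action carries both ids...
dispatch({ type: 'DELETE_THREAD', threadId: '3gu9b0', subreddit: 'technology' });

// ...and two reducers have to respond to it.
function threadsById(state = {}, action) {
  switch (action.type) {
    case 'DELETE_THREAD': {
      const { [action.threadId]: deleted, ...rest } = state;
      return rest;
    }
    default:
      return state;
  }
}

function threadIdsBySubreddit(state = {}, action) {
  switch (action.type) {
    case 'DELETE_THREAD':
      return {
        ...state,
        [action.subreddit]: state[action.subreddit].filter(id => id !== action.threadId)
      };
    default:
      return state;
  }
}
```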
How about storing data like this instead:
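Presumably the intended schema, reconstructed from the description below, is something like:

```js
// Child reducers keyed by the parent id: deleting a thread touches one place.
{
  threadsBySubreddit: {
    technology: {
      '3gu9b0': { title: 'Lenovo crams unremovable crapware...' }
    }
  },
  commentsByThread: {
    '3gu9b0': {
      c1: { body: '...' },
      c2: { body: '...' }
    }
  }
}
```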
The difference is having the child reducer keyed by the parent id; this way you only need to handle one element on delete, but then you need to update 2 reducers when fetching anyway (as you can see, my mind changed a bit while writing the issue).
How would you handle deleting/shuffling of data in the first schema?