garrettg123 opened 11 months ago

I have an object with a child array that gets normalized. However, when a new item is added to the array, the parent doesn't show the new object in the array. How can I have it update the array?
How is the new item added to the array? I guess you just have a mutation which returns this new child item as a response? Anyway, assuming you have an object like:
```ts
{
  id: 1,
  children: [{ id: 2, name: 'name 2' }],
  anotherAttr: 'x',
}
```
and you want to add `{ id: 3, name: 'name 3' }` to `children`, then the mutation response needs to contain both the old children and the new one:
```ts
{
  id: 1,
  children: [{ id: 2, name: 'name 2' }, { id: 3, name: 'name 3' }],
}
```
Alternatively, if your mutation response is just `{ id: 3, name: 'name 3' }`, you need to reshape the response into the form above.
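For illustration, a minimal sketch of such a reshaping - the `Parent`/`Child` types and the `appendChild` helper are hypothetical, not part of normy:

```ts
// Hypothetical types and helper for reshaping a bare child response
// into the parent-with-children shape that normy can merge.
type Child = { id: number; name: string };
type Parent = { id: number; children: Child[] };

const appendChild = (parent: Parent, newChild: Child): Parent => ({
  id: parent.id,
  children: [...parent.children, newChild],
});

// appendChild({ id: 1, children: [{ id: 2, name: 'name 2' }] }, { id: 3, name: 'name 3' })
// => { id: 1, children: [{ id: 2, name: 'name 2' }, { id: 3, name: 'name 3' }] }
```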
In the future I will think about other approaches, like the one in this issue - https://github.com/klis87/normy/issues/1 - which could allow array transformations for top-level arrays as well.
In this particular case, the problem seems to show up when refetching infinite data in tRPC:
```ts
// without this, the Task objects aren't updated for task.get queries
utils.tasks.list.getInfiniteData({ type: 'todo' })?.pages.forEach(page =>
  page.items.forEach(task => {
    console.log('Invalidating task %o', task.title, task)
    utils.task.get.invalidate(task.id)
  })
)

// with this, only the tasks.list infinite query is updated
utils.tasks.list.refetch({ type: 'todo' })
```
And it's not just arrays. So in this case I need to refetch everything anyway, and the normalization isn't useful.

What I would expect is that refetching an infinite query list would normalize each item and update the cache, which would then update the other queries containing the same items (i.e. in my case, updating tasks.list would update each task.get).
`utils.tasks.list.refetch({ type: 'todo' })` will invalidate only this query. Refetching a query will not update any other query, as queries are supposed to be read-only. Only mutation responses work like this. This is by design; otherwise any query could affect other queries, which would be confusing and could also cause performance issues.
To fix your problem, instead of (or apart from) invalidating `{ type: 'todo' }`, you need to update the data of this query yourself; then the new data will be normalized and the other related queries should be updated. See https://github.com/klis87/normy/blob/master/examples/trpc/src/components/app.tsx#L40 as an example.
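A sketch of what that manual update could look like with the tRPC helpers from your snippet - `updatedTask` is a hypothetical object you already have, and the exact `setInfiniteData` callback shape depends on your tRPC version:

```ts
// Patch one task inside the cached infinite query data by hand;
// normy then normalizes the new data and syncs related queries.
utils.tasks.list.setInfiniteData({ type: 'todo' }, oldData => {
  if (!oldData) return oldData;
  return {
    ...oldData,
    pages: oldData.pages.map(page => ({
      ...page,
      items: page.items.map(task =>
        task.id === updatedTask.id ? { ...task, ...updatedTask } : task,
      ),
    })),
  };
});
```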
If this does not help, please extend the example I pasted to show a minimal case of what does not work.
Got it, so I should receive the updated objects in a mutation rather than in a query. That makes a lot of sense.
What I'm doing is running a job, and when it's done, I refresh the data. I'm not sure there's a better way than invalidating everything that was updated, because the data can change very drastically, so it might be easier just to refetch it all. I just thought that query data was also normalized and updated other queries that return the same object.

EDIT: Does normy already know which queries contain a certain object? Because perhaps there could be some functionality to invalidate only the queries containing the updated object(s), rather than doing so manually (i.e. searching through an infinite query).
> I just thought that query data was also normalized and updated other queries that return the same object.
After query data is saved, its data is indeed normalized. But that's it. It is assumed that fetching a query should not change the data of other queries, because it is usually a GET request.

If the query data is different though, the normalized objects will be updated, but other queries from the framework of your choice (like tRPC) will not be updated.

A mutation is a different thing. Its data is used to calculate which queries depend on the returned objects; those objects are then merged with the normalized data, and each dependent query is recalculated by denormalization and updated, so you do not need to update or invalidate it yourself.
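To illustrate the mechanism, here is a simplified sketch of the idea only - not normy's actual internals:

```ts
// Objects live in a store keyed by id; each object knows which queries
// depend on it. A mutation response is merged into the store, and the
// dependent queries are then recalculated by denormalization.
type NormalizedObject = { id: number } & Record<string, unknown>;

const store: Record<string, NormalizedObject> = {
  '@@1': { id: 1, name: 'name' },
};
const dependentQueries: Record<string, string[]> = {
  '@@1': ['book', 'books'],
};

const applyMutationResponse = (response: NormalizedObject): string[] => {
  const key = `@@${response.id}`;
  store[key] = { ...store[key], ...response }; // merge into normalized data
  return dependentQueries[key] ?? []; // queries to denormalize and update
};

applyMutationResponse({ id: 1, name: 'updated name' });
// => ['book', 'books'] - both get recalculated with the new object
```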
The only case for now where it does not work is endpoints like GET /books combined with mutations like DELETE /books/1, as without extra hints the lib cannot know how to update top-level arrays. This is a feature I am going to work on after 1.0 is released.
> Does normy already know which queries contain a certain object?
Yes, it knows, and this is how the automatic data update after a mutation works.
In your case, if you know the object, and for some reason you do not have it in a mutation, or you do not have a mutation at all, perhaps you could use https://github.com/klis87/normy/tree/master/packages/normy-react-query#useQueryNormalizer-and-manual-updates-arrow_up

But if the update involves an array mutation, perhaps it will not work in your case. So, if you would like a way to know which queries to invalidate, we could introduce a new method, like `getDependentQueries`, which you could then map over and invalidate, for example. Please let me know if this would help your use case; I actually thought about such a feature recently, but I was not sure it would be useful.
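Usage could look something like this - purely hypothetical, since the method does not exist yet, and `queryNormalizer`, `queryClient`, and `updatedTask` are assumed to be in scope:

```ts
// Proposed: ask normy which queries contain the given object,
// then invalidate just those instead of searching manually.
const queryKeys = queryNormalizer.getDependentQueries({ id: updatedTask.id });
queryKeys.forEach(queryKey => queryClient.invalidateQueries({ queryKey }));
```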
Yes, that's a good idea. I thought normalizing meant that any time new data is received, whether from a mutation or a query, it updates the normalized version and all the dependent queries. `getDependentQueries` would help here. I'm wondering if there's a good case for a query's cache not to be updated when new data is received.
> I'm wondering if there's a good case for a query's cache not to be updated when new data is received.
My reasoning is the following:
This was my original concept, and actually the last point would not be so bad anymore: thanks to the mutation optimization in https://github.com/klis87/normy/releases/tag/%40normy%2Fcore%400.9.0, no query would be updated unnecessarily.
This is the related discussion - https://github.com/klis87/normy/pull/4 . I think we could reconsider this feature of updating all relevant queries after query data as well, because the mutation optimization concept will prevent performance from degrading. We could also make this behaviour configurable, like the `normalized` prop: you could turn it on for all queries, or just for some, as sketched below.
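A hypothetical shape of such an option - the `normalizeQueryUpdates` name is invented for this sketch, and `fetchBooks` is a placeholder; only `createQueryNormalizer` exists today:

```ts
// Hypothetical config sketch - nothing here is implemented yet.
const queryNormalizer = createQueryNormalizer(queryClient, {
  normalizeQueryUpdates: true, // sync other queries on query responses, globally
});

// ...or per query, mirroring how the existing `normalized` flag is scoped
useQuery({
  queryKey: ['books'],
  queryFn: fetchBooks,
  meta: { normalizeQueryUpdates: true }, // hypothetical per-query opt-in
});
```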
What do you think?
Hello, thanks for inviting me to the discussion :)
As I wrote in this thread, I still believe that adding/removing data is the responsibility of the backend, so if I perform a mutation that I merely assume will add a new object to some cached collection, I just have to make a repeated request to the server.

All I want from normalization is the synchronization of existing data. So for the corner cases where I am really sure that some object will be added to or deleted from an existing collection, this functionality is quite sufficient.
> I'm wondering if there's a good case for a query's cache not to be updated when new data is received.
The simplest example: you have a list of comments and a text field for entering new comments.
After clicking the "Submit Comment" button, you will see a pop-up window with the text of your new comment and confirmation that it was successfully sent to the server.
However, before this comment appears in the comment list, it may, for example, have to pass moderation, or perhaps it will be rejected altogether, or the server paginates the comments. There are a lot of options, and you will either end up with invalid data in the list, or you will have to duplicate all the server logic.
Just invalidate the request :)
@pklsk Ok, so if I understand you correctly, you are not a fan of in-memory array data updates, like adding/removing items from an array. But I wonder what your opinion is on the following. You have the data below:
```ts
const books = [{ id: 1, name: 'name' }]
const book = { id: 1, name: 'name' }
```
Imagine you refetch `book` and you suddenly get `{ id: 1, name: 'updated name' }`.

Do you expect `books` to be updated automatically for you as well? Or not, because a `book` refetch should not affect the `books` query?
Currently, for a mutation with a response like `{ id: 1, name: 'updated name' }`, of course both `book` and `books` will be updated, because this is a mutation response. But after any query refetch, only the normalized data will be updated; other queries will not be updated automatically.
> Do you expect `books` to be updated automatically for you as well?
Yes, this is exactly the behavior I expect :)
At first glance this may seem contradictory, but here I rely on the fact that normalization allows you to update and synchronize existing data.
> Currently, for a mutation with a response like `{ id: 1, name: 'updated name' }`, of course both `book` and `books` will be updated, because this is a mutation response.
This is exactly what I am talking about above. If, in response to a mutation, the server sends updated data for a specific object that is already stored in the cache, I simply get its current state.

In general, to summarize, I think that all instances of objects should be synchronized both in response to a mutation and in response to a query. But adding an object to, or removing it from, a collection should only happen in response to a corresponding request to the server.
But, of course, this is just my personal opinion :)
Thanks for your opinions @pklsk and @garrettg123. I think I will add an option which turns this behaviour on (updating queries on query responses) as a new config option, so that you could enable it globally, or per dedicated queries.

But I can start working on this only after I finish the rtk-query integration, which could take several weeks.

For now I recommend just using `setNormalizedData` on query success, which will do the very same thing - see the sketch below.
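A minimal sketch of that workaround, assuming `useQueryNormalizer` from @normy/react-query and the tRPC utils from earlier in the thread:

```ts
const queryNormalizer = useQueryNormalizer();

const onJobDone = async () => {
  // fetch fresh data once, then push it into the normalized store by hand,
  // which updates every other query containing the same objects
  const tasks = await utils.tasks.list.fetch({ type: 'todo' });
  queryNormalizer.setNormalizedData(tasks);
};
```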
> But adding an object to, or removing it from, a collection should only happen in response to a corresponding request to the server.
In this regard, I personally have a different opinion. Imagine you have a list with 100 items, and a mutation which adds an item to this list. For me, it is much better to add the item to the list in memory instead of invalidating the list query. It is very wasteful to refetch a big array when you can update it yourself, as in the sketch below.

The catch, of course, is that you need to do it manually.
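For instance, with plain react-query - the `['books']` key and the `createdBook` mutation response are placeholders:

```ts
type Book = { id: number; name: string };

// Append the mutation response to the cached list in memory -
// no refetch of the whole 100-item array needed.
queryClient.setQueryData<Book[]>(['books'], old =>
  old ? [...old, createdBook] : [createdBook],
);
```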
I am wondering what your opinion is about this feature - https://github.com/klis87/normy/issues/1
I just read (most of) the discussion in this thread and in #4. I see the main argument against adding automatic query synchronization is that "queries should only read data and not mutate it"... if I understand correctly, this slightly misses the point.

The "state" we're dealing with here is external to the application - it typically lives on the server - so our application is not the only possible mutator of the data. Other people using the app could be mutating the same piece of "data" you're dealing with in your app. This makes the case for taking the most recent data, wherever it comes from - query or mutation response - as the source of truth and adding it to the normalized cache.

Furthermore, Apollo Client, which is arguably the de facto normalized cache implementation for the frontend, behaves exactly like this. The most recent data that arrives on the client, be it from a mutation response or from a query (refetch or not), is considered the latest state of that entity, and thus should be updated in the normalized data pool.

Since this library aims to make the normalized cache the source of truth, any query/component that relies on that object should then be updated as well, regardless of "how" it is accessing the object (i.e. by which query).

As the performance problems would mostly go away now that we have structuralSharing and equality comparison, is there any reason not to reopen & merge #4?
> Furthermore, Apollo Client, which is arguably the de facto normalized cache implementation for the frontend, behaves exactly like this.
I did not know this; it is a very strong argument for adding this feature.
> As the performance problems would mostly go away now that we have structuralSharing and equality comparison, is there any reason not to reopen & merge #4?
The problem is that we need to support all plugins, and we need to make sure that infinite loops are prevented when syncing the cache and updating data manually. So some extra work is required. I am adding this to my list, but I do not promise I will do it within 2 weeks or so, as I am busy at the moment. However, I would really appreciate some help if you have time. Or, as a last resort, we could try to do this one plugin at a time, in a way which will not break the other plugins. So maybe #4 is ready or almost ready to be merged; we would need to verify and test it though.
In the meantime, `setNormalizedData` could be used to do this synchronization manually: we get data from a query and call `setNormalizedData`, which will sync all queries.