Closed: LiLatee closed this issue 5 months ago.
Hm, it could be that the refetch indeed causes the stream to stop watching the cache, since it has a no-cache fetch policy.
But if you watch the cache anyway, you don't need the request controller at all - just execute (`client.request()`) the pull to refresh request with `.NetworkOnly`; this will update the cache, and the cache will refresh the original request that uses `.CacheAndNetwork`.
But no, this is not expected behaviour, and this test: https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L251 should ensure it does not happen.
But if you watch the cache anyway, you don't need the request controller at all - just execute (`client.request()`) the pull to refresh request with `.NetworkOnly`; this will update the cache, and the cache will refresh the original request that uses `.CacheAndNetwork`.
It also doesn't work. I had to change `requestId` to another one and then it worked (either with use of `requestController` or `request`) 🤔 So it looks like the problem is that both request and modifiedRequest have the same `requestId`.
But no, this is not expected behaviour, and this test: master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart#L251 should ensure it does not happen.
But that test doesn't use a no-cache fetchPolicy 🤔
So it looks like the problem is that both request and modifiedRequest have the same requestId
But unfortunately it won't solve the issue with fetchMore, because it has to have the same requestId in order to merge results into a single list, if I am not wrong.
Yes, but it also does not even use a cache, it just tests the request controller typed link. It's just an assertion that the previous stream continues to be listened to.
Similar tests with a full client are below (e.g. https://github.com/gql-dart/ferry/blob/master/packages/ferry/test/typed_links/request_controller_typed_link_test.dart?rgh-link-date=2024-01-30T16%3A51%3A24Z#L408), maybe you can craft a reproducible example out of that?
It also doesn't work. I had to change `requestId` to another one and then it worked (either with use of `requestController` or `request`) 🤔 So it looks like the problem is that both request and modifiedRequest have the same `requestId`.
Yes, in that case you can't use the same requestId.
But unfortunately it won't solve the issue with fetchMore, because it has to have the same requestId in order to merge results into a single list, if I am not wrong.
If you want caching + pagination to work together, I would recommend just merging all the pages in the cache, and removing the arguments for paging from the cache.
like here: https://ferrygraphql.com/docs/cache-configuration#the-fields-property
Alternatively/additionally, you can also add the `read` function for the type policy and read all the pages there (or just the ones that you want).
If you want caching + pagination to work together, I would recommend just merging all the pages in the cache, and removing the arguments for paging from the cache. like here: ferrygraphql.com/docs/cache-configuration#the-fields-property
I've already tried that some time ago, but I had some problems. I also use the `updateResult` field of OperationRequest and merge results there, and because of that I couldn't find a way to properly merge results. But I will try it tomorrow once again.
maybe you can craft a reproducible example out of that?
I have a list of things that I would try to do regarding ferry, but I need to finish migrating to ferry asap and I need to find some free time for improvements/fixes for ferry in the future 🙏
Alternatively/additionally, you can also add the `read` function for the type policy and read all the pages there (or just the ones that you want).
To be honest I am a little bit confused and don't remember right now when to use `read`, `merge` and `updateResult`, but I will take a look 🤞
I think what you want is: to set `keyArgs` for the type policy to an empty array. This means the `Posts` request will be cached like this:
```
{
  "Query": {
    "posts": [{...}, {...}] // just a single list of posts
  }
}
```
instead of
```
{
  "Query": {
    "posts(offset: 0, limit: 3)": [{...}, {...}],
    "posts(offset: 3, limit: 3)": [{...}, {...}]
  }
}
```
So that there is just a single entry for the query.
The default behaviour of the cache is that when a new result for the same query comes in, the old items are replaced.
We now want to change that, so that the previous and next results are merged. You can achieve that by using the merge function as in the docs.
Now, if you watch the cache for the posts query, it will always return all cached results.
You can fetch more or refetch by just executing more requests, no need for any requestId or fetchMore params, or .requestController.add().
And mutations that update entries (like your "like" example) would continue to just work.
You probably don't need to implement the `read` function in the type policy.
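Putting that suggestion together, a minimal sketch of such a cache configuration could look like the following. Treat it as an assumption-laden sketch rather than the verbatim ferry API: the three-argument `merge` callback signature and the list casts are inferred from the snippets in this thread and the linked cache-configuration docs.

```dart
// Sketch (assumed API shape): one cache entry for `posts`,
// with each incoming page appended to the already cached ones.
final cache = Cache(
  typePolicies: {
    'Query': TypePolicy(
      fields: {
        'posts': FieldPolicy(
          // Empty keyArgs: paging args (offset/limit) are excluded
          // from the cache key, so all pages share a single entry.
          keyArgs: [],
          merge: (existing, incoming, options) => [
            ...((existing as List?) ?? []),
            ...((incoming as List?) ?? []),
          ],
        ),
      },
    ),
  },
);
```

With something like this in place, a stream watching the posts query keeps emitting the full merged list as new pages arrive.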
Hmm, right now I have:

```dart
{
  'posts': FieldPolicy(
    keyArgs: ['userId'],
  ),
}
```
I will try your recommendation tomorrow, but as I understand it I have to modify the merge function from the docs, because it assumes that `existing` and `incoming` are lists, but in my case they are maps, and inside these maps I have a list with posts. I will let you know tomorrow.
And btw many thanks for your help 🙏
Yes, if they are maps, you should use a LinkedHashMap instead of the LinkedHashSet.
btw, here are tests that are close to your scenario: https://github.com/gql-dart/ferry/blob/7f6a63cb350a69ae838f5e3a86819ed20d82e20c/packages/ferry/test/field_policy/field_policy_test.dart#L11
To set keyArgs for the type policy to an empty array. This means, the Posts request will be cached like this: ... So that there is just a single entry for the query.
Yes, that is basically what I want to achieve, and I already had that, with the small difference that in `keyArgs` I defined the `userId` param, because I want to differentiate caches for different users' timelines.
And should these typePolicies be put inside the Cache class? I see the Client class also has a typePolicies field.
But I don't get the following.
Now, if you watch the cache for the posts query, it will always return all cached results.
Do I have to do something to watch the cache?
You can fetch more or refetch by just executing more requests, no need for any requestId or fetchMore params, or .requestController.add().
So the first call of my request works fine, it executes the merge function. Unfortunately, after calling the same query once again (in order to do pull to refresh) it doesn't make any request (I cannot spot it in devtools). Ok... And now I noticed that the difference between the original request and the pull to refresh one is that the pull to refresh one has NetworkOnly and the original one has CacheAndNetwork. After setting both to CacheAndNetwork, pull to refresh worked fine. But I would prefer to have NetworkOnly for the pull to refresh 🤔
Similar story with fetchMore, but in that case I have to set the `after` parameter (indicating the page to fetch), and because of that difference from the original request, the request is not executed. In your test you modify the `offset` var and everything seems to work fine 🤔
And what about the `updateResult` field of a request? Do I need it? 🤔
Do I have to do something to watch the cache?
watching a stream with .CacheFirst or .CacheAndNetwork is enough.
you can also watch the cache directly using client.cache.watchQuery();
And should these typePolicies be put inside the Cache class?
yes, you would add the typepolicies map in the constructor of the cache.
Ok... And now I noticed that difference between original request and pull to refresh query is that pull to refresh one has NetworkOnly and original one has CacheAndNetwork. After setting both to CacheAndNetwork pull to refresh worked fine. But I would prefer to have NetworkOnly in the pull to refresh 🤔
how do you execute the pull to refresh and the fetchmore request?
And what about updateResult field of a request? Do I need it? 🤔
No, if you do the merging in the cache and watch the cache, you don't need requestId and updateResult.
watching a stream with .CacheFirst or .CacheAndNetwork is enough. yes, you would add the typepolicies map in the constructor of the cache.
Ok so I have both of these 👌
how do you execute the pull to refresh and the fetchmore request?
Right now I have:

```dart
GuserTimelinePostsReq(
  (b) => b
    ..fetchPolicy = FetchPolicy.CacheAndNetwork
    // ..requestId = '${GuserTimelinePostsReq}_$userId'
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)

// refetch
Client.request(originalRequest) // changing FetchPolicy causes that request is not made

// fetchMore
Client.request(originalRequest.rebuild((p0) => p0.vars.after = 'some string'))
```
Do you also listen to the result stream / convert it to a future using `.first`? Otherwise it may not get executed.
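As a minimal sketch of that point (`refreshRequest` is a hypothetical request name; the assumption is ferry's lazy request streams, where `DataSource.Link` marks responses that came from the network):

```dart
// Request streams are lazy: nothing is executed until the stream
// is listened to. Awaiting the first network response forces the
// refetch/fetchMore request to actually run.
final networkResponse = await client
    .request(refreshRequest)
    .firstWhere((response) => response.dataSource == DataSource.Link);
```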
Ahhh, you are right, there was even an `executeOnListen` parameter as I remember.
Now it makes requests for fetchMore and refetch 🎉 , but I have still one issue.
I don't want to cache all the data. I would like to always cache only the first page, so e.g. after a refetch I will see only the first page and the rest will be removed. I think that was the reason why I used the `updateResult` parameter of Request instead of the `merge` method of Cache 🤔
The reason for that is that e.g. after creating a new post we make a refetch in order to refresh the whole timeline. And because of the current merge method, that new post will be placed at the end of the list, and that's correct behaviour only for the fetchMore action; in the case of creating a new post it should appear at the beginning of the list, because it's a newly created post.
I don't want to cache all the data. I would like to always cache only the first page, so e.g. after a refetch I will see only the first page and the rest will be removed. I think that was the reason why I used the `updateResult` parameter of Request instead of the `merge` method of Cache 🤔
You mean something like?

```dart
if (options.variables["after"] == null) {
  // was refetch, clear existing from cache
  return incoming;
}
```
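Folded into a full `merge` callback, that check might look roughly like this (the `options.variables` access and the three-argument callback signature are assumptions based on the snippets in this thread):

```dart
// Sketch: replace the cached list on refetch, append on fetchMore.
merge: (existing, incoming, options) {
  if (options.variables['after'] == null) {
    // No cursor: this was a refetch / first page, drop the old pages.
    return incoming;
  }
  // A cursor is present: fetchMore, so append the new page.
  return [...((existing as List?) ?? []), ...((incoming as List?) ?? [])];
},
```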
The reason for that is that e.g. after creating a new post we make a refetch in order to refresh the whole timeline. And because of the current merge method, that new post will be placed at the end of the list, and that's correct behaviour only for the fetchMore action; in the case of creating a new post it should appear at the beginning of the list, because it's a newly created post.
You could also append the post automatically to the cache after the create post mutation using an updateCacheHandler on your mutation.
See https://ferrygraphql.com/docs/mutations/#updating-the-cache this example is very similar to what you want
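A rough sketch of such a handler, modeled on the linked mutation docs. Every schema-specific name here (`GcreatePostData`, `GcreatePostVars`, `createPost`, `GuserTimelinePostsReq`, `mapToTimelinePost`) is a hypothetical placeholder for your generated types, not the real API surface:

```dart
// Sketch: prepend a newly created post to the cached timeline query.
// All generated type/field names below are hypothetical placeholders.
void prependNewPost(
  CacheProxy proxy,
  OperationResponse<GcreatePostData, GcreatePostVars> response,
) {
  final newPost = response.data?.createPost;
  if (newPost == null) return;

  // Re-read the cached timeline for this user.
  final req = GuserTimelinePostsReq((b) => b..vars.userId = newPost.userId);
  final cached = proxy.readQuery(req);
  if (cached == null) return;

  // Write it back with the new post at the top of the list.
  proxy.writeQuery(
    req,
    cached.rebuild(
      // Map the mutation's post to the query's post type as needed.
      (b) => b.posts.insert(0, mapToTimelinePost(newPost)),
    ),
  );
}
```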
Or do you have a field in the object indicating the sort order? You could use this to do the sorting in the cache (e.g. using the `read` callback in the field policy).
Yeah, I was thinking about it, but I prefer to avoid sorting, because I know it can cause some bugs, and I have many different lists with different types of items, so I would have to create a separate merge function for every query 😞
You mean something like?

```dart
if (options.vars["after"] == null) {
  // was refetch, clear existing from cache
  return incoming;
}
```
Yeah, exactly... I am dumb 🤦‍♂️ (or exhausted). Thank you!
So the last problem I see is that I had to remove the `requestId` field from the request, because adding it causes changes in the cache not to propagate. And unfortunately, we use `requestId` to identify the request we want to refetch after some action, e.g. after creating a new post we call `RefetchStore.refetchQueries(['requestId'])`. Maybe it won't be a big issue and I will find a workaround, but do you have an idea why I can't set `requestId` in that case? 🤔
So the last problem I see is that I had to remove the `requestId` field from the request, because adding it causes changes in the cache not to propagate. And unfortunately, we use `requestId` to identify the request we want to refetch after some action, e.g. after creating a new post we call `RefetchStore.refetchQueries(['requestId'])`. Maybe it won't be a big issue and I will find a workaround, but do you have an idea why I can't set `requestId` in that case? 🤔
Maybe you can show a reproduction sample. I don't really understand what exactly you are doing, where you set request ids and how you execute the requests.
I set `requestId` on originalRequest, then I modify the original request for refetch and fetchMore:
```dart
GuserTimelinePostsReq(
  (b) => b
    ..requestId = '${GuserTimelinePostsReq}_$userId' // adding that doesn't propagate changes from cache to the stream related to that request AFTER a refetch or fetchMore action
    ..vars.limit = standardFetchCount
    ..vars.userId = userId,
)

// refetch
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly))

// fetchMore
Client.request(originalRequest.rebuild((b) => b
  ..fetchPolicy = FetchPolicy.NetworkOnly
  ..vars.after = 'some string'))
```
But sure, I will try to prepare a failing test or a reproduction sample, though probably not before next week 😞
Ah yes, I think this is a real issue.
The RequestControllerTypedLink switchMaps to the new request stream, which does not watch the cache:
Okay, got it. So for now. MANY THANKS @knaeckeKami ❤️
Hi @knaeckeKami, I have another question 😅 What if I have such a query? It will be stored under the `Post` key, right?
```dart
'post': FieldPolicy(
  keyArgs: ['postId', 'sortBy'],
  merge: _postCommentsMergeFunction,
),
```
and then in the merge function it looks like this:
And as I understand it, it doesn't differentiate Post objects, so for every Post I always get the same `existing` object. Am I right? 🤔
Ohh, I probably see the problem. I should use `id` instead of `postId`. My mistake.
I don't fully understand the question.
You can inspect the cache by getting all the keys of the `Store` of the cache and then looking up the keys to understand how the cache stores the graph data.
The `after` key arg won't work, since the `post` field of the Query does not have an `after` arg. See the docs for how to use nested keys.
I am closing the issue, as I solved it by listening for changes when using `FetchPolicy.NetworkOnly`. More about it here in the PR https://github.com/gql-dart/ferry/pull/579.
Hey! 👋
I have such a case. I have a request, let's say `GpostsReq`. I use that request with the `CacheAndNetwork` FetchPolicy. But for the pull to refresh action I put into `Client.requestController.add(modifiedRequest)` a modified request which has the `NetworkOnly` FetchPolicy set, so I display results only from the network. And let's check the following steps: … `GpostsReq` … so I can see that my post has been liked … `NetworkOnly` FetchPolicy.
The same story with the fetchMore action, for which I have set the `NoCache` FetchPolicy.
Am I missing something, or is that expected behaviour? 😞