reduxjs / redux-toolkit

The official, opinionated, batteries-included toolset for efficient Redux development
https://redux-toolkit.js.org
MIT License
10.73k stars 1.17k forks

[question] - invalidate a result in cache when using "merge" #3733

Open koukalp opened 1 year ago

koukalp commented 1 year ago

Description

I am trying to understand the right approach. Can you please give me some advice?

I am using a query endpoint with the merge feature to "paginate" and keep all the results rendered on the page (clicking "next page" displays more results under those already displayed). I use the merge functionality to group all results together (along with forceRefetch and serializeQueryArgs), which works. Now I want to be able to "remove" one of the results. Invalidating the cache (using invalidateTags, or even calling api.util.invalidateTags) doesn't have the desired effect: while the query IS triggered, the cache is NOT updated and the deleted result is still displayed.

In https://redux-toolkit.js.org/rtk-query/api/createApi#merge, I read: "no automatic structural sharing will be applied - it's up to you to update the cache appropriately"

What's the recommended way of achieving it? Reading the docs, I don't see a way to achieve that. I've read some SO answers about using entityAdapter, or using prevPage, currentPage and nextPage, but it sounds like too complicated an approach for the simple functionality I am trying to achieve: delete a single result from the cache, identified by its ID.

My expectation (sort of what I thought would happen): I assumed the result would be invalidated the moment I define

invalidatesTags: ({ id }) => [{ type: "Result", id }] // just illustrates what I mean

It seems that when I opt for using merge, this feature gets skipped.

narasimhajupally commented 1 year ago

I'm facing the same problem. I want to delete the list of items I have merged so far when the tag is invalidated, so that a fresh list is made.

narasimhajupally commented 1 year ago

I am dealing with a recommendation endpoint (no query args) that gives a different set of results every time it is called, so I used the merge function to append the results from every call. But when I invalidate the tag for this endpoint, the request is made and the results are added to the existing list rather than replacing it.

Can the merge function have the information that the tag was invalidated? Then we could put a check there.

narasimhajupally commented 1 year ago

Another solution is to provide an option similar to keepUnusedDataFor, like deleteDataOnTagInvalidation, which if true deletes the current data and replaces it with the API response.

Placeblock commented 11 months ago

Stumbled across the same problem today... Is there any solution for this?

It would be really cool if you could modify the request that is sent when tags are invalidated. In my example I have a list which I fill with merge. It would be cool if the request made on tag invalidation did not use the args of the last request (only one chunk of my list) but args to load everything that was loaded before. However, two things are needed for this: access to the current state, and information on whether the request was made because of tag invalidation...

haleyngonadi commented 10 months ago

Hi @koukalp Did you ever figure this one out?

kylekirkby commented 10 months ago

My team is also running into this issue right now.

Is the only option right now to manage the state independently from the RTK Query cache entries?

markerikson commented 10 months ago

I'm not sure I understand the problem that's being described. merge is a callback for updating the data inside of a given cache entry, by letting you combine the old data + the response data in some way. It doesn't have anything to do with "invalidating a tag", which would trigger a refetch for a given entry.

Can folks give more details on what exactly you're trying to accomplish?

haleyngonadi commented 10 months ago

Hi @markerikson, when using RTK Query with paginated results, we would like to reset the cached data, but invalidating isn't working and neither is refetch. On my end, I'm using it with React Native's FlatList, and when I pull down to refetch, nothing happens (it doesn't refetch).

markerikson commented 10 months ago

@haleyngonadi I'm not clear on what you expect to have happen, exactly.

Are you expecting a full refetch to happen? Are you getting back data with a request, and the question is how to replace the data that was already there using the merge callback?

haleyngonadi commented 10 months ago

@markerikson A full refetch to happen. Pretty much go back to what it was when the user originally landed on the view.

Now that you mention it, I am getting back data with the request, but it doesn't replace the cache. So the question indeed is how to replace the data already there!

markerikson commented 10 months ago

@haleyngonadi The merge callback is effectively a standard Immer produce function, which means that you can return a complete value instead of mutating the draft argument. So, you should be able to literally return responseData inside of merge (instead of doing something like cachedData.push(newStuff)), and it should replace the existing data in that cache entry.
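As a sketch of the two behaviors described here (the `Item` type is made up, and these are plain functions shaped like an endpoint's `merge` option, which RTK Query wraps in an Immer `produce`):

```typescript
// Made-up item shape, purely for illustration.
type Item = { id: number; title: string };

// "Replace": returning a value from an Immer-style recipe swaps out the
// whole cache entry, discarding whatever was cached before.
export const mergeReplace = (
  _cachedData: Item[],
  responseData: Item[],
): Item[] => {
  return responseData;
};

// "Append": mutating the draft keeps the old entries and adds the new ones.
export const mergeAppend = (
  cachedData: Item[],
  responseData: Item[],
): void => {
  cachedData.push(...responseData);
};
```

Either shape is valid as the `merge` option, since Immer recipes may either mutate the draft or return a replacement value, but not both in the same call.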

haleyngonadi commented 10 months ago

@markerikson Ahh, I see. Thank you. Would that still apply if I use createEntityAdapter?

merge: (cache, response) => {
  postAdapter.addMany(cache, postSelectors.selectAll(response));
},

markerikson commented 10 months ago

@haleyngonadi : yes, although there are two things that would need to change in that snippet:

haleyngonadi commented 10 months ago

Thank you for taking time to explain, @markerikson! I call that in transformResponse, is that the wrong way to do it?

getPosts: builder.query({
      query: queryArgs => ({
        url: 'v1/feed?stream=profile',
        method: 'GET',
      }),
      providesTags: (result, error, page) =>
        result
          ? [
              ...result.ids.map(id => ({ type: 'Feed' as const, id })),
              { type: 'Feed', id: 'PARTIAL-LIST' },
            ]
          : [{ type: 'Feed', id: 'PARTIAL-LIST' }],
      serializeQueryArgs: ({ queryArgs }) => {
        const cacheKey = `v1/feed?stream=profile`;
        return cacheKey;
      },

      transformResponse(response: FeedPostResponse) {
        const { data: items, meta } = response;
        return postAdapter.addMany(
          postAdapter.getInitialState({
            next_cursor: meta.cursor.next,
          }),
          items,
        );
      },
      forceRefetch({ currentArg, previousArg }) {
        // Refetch when the cursor changes (which means we are on a new page).
        return currentArg?.cursor !== previousArg?.cursor;
      },

      merge: (cache, response) => {
        postAdapter.addMany(cache, postSelectors.selectAll(response));
        cache.next_cursor = response.next_cursor;
      },
      // 30 minutes in seconds
      keepUnusedDataFor: 1800,
    })
markerikson commented 10 months ago

There are a couple of tricky bits here:

Going back to the question of "how do I replace the existing data?": for the specific case you've got here, you'd want to use postAdapter.setAll(cache, response). The issue is how that logic in merge knows whether it should do the "replace" vs the "update" handling. I don't have an immediate answer for that, for multiple reasons: I don't know if we currently expose a way to know that inside merge (ie, some argument that indicates "this was a refetch() response" or something); I don't know your specific codebase's needs; and also I'm replying to this while doing real day-job work and only have limited brainpower available atm :)

haleyngonadi commented 10 months ago

@markerikson Haha I completely understand. Thank you for the insight you've already provided :)

koukalp commented 10 months ago

@haleyngonadi - for me, I needed to delegate this to Redux, where I can easily create reducers that either add items ("onNextPageLoaded") or remove a single item ("onDeleteItem") from the collection.

It just feels like a missed opportunity, because 1) I am basically duplicating the RTK Query cache, and 2) I am not getting any benefits of the RTK Query cache while adding a lot of manual "labor".

@markerikson - it feels the discussion went a bit off. My expectations are:

  1. I use rtk-query with caching and merge function to implement infinite scroll to view a list of items
  2. An item can be deleted from the list => in that case I want the deleted item to disappear from the cache.
  3. My issue is, invalidateTags stops working the moment I use merge, and I don't see a way to manipulate the cache to remove the (deleted) item

As mentioned to @haleyngonadi above, the solution using Redux and a few reducers is quite straightforward; however, then I pretty much opt out of the RTK Query cache, which feels like a missed opportunity.
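One way to keep the RTK Query cache and still remove a single item is to patch the cache entry directly via `api.util.updateQueryData` from the delete mutation's `onQueryStarted`, rather than relying on tag invalidation. A sketch, where the endpoint names `getResults`/`deleteResult`, the `Result` shape, and `cacheKeyArg` are all hypothetical placeholders; the pure helper is the only directly runnable part:

```typescript
// Placeholder item shape for illustration.
type Result = { id: string };

// Drop one item from a merged list in place; safe to call on an
// Immer draft such as the one updateQueryData provides.
export function removeById(list: Result[], id: string): void {
  const index = list.findIndex((item) => item.id === id);
  if (index !== -1) list.splice(index, 1);
}

// Where it would plug in (names are placeholders for your own slice;
// `cacheKeyArg` must match the serialized arg of the cached query):
//
//   deleteResult: builder.mutation<void, { id: string }>({
//     query: ({ id }) => ({ url: `results/${id}`, method: "DELETE" }),
//     async onQueryStarted({ id }, { dispatch, queryFulfilled }) {
//       await queryFulfilled; // pessimistic: only touch the cache on success
//       dispatch(
//         api.util.updateQueryData("getResults", cacheKeyArg, (draft) => {
//           removeById(draft, id);
//         }),
//       );
//     },
//   }),
```

This keeps the merged cache entry authoritative without triggering a refetch of the last-fetched page.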

markerikson commented 10 months ago

@koukalp yeah, there's a couple different sub-threads going on in this issue, which is also why I'm confused myself :)

My first note is that the whole "use merge as an infinite query implementation" thing is, frankly, a hack. And I was the one who came up with it, and I knew it was a hack at the time, and it's still a hack, and I don't like it.

I want us to ship a "real" infinite query implementation at some point. Ideally this year. I've got another thread that's open to ask folks like you about specific use cases you would need it to handle, and I'd really appreciate it if you could leave some related thoughts over in that thread:

That said, it'll also be at least a couple months before we finally have time to comb through the list of open RTKQ feature requests, do some prioritization and planning, and then actually start to work on them. So realistically I wouldn't expect us to be able to ship something like that until at least the end of the year, and the merge approach is what we've got to live with for now.

For your specific case: I don't have enough context to understand what the actual technical issue is with "deleting an item" and "tag invalidation not working". Could you provide an actual repo or CodeSandbox that shows this behavior? Right now there's some comments in this thread that feel kind of vague describing possible behavior in apps, and I don't know what folks are actually doing as far as configuration, or what specific behavior you're seeing at runtime.

Also, what about tag invalidation is not working as expected?

koukalp commented 10 months ago

@markerikson - I love the answer :D

Essentially it answers my question - what I am doing is not really recommended / supported and therefore no surprise it doesn't work like my imagination would like it to work 👍

I will try to prepare a representative case and that should answer all your questions.

Thanks!

anujmv commented 9 months ago

import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";

// Define a service using a base URL and expected endpoints
export const pokemonApi = createApi({
  reducerPath: "pokemonApi",
  tagTypes: ["Pokemon"],
  baseQuery: fetchBaseQuery({ baseUrl: "https://pokeapi.co/api/v2/" }),
  endpoints: (builder) => ({
    getPokemonByName: builder.query({
      query: (page) => `pokemon?offset=${page * 20}&limit=20`,
      // Only have one cache entry because the arg always maps to one string
      serializeQueryArgs: ({ endpointName }) => {
        return endpointName;
      },
      // Always merge incoming data to the cache entry
      merge: (currentCache, newItems, { arg }) => {
        console.log(arg);
        if (arg === 0) {
          return newItems;
        }
        currentCache.results.push(...newItems.results);
      },

      // Refetch when the page arg changes
      forceRefetch({ currentArg, previousArg }) {
        return currentArg !== previousArg;
      },
      providesTags: ["Pokemon"],
    }),
  }),
});

// Export hooks for usage in functional components, which are
// auto-generated based on the defined endpoints
export const { useGetPokemonByNameQuery } = pokemonApi;

In React, I manually set page to 0 when needed to clear the cache.

luisfelipediaz commented 9 months ago

I'm having the same issue. In my case, I also have the page size, a query filter, and other filters, so when I change any filter, I reset the current page to 1 and re-fetch the data, but I need to show the "skeleton" while the data for the completely new filters is being retrieved.

export const usersServices = api.injectEndpoints({
  endpoints: build => ({
    getUsers: build.query({
      query: ({ companyId, page }) => ({
        url: EndPoints.USERS.GET_USERS,
        method: 'POST',
        body: buildQueryBody(companyId, page),
      }),
      providesTags: ['Users'],
      serializeQueryArgs: ({ endpointName, queryArgs }) => {
        return `${endpointName}("${queryArgs.companyId}", ${queryArgs.level}, "${queryArgs.searchQuery}")`;
      },
      forceRefetch({ currentArg, previousArg }) {
        if (!previousArg) return true;
        return (
          currentArg.companyId !== previousArg.companyId ||
          currentArg.page > previousArg.page ||
          currentArg.level !== previousArg.level ||
          currentArg.searchQuery !== previousArg.searchQuery
        );
      },
      merge: (currentData, responseData) => {
        currentData.pagination = responseData.pagination;
        currentData.users.push(...responseData.users);
      },
    }),
  }),
});

My workaround:

const { data, isLoading, isFetching } = useGetUsersQuery({
    companyId,
    page,
    ...filters,
  });

// Here is my approach
const users = isFetching && page === 1 ? null : data;

So, when isFetching is true and the page is 1, I set users to null to simulate that the cache is empty.

Is there a way to avoid this workaround?

gimi-anders commented 8 months ago

@markerikson I will give you a concrete example of when deleting an item will cause a problem with invalidation, when using infinite scroll.

Let's say there are 15 items in total on the server. I fetch 10 items per page, and I now fetch page 1 and then scroll down to fetch page 2, merging the results from page 2 with the existing ones. So the last query url called was: https://myserver.com/api/items?page=2

I now use a mutation endpoint to delete one item that was fetched on page 1. This mutation invalidates the tags that were provided by the query above. The problem is that the query will now re-fetch page 2, since that was the last page fetched. And this page has not changed (or rather, it has changed since it contains one less item, but the actual removed item was not on it). Since we are using a merged query where the page number is not part of the serialized cache key, there is no way for the invalidation to know which page to re-fetch. Basically, all pages would have to be re-fetched.

It gets even worse if we started with 11 items on the server. In that case, when the deletion is done the invalidation will cause refetching of page 2 which in turn will return a 404 error from the server since that page does not even exist anymore.

To resolve this issue we cannot use invalidation; instead, we have to manually update the cache with optimistic or pessimistic updates.
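For the optimistic flavor of that manual update, a sketch (the `api`/`getItems` names and the `Item` shape are placeholders; the pure filtering helper is the only directly runnable part, and `patch.undo()` is the rollback handle that `api.util.updateQueryData` dispatches return):

```typescript
// Placeholder item shape for illustration.
type Item = { id: number };

// Pure helper: the list minus one item, by id.
export function withoutItem(list: Item[], id: number): Item[] {
  return list.filter((item) => item.id !== id);
}

// Optimistic wiring (commented out because `api` and `getItems` are
// placeholders for your own slice):
//
//   async onQueryStarted({ id }, { dispatch, queryFulfilled }) {
//     const patch = dispatch(
//       api.util.updateQueryData("getItems", undefined, (draft) => {
//         draft.items = withoutItem(draft.items, id);
//       }),
//     );
//     try {
//       await queryFulfilled;
//     } catch {
//       patch.undo(); // roll the cache back if the server delete failed
//     }
//   }
```

The optimistic version removes the item immediately and only rolls back on failure, which also sidesteps the stale-page refetch (and the possible 404) described above.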

ievgen-klymenko-uvoteam commented 8 months ago

Yes, we have almost the same issue with infinite scroll, as @gimi-anders describes.

Automatic tag refetch always calls the query with the last arguments provided, which of course ends up in the same merge().

In our case, refetching all pages and erasing the cache would be a solution. If merge()/query() calls received some indication that "this particular call was a refetch", this would be possible to do, I guess.

Manual cache updating is hard to implement here since it would need to be added in too many mutations. For now, we will unfortunately have to invalidate some upper-tree (wrapping) queries to erase this cache completely.

But at least it works in some way :) If @markerikson says it was a hack solution from the start, why would it work better than one? :)

I wish @markerikson and the team the best of luck on this issue, since it sounds like it's not an easy one. Infinite scroll is used all around lately. Can't wait to see what solution RTK Query implements to fully support it!

haleyngonadi commented 8 months ago

It seems a lot of people are having issues with this. I have figured out how to make it work with the entity adapter, and it works great!

export const feedAdapter = createEntityAdapter<FeedItemProps, number | string>({
  selectId: item => item.id,
  sortComparer: (a, b) => b.created.localeCompare(a.created),
});

export const feedSelectors = feedAdapter.getSelectors();

export const feedApi = api.injectEndpoints({
  endpoints: builder => ({
    getFeedPosts: builder.query<EntityState<FeedItemProps, string> & {
        next_cursor?: string;
      },
      FeedPostRequest
    >({
      query: params => ({
        url: '/url-to-api',
        method: 'GET',
        params,
      }),
      providesTags: (result, _error, arg) =>
        result
          ? [
              ...feedSelectors.selectAll(result).map(({ id }) => ({
                type: 'Feed' as const,
                id,
              })),
              'Feed',
            ]
          : ['Feed'],
      serializeQueryArgs: ({ endpointName }) => {
        return endpointName;
      },

      transformResponse(response: FeedPostResponse) {
        const { data: items, meta } = response;
        return feedAdapter.addMany(
          feedAdapter.getInitialState({
            next_cursor: meta.cursor.next,
          }),
          items,
        );
      },
      forceRefetch({ currentArg, previousArg }) {
        return currentArg?.cursor !== previousArg?.cursor;
      },

      merge: (currentCache, newItems, otherArgs) => {
        const cacheIDs = feedSelectors.selectIds(currentCache);
        const newItemsIDs = feedSelectors.selectIds(newItems);

        const updateIDs = cacheIDs.filter(x => newItemsIDs.includes(x));
        const newIDs = newItemsIDs.filter(x => !cacheIDs.includes(x));
        const deletedIDs = cacheIDs.filter(x => !newItemsIDs.includes(x));

        const updateItems = updateIDs.map(id => ({
          id,
          changes: feedSelectors.selectById(newItems, id) ?? {},
        }));

        const brandNewItems = newIDs
          .map(id => feedSelectors.selectById(newItems, id))
          .reduce(
            (prev, value) => ({ ...prev, [value?.id ?? '-1']: value }),
            {},
          );

        feedAdapter.updateMany(currentCache, updateItems);

        if (otherArgs.arg?.last_seen_post_created_at) {
          feedAdapter.addMany(currentCache, brandNewItems);
        } else {
          const oldState = feedSelectors.selectAll(currentCache);
          feedAdapter.removeAll(currentCache);
          feedAdapter.addMany(currentCache, brandNewItems);
          // re-add the previous posts, minus any that exist in deletedIDs
          const updatedState = oldState.filter(
            post => !deletedIDs.includes(post.id),
          );
          feedAdapter.addMany(currentCache, updatedState);
        }

        currentCache.next_cursor = newItems.next_cursor;
      },
      // 30 minutes in seconds
      keepUnusedDataFor: 1800,
    }),
  }),
});

vancerbtw commented 6 months ago

@haleyngonadi can you please provide your use of the query in the UI code? I am a bit confused by your use of 'otherArgs.arg?.last_seen_post_created_at' to determine whether the merge is an invalidation or a true merge. Figuring out how to handle the merge has been my biggest struggle in getting invalidateTags to work.
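One common heuristic (not an official RTK Query signal; the `FeedArg`/`Feed` types here are made up) is to key off the query arg that `merge` receives as `otherArgs.arg`: a request without a cursor is a first-page load, so merge treats it as a reset, while a request with a cursor is a "load more" and appends. As a pure function shaped like the body of such a merge callback:

```typescript
// Made-up shapes for a cursor-paginated feed.
type FeedArg = { cursor?: string };
type Feed = { items: { id: string }[]; next_cursor?: string };

// Decide replace-vs-append from the request arg.
export function mergeFeed(cache: Feed, incoming: Feed, arg: FeedArg): Feed {
  if (!arg.cursor) {
    // No cursor: a fresh first-page load, so discard previously merged pages.
    return incoming;
  }
  // Cursor present: a "load more" request, so append and advance the cursor.
  return {
    items: [...cache.items, ...incoming.items],
    next_cursor: incoming.next_cursor,
  };
}
```

The same idea works with page numbers (`arg === 0` in the Pokemon snippet earlier in the thread) or with a timestamp like `last_seen_post_created_at`: any arg field that is only present on follow-up requests can serve as the "this is an append" marker.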

MrSachin7 commented 5 months ago

I am currently getting this issue as well.

So, the backend server paginates using the field 'created_at', so the newest items come on the first page. When I create a new item and invalidate the cache, it re-runs the last-sent query, thus refetching with the page number last sent.

If the last-sent page number is not 1, the added item will not show until I hard refresh the page, since the added item is on page 1 on the backend, which is never refetched by RTK Query.

Does anyone have a quick workaround for this? I would even like a way to invalidate all queries regardless of the pageNumber.

ievgen-klymenko-uvoteam commented 5 months ago

I couldn't go with the entity adapter solution that @haleyngonadi provided above, for reasons I can't remember.

My solution was straightforward, although not very intuitive. But it required changes on the backend (the API query data results).

Here is api slice:

const { useGetPostsQuery } = query('getPosts', {
  query: ({ page }) =>
    API_POSTS_URL + (page > 1 ? `?page=${page}` : ''),

  serializeQueryArgs: ({ endpointName }) => endpointName,

  merge: (currentCache, newItems) => {
    const newPage = newItems.data?.page || 1
    if (currentCache.data.page === newPage) {
      return
    }

    const newPosts = newItems.data?.posts || []
    const hasMore = newItems.data?.hasMore || false

    currentCache.data.posts?.push(...newPosts)
    currentCache.data.page = newPage
    currentCache.data.hasMore = hasMore
  },

  forceRefetch: ({ currentArg, previousArg }) =>
    currentArg?.page !== previousArg?.page,
})

As you can see, we manually compare and save the page and hasMore "cursor meta info" right there in currentCache (where the posts data is normally saved), and the backend sends us this metadata along with every page query.

Then, in the component, we control page number manually using usual state:

const Posts = () => {
  const [page, setPage] = useState(1)

  const { posts, hasMore, result } = useGetPostsQuery({ page })

  const loadMorePosts = () => setPage((prevPage) => prevPage + 1)

  return (
    <InfiniteScroll
      dataLength={posts.length}
      next={loadMorePosts}
      hasMore={hasMore}
    >
      {posts.map((post) => (
        // render each post; `PostCard` is illustrative
        <PostCard key={post.id} {...post} />
      ))}
    </InfiniteScroll>
  )
}

@MrSachin7 hope it helps!

ensconced commented 5 months ago

I am also struggling with this. I have one paginated endpoint which is managed via merge/serializeQueryArgs/forceRefetch, and another mutation endpoint. When the mutation request is made, I want the cache for the paginated endpoint to be completely wiped out and the first page to be refetched.

This point from your earlier post @markerikson is precisely the issue:

The issue is how does that logic in merge know if it should do the "replace" vs the "update" handling. I don't have an immediate answer for that, for multiple reasons: I don't know if we do expose a way to know that into merge atm (ie, some argument that indicates "this was a refetch() response" or something);