reduxjs / redux-toolkit


Overlapping cache for mutations with same `fixedCacheKey` #4503

Open rexfordessilfie opened 1 month ago

rexfordessilfie commented 1 month ago

Background

Specifying the same fixedCacheKey across different mutation endpoints leads to their caches overlapping. While the fix is as simple as making sure cache keys are not repeated across mutations, it is a pitfall that is easy to run into.

Should fixedCacheKey have this behavior, or should it result in a mutation cache entry specific to the endpoint it is provided for?

Example

export function useTransaction(transactionId: string) {
  const [createQuote, quoteResult] =
    api.endpoints.createQuote.useMutation({
      fixedCacheKey: transactionId
    })

  const [initiateTransaction, transactionResult] =
    chipperAirtimeApi.endpoints.airtimePurchase.useMutation({
      fixedCacheKey: transactionId
    })

   // BUG: transactionResult == quoteResult after either `createQuote` or `initiateTransaction` is called,
   // but ideally each should be different?

   return {
      createQuote,
      quoteResult,

      initiateTransaction,
      transactionResult
   }
}

Potential Solution

export function useTransaction(transactionId: string) {
  const [createQuote, quoteResult] =
    api.endpoints.createQuote.useMutation({
      fixedCacheKey: transactionId + ':quote'
    })

  const [initiateTransaction, transactionResult] =
    chipperAirtimeApi.endpoints.initiateTransaction.useMutation({
      fixedCacheKey: transactionId + ':initiate'
    })

...

Desired Outcome

The same cache key should be usable across endpoints without their caches overlapping, since they are separate endpoints.

Having diagnosed the problem and found a workaround, this is no longer pressing for me. However, it would be lovely if this worked as expected, assuming that aligns with the library's goals! Thanks, and happy to help resolve it.

RTK Version 1.9.0

PS: I'll be back with a demo/repro to assist, but wanted to drop this here for now.

markerikson commented 1 month ago

That seems to be exactly the behavior you "asked for" by specifying fixedCacheKey. There is one cache entry for that mutation, no matter what the arguments are, therefore there is one shared result at the endpoint level.

There definitely isn't an option to provide this at the hook level, and I'm not sure off the top of my head if that's feasible given how the internals are implemented atm.
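
In other words, with a fixedCacheKey, every usage of that endpoint's hook with that key reads and writes one shared entry, regardless of arguments. Roughly (a sketch, reusing api and createQuote from your example; the component names are just illustrative):

// in one component: trigger the mutation under a shared key
function CheckoutScreen() {
  const [createQuote] = api.endpoints.createQuote.useMutation({
    fixedCacheKey: 'shared-quote',
  })
  // ...call createQuote(...) from an event handler, render UI
  return null
}

// in another component: read the same cache entry for that same endpoint
function QuoteStatus() {
  const [, { data, isLoading }] = api.endpoints.createQuote.useMutation({
    fixedCacheKey: 'shared-quote',
  })
  // ...render data / isLoading
  return null
}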

rexfordessilfie commented 1 month ago

That seems to be exactly the behavior you "asked for" by specifying fixedCacheKey. There is one cache entry for that mutation, no matter what the arguments are, therefore there is one shared result at the endpoint level.

Thanks @markerikson. I completely agree with that. What I experienced, though, appears to have been one cache entry at the api level rather than at the endpoint level.

Endpoint A's mutation cache was getting populated when endpoint B's mutation was fired; the common denominator was the fixedCacheKey. Is this something you have experienced?

Perhaps I can clone the repo and try to add a test covering this case (if not already present) to see if it fails.

markerikson commented 1 month ago

I think this is still a "that is what your code has configured it to do" thing :)

Internally, RTKQ keeps data in the Redux store as lookup tables where the keys are the serialized arguments:

rootState: {
  api: {
    queries: {
      ["getPokemon('pikachu')"]: cacheEntry1,
      ["getPokemon('pikachu')"]: cacheEntry2,
      ["getPokemon('bulbasaur')"]: cacheEntry3,
      ["getTodos(undefined)"]: cacheEntry4,
    },
    mutations: {
      ["addTodo('1')"]: tempCacheEntry1,
      ["addTodo('2')"]: tempCacheEntry2,
      // etc
    }
  }
}

When you use fixedCacheKey, you are enforcing that there is only ever one cache entry for that endpoint.

I think, based on a quick code skim, that we don't attempt to differentiate that by endpoint at all.

So, we don't have keys like "addTodo" + fixedCacheKey and "editTodo" + fixedCacheKey. It's just fixedCacheKey.

And in that case, yes, if you specify the same fixedCacheKey value for multiple endpoints, then those all collide, because they would all be using the exact same fixedCacheKey value, and there can only be one key with that name in state.api.mutations.
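
So with a shared fixedCacheKey, the relevant slice of state ends up looking roughly like this (values are illustrative):

rootState: {
  api: {
    mutations: {
      // every endpoint that was given this same fixedCacheKey reads and writes this single entry
      ["my-fixed-cache-key"]: sharedCacheEntry,
    }
  }
}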

rexfordessilfie commented 1 month ago

I think this is still a "that is what your code has configured it to do" thing :)

And in that case, yes, if you specify the same fixedCacheKey value for multiple endpoints, then those all collide, because they would all be using the exact same fixedCacheKey value, and there can only be one key with that name in state.api.mutations.

Ahh okay, got it! I think it's just a case of expectations not matching the design. So what is happening is that fixedCacheKey is used as the lookup key in the api state directly, with no prefixing added by the library.

Thanks for the detailed explanation. It took me a good bit of time debugging to realize what was happening. I was seeing loading states for one mutation when it wasn't yet fired!

In light of the explanation, we can close this issue. I wonder if this could be added to the documentation as well? Regardless, I'll make a note of it when explaining to my team how to adopt it, as it's something new we are trying.

markerikson commented 1 month ago

Yeah. To be honest we probably could (and should?) prefix it. Not sure why we didn't. I can't immediately think of a good reason why you would ever want separate mutations to collide like this. The actual intent of the fixedCacheKey option is to let you trigger a mutation in one component and read it in another, but the implication is that's the same mutation hook in both places.

So, I'd honestly consider this a bug.

rexfordessilfie commented 1 month ago

The actual intent of the fixedCacheKey option is to let you trigger a mutation in one component and read it in another, but the implication is that's the same mutation hook in both places.

So, I'd honestly consider this a bug.

Yeah, that's my immediate suspicion as well. Hopefully there isn't someone out there counting on the existing collision behavior.

(If there is, we could also provide an option to skip the prefix. But that could be a bit much)

If this kind of collision is desired, it could still be achieved in other ways. One way is a hook/selector that returns a merged view of the two mutation states, as sketched below.
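
For illustration, with distinct keys per endpoint, a merged view could look something like this (the merge rule would be app-specific):

function useTransactionMutationState(transactionId: string) {
  // hypothetical userland merge of two separate mutation cache entries
  const [, quoteResult] = api.endpoints.createQuote.useMutation({
    fixedCacheKey: transactionId + ':quote',
  })
  const [, transactionResult] = chipperAirtimeApi.endpoints.initiateTransaction.useMutation({
    fixedCacheKey: transactionId + ':initiate',
  })

  // e.g. prefer whichever mutation has produced data so far
  return transactionResult.data !== undefined ? transactionResult : quoteResult
}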

I'm happy to give a go at the fix if you think we should proceed and don't mind!

markerikson commented 1 month ago

Yeah, tell you what, go ahead and file a PR.

I expect that the actual fix is just going to be something like endpoint.name + fixedCacheKey, maybe in a couple places.
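
Roughly this idea (a hypothetical sketch, not the actual internal code):

function getMutationCacheKey(
  endpointName: string,
  options: { fixedCacheKey?: string; requestId: string }
) {
  // prefix the fixed key with the endpoint name so different endpoints can't collide
  return options.fixedCacheKey
    ? `${endpointName}(${options.fixedCacheKey})`
    : options.requestId
}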

phryneas commented 1 month ago

Probably only here: https://github.com/reduxjs/redux-toolkit/blob/e46eb99f135dde5f047b8dc09113c42f20a54353/packages/toolkit/src/query/core/buildMiddleware/cacheLifecycle.ts#L254

I can think of one edge-casey scenario where someone would actually want that:

endpoints: (builder) => ({
  register: builder.mutation({ /* ... */ }),
  login: builder.mutation({ /* ... */ }),
})

// both hooks pass the same fixedCacheKey
function RegisterForm() {
  const [register] = useRegisterMutation({ fixedCacheKey: "user" })
}

function Profile() {
  const [, { data }] = useLoginMutation({ fixedCacheKey: "user" })
}

If the user registers and is directly logged in (without also hitting the login mutation), that will also fill the data for Profile (assuming the two responses have compatible shapes).

If we ever intended that to be possible... I honestly can't remember.

rexfordessilfie commented 1 month ago

If the user registers and is directly logged in (without also hitting the login mutation), that will also fill the data for Profile (assuming the two responses have compatible shapes).

If we ever intended that to be possible... I honestly can't remember.

Hmm, thanks @phryneas. This is a motivating case for sharing the same cache key across mutations. Leaving the behavior as-is provides flexibility for people who want to do this, even if it was never intended!

Perhaps we can get away with just updating the docs to call out how fixedCacheKey behaves when shared across mutations, so users can opt out of the collision behavior intentionally?

The reverse, opting into the colliding cache, sounds like it may be more tricky/risky to achieve?

markerikson commented 1 month ago

My own take is that I don't like that pattern :)

My first reaction is that using a mutation to read data is a bad approach in the first place and you ought to be relying on a query for that. (If anything I think we ought to do more to make it easier to populate a query cache entry from a mutation result, or something along those lines.)
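
Something in that direction is already possible by hand today with upsertQueryData inside the mutation's onQueryStarted. A rough sketch (getQuote and Quote here are illustrative and would have to exist in the api):

createQuote: builder.mutation<Quote, string>({
  query: (transactionId) => ({
    url: '/quotes',
    method: 'POST',
    body: { transactionId },
  }),
  // copy the mutation result into a query cache entry so other screens can read it as a query
  async onQueryStarted(transactionId, { dispatch, queryFulfilled }) {
    const { data } = await queryFulfilled
    dispatch(api.util.upsertQueryData('getQuote', transactionId, data))
  },
}),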

rexfordessilfie commented 1 month ago

My own take is that I don't like that pattern :)

My first reaction is that using a mutation to read data is a bad approach in the first place and you ought to be relying on a query for that. (If anything I think we ought to do more to make it easier to populate a query cache entry from a mutation result, or something along those lines.)

Yeah, I get you. My specific use-case is triggering the mutation on one screen, and showing the result on the next screen. Here there's no query cache to update.

Before discovering fixedCacheKey, I copied the mutation result into a slice, keyed by a custom cache key (much like fixedCacheKey), and read from the slice, roughly as sketched below. With that approach I was responsible for managing the lifecycle of the slice cache myself. Maybe there's a recipe here to make this pattern a no-brainer when needed.
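
A simplified sketch of what I had (Quote and createQuote match the earlier example; createSlice and PayloadAction come from @reduxjs/toolkit):

const quotesSlice = createSlice({
  name: 'quotes',
  initialState: {} as Record<string, Quote>,
  reducers: {
    // the app has to manage the cache lifecycle itself, e.g. clearing entries when done
    quoteCleared(state, action: PayloadAction<string>) {
      delete state[action.payload]
    },
  },
  extraReducers: (builder) => {
    // copy each fulfilled mutation result into the slice, keyed by the original argument
    builder.addMatcher(api.endpoints.createQuote.matchFulfilled, (state, action) => {
      state[String(action.meta.arg.originalArgs)] = action.payload
    })
  },
})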

In my other use-case, the endpoint uses a POST method and the API client is autogenerated from Postman, so it ends up as a mutation.

Here, I'm considering overriding the API generation to treat it as a query, albeit one that uses a POST method under the hood.

markerikson commented 1 month ago

Yeah, one thing that confuses people is that the HTTP POST method does not automatically imply a "mutation". It's about whether you're trying to fetch and cache data (query) or tell the server to update data (mutation).
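
For example, a POST-based lookup can still be defined as a query endpoint (the endpoint name, URL, and Quote type here are just illustrative):

getQuote: builder.query<Quote, string>({
  // still a query: it fetches and caches data, even though the request uses POST
  query: (transactionId) => ({
    url: '/quotes/preview',
    method: 'POST',
    body: { transactionId },
  }),
}),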