Closed markerikson closed 3 years ago
A few random thoughts for discussion:
* How close is this to being MVP-releasable?
Maybe a week, or two? Might be beneficial to wait for the TS4.1 release
* How big is this bundle-wise?
Right now, the ESM build is 14kb gzipped. Not too happy with that myself, but it does a lot.
* How does the string reducer name matching work? Is it kind of "guess and hope the user put the reducer at the right spot"? (just going off vague bits I've seen from the source/examples)
On a TS level, in RTK you can't use the middleware if the reducer is at the wrong place. I'd rather have it complain somewhere else, but that's the only external assumption you can make with Redux types. No runtime checking atm.
* what support do we have for cancelling / de-duping requests?
* I'm finding the state structure nesting to be confusing. Granted, I'm only tangentially following the work that y'all are doing over here, but this seems like an awful lot of digging to get to the actual data:
Right now, the structure is:
state[mountPoint].queries[endpoint][serializedBaseQueryArgs] => QuerySubState
state[mountPoint].mutations[endpoint][requestId] => MutationSubState
state[mountPoint].provided[typeName][id] => Array<{endpoint,serializedBaseQueryArgs}>
Looking at this: we could get rid of the `[endpoint]` grouping in `queries` and `mutations` without any further collisions. That would honestly make things a lot easier. I had assumed that grouping by endpoint would be beneficial if one were to manually inspect the state, but it's probably not worth it.
And it would even have positive side effects: if two different endpoint definitions would lead to the same query being sent out, those would now consolidate.
So we'd come up with something like this. @msutkowski, your thoughts?
state[mountPoint].queries[serializedBaseQueryArgs] => QuerySubState
state[mountPoint].mutations[requestId] => MutationSubState
state[mountPoint].provided[typeName][id] => Array<serializedBaseQueryArgs>
And we might even move the subscription logic out of `QuerySubState`; an additional slice there would really simplify some logic. So additionally:

state[mountPoint].subscriptions[serializedBaseQueryArgs] => SubscriptionSubState
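Typed out, the flattened shape would look roughly like this (a sketch only - the sub-state type names and field contents are illustrative placeholders, not the library's actual exports):

```typescript
// Sketch of the proposed flattened state shape. The sub-state types
// are illustrative placeholders, not the library's real types.
type Status = "pending" | "fulfilled" | "rejected";
type QuerySubState = { status: Status; data?: unknown };
type MutationSubState = { status: Status };
type SubscriptionSubState = Record<string, { pollingInterval?: number }>;

interface ApiSliceState {
  queries: Record<string, QuerySubState>;              // key: serializedBaseQueryArgs
  mutations: Record<string, MutationSubState>;         // key: requestId
  provided: Record<string, Record<string, string[]>>;  // [typeName][id] => serializedBaseQueryArgs[]
  subscriptions: Record<string, SubscriptionSubState>; // key: serializedBaseQueryArgs
}

// One less level of digging than the endpoint-grouped version:
const state: { postsApi: ApiSliceState } = {
  postsApi: {
    queries: { "getPost(1)": { status: "fulfilled", data: { id: 1 } } },
    mutations: {},
    provided: { Posts: { "1": ["getPost(1)"] } },
    subscriptions: {},
  },
};
const entry = state.postsApi.queries["getPost(1)"];
```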
I realize that having predefined selectors is supposed to encapsulate that, but it seems kinda painful to deal with if you need to poke at it manually.
* For that matter, what's the intended approach if I actually want to do client-side updates of this fetched data?
Since we're explicitly dealing with non-normalized data, that would mean doing what react-query and swr do: have the library user update the cache with the expected next response data. While I doubt that anyone would really want to do that, we could do what those libraries do: allow the user to directly manipulate cache state with some extra actions, probably in an `onTrigger` callback for mutations that could be defined per mutation endpoint or hooks usage.
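As a rough illustration of that idea (everything here is hypothetical - the library provides no such helper or action shape at this point), a user-driven cache patch boils down to writing the expected next response into the cached entry:

```typescript
// Hypothetical sketch of user-driven cache patching, in the style of
// swr/react-query "write the expected next response" updates.
// `patchQueryData` does not exist in the library.
type CacheState = Record<string, { data: unknown }>;

function patchQueryData(
  state: CacheState,
  key: string,
  updater: (current: unknown) => unknown
): CacheState {
  const current = state[key];
  if (!current) return state; // nothing cached for this key
  // Immutable update, as a reducer would do:
  return { ...state, [key]: { data: updater(current.data) } };
}

const before: CacheState = { "getPost(1)": { data: { id: 1, title: "old" } } };
const after = patchQueryData(before, "getPost(1)", (d) => ({
  ...(d as object),
  title: "new",
}));
```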
* How many other Redux API libs have we looked at in addition to Apollo, Urql, and React-Query? I listed several over in [reduxjs/redux-toolkit#603 (comment)](https://github.com/reduxjs/redux-toolkit/issues/603#issuecomment-643374653). A few worth looking at:
So far, I've looked at:
Triggered by you now, I've also taken a look at the ones you listed here:
- … (keyed by `requestKey`). While it does not have subscriptions, at least it seems to recycle old state values after a certain `requestsCapacity` threshold is hit.
- … essentially `createSlice` + `createEntityAdapter` on steroids with a little request tracking on top, but it does not provide any helpers on how to trigger fetching state; it's only concerned with storage.

So from scanning over the docs of these libraries, I haven't really seen any killer features we definitely need in addition to what we have at the moment. But some of them (I think mostly redux-request & redux-resource) might become more relevant with a second fetching-helper library concerned primarily with normalized data (I really think we should keep that separated, it's just too different in use case!)
Lemme give a concrete use case that I'm not sure I'm hearing we handle.
In a prior project, the app let the user edit points and polylines on a globe. Basically looked and felt a lot like Google Earth. `Polyline` entries referred to a list of `Point` entries for the line coordinates. I used Redux-ORM for normalized state.
The app had a treeview on the left to show the `Polyline`s and the `Point`s inside, and displayed them on the globe. Clicking a line on the globe or the tree showed the line's details in a form. If you clicked "Edit", you could edit the line data.
Where it got tricky was that when editing a polyline, we wanted to still show the original values in the tree, but the globe and the form needed to show the current WIP values as you're editing. For example, if I add or change a point, I want to see the line on the globe reflect that immediately.
So, my solution was to copy the entire `Polyline` structure and its children over into a duplicate section of state, edit the copy, and then overwrite the original value when the user saved it.
Mm... now that I describe all that, it's not quite the best use case example for the real problem use case I want to get at, but I might as well leave these paragraphs here since I just wrote them 👍
Anyway, my actual concern is that the user may want to write additional reducer logic to manipulate these items on the client side. I would assume that some updates would also need to go through the server and be saved, but it's possible the client might just want to fetch the data and then work on it more locally.
How well do we handle that, if at all?
My concern is that switching the use case over to just "caching fetched data" precludes actually doing "state management" with that data on the client side, because the state structure gets too complex. Hmm. For that matter, if the data is all being stored in our "cache slice" anyway, it's gonna be hard to either A) transfer a copy of it to another portion of the state tree, or B) manipulate it inside the cache slice.
Size-wise, a quick check of Bundlephobia shows:
So, 15K seems pretty typical here, but it's always nice if we can shrink that.
How well do we handle that, if at all?
With the scope being "non-normalized data, potentially badly structured or unstructured apis", not at all.
We only store the api response and we could let the user manipulate that directly, but it's not a normalized state and it doesn't aim to be one.
Nonetheless, the endpoints allow for the definition of a `postProcess` callback that the user could use to structure the data better. But it will still be stored on a per-endpoint & parameters basis.
I really think that's a separate concern that we'd have to solve with a second lib/entrypoint to keep both as focused on their use cases as possible.
This might just have solved endless scrolling though: if we add a per-endpoint `serializeArgs` callback option, endless/offset-based scrolling could be done as in apollo: https://www.apollographql.com/docs/react/caching/cache-field-behavior/#handling-pagination in combination with `postProcess` (compare `merge` to `postProcess` and `serializeArgs` to `keyArgs`).
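A rough sketch of how those two proposed options could combine for offset-based scrolling (both option names are hypothetical here, modeled on apollo's `keyArgs`/`merge` pair):

```typescript
// Sketch: endless scrolling via a cache key that ignores the page
// argument, plus a merge step appending each new page. `serializeArgs`
// and `postProcess` are proposed option names, not an existing API.
type Page = { page: number; items: string[] };

const serializeArgs = (args: { feed: string; page: number }) =>
  `getFeed(${args.feed})`; // drop `page` so all pages share one cache entry

const postProcess = (existing: string[] | undefined, incoming: Page) =>
  [...(existing ?? []), ...incoming.items]; // append the new page's items

// Simulate two successive page fetches landing in the same entry:
const cache: Record<string, string[]> = {};
for (const page of [
  { page: 1, items: ["a", "b"] },
  { page: 2, items: ["c"] },
]) {
  const key = serializeArgs({ feed: "main", page: page.page });
  cache[key] = postProcess(cache[key], page);
}
```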
On the other hand, I don't love apollo's approach. Gotta think about it :)
What's the definition of MVP releasable 😄 ? Technically, it'd work just fine right now, but I wouldn't push for that. There is a lot of testing that still needs to be done before I'd comfortably recommend it. I can put together a lot of testing, docs, and 'kitchen sink'-ify the existing examples really quickly, but I'd like to focus on the core first.
yeah, we don't have to have "final" docs, but I'd like to see:
- … `axios` with some interceptors, and maybe a GraphQL fetching example if that's a doable thing?)

FYI, this article has some good bullet points to consider:
https://nosleepjavascript.com/redux-data-fetching-antipattern/
Data fetching with Redux is too complicated
The current implementation has the following problems
- it is way too verbose (hooks and libraries might make it better)
- is that the right semantics for my actions?
- is that the right structure and naming for my actions?
- is that the right file structure? Official Redux docs were once upon a time opposed to the "ducks" approach, but now they seem to be strongly recommending it
- should I use switch or some other way of casing?
- should I normalize?
- is the state shape correct?
- are my meta flags correct?
- how would I manage multiple posts being fetched at the same time? (hint: you need to change your state shape and do more stuff)
- should I use immer (which I love BTW) or similar?
- should I use redux-toolkit?
- should I use sagas? (which is even more verbose and complex BTW).
- where are the selectors?
- how do I make my team code consistently across the multiple action/reducers that we will need to write?
- what happens if more than one component triggers the same data fetching action? Do I need to dedup http calls manually?
- do I need to make all my data fetching actions at the root of my app? (in big enough apps this causes a bad decoupling between the data requirements of your components and your components).
- what about if my app is too big and I want to more granularly define what data fetching requirements a medium level component needs to have?
- when does this fetched data expire? how do I clean it up?
I think what we're putting together here is on the right track to answer most of those, although it seems like we've got a somewhat centralized setup that doesn't resolve the "do I need to make all my data fetching actions at the root of my app?" part.
Someone just pointed me at https://amplitude.github.io/redux-query/ , which seems like it's another lib worth comparing against. Looking at it, this does seem awfully similar to what we're doing here.
From the front page:
- Simply Redux: Follow best practices for storing and handling network state in Redux, with support for features like optimistic updates and cancellation. There's no magic here, just middleware, actions, selectors, and reducers.
- Extensible: Built to fit most use cases out-of-the-box, but can easily be extended with custom Redux middleware, UI integrations, and network interfaces.
- Works Great with React: With the provided React hooks and higher-order component in redux-query-react (optional), colocate data dependencies with your components and run requests when components mount or update.
and a Twitter question thread: https://twitter.com/acemarke/status/1326344484813230081
Question for the evening: Lots of discussion about "don't use Redux for server state, use OTHER_LIBS". To me, it seems like those libs are "global stores", just purpose-built for data fetching. Is the real point here about a "global store", or what the API does for you?
Put another way:
- what makes a "server state caching lib" different than a "global store"?
- if Redux Toolkit came with an equivalent data fetching abstraction, how would that change the suggested use cases and benefits of choosing between these tools?
Okay, one more other lib to eyeball:
It's very different from the React-Query API approach we've got going on here; it's really more of a "super-CRUD-async-Entity-adapter" thing, but maybe a little relevant?
More discussion questions:
What about code splitting? Right now this appears to require importing all of the API objects at app setup time in order to create the middleware. That would seem to preclude a code-split feature that is lazy-loaded.
I know that lazy-loading Redux features is a common-ish use case, so we need to come up with a story there.
Can the middleware be updated to accept adding one of these things at runtime?
Also, not sure what the latest state of the API is, but looking at the sandbox at https://codesandbox.io/s/rtk-simple-query-demo-migxd?file=/src/app/services/posts.ts , this still feels too verbose:
```ts
export const postApi = createApi({
  reducerPath: 'postsApi',
  baseQuery: fetchBaseQuery(),
  entityTypes: ['Posts'],
  endpoints: (build) => ({
    getPosts: build.query<PostsResponse, void>({
      query: () => 'posts',
      provides: (result) => result.map(({ id }) => ({ type: 'Posts', id })),
    }),
    addPost: build.mutation<Post, Partial<Post>>({
      query(data) {
        return {
          url: `posts`,
          method: 'POST',
          body: JSON.stringify(data),
        };
      },
      invalidates: [{ type: 'Posts' }],
    }),
    getPost: build.query<Post, number>({
      query: (id) => `posts/${id}`,
      provides: (result, id) => [{ type: 'Posts', id }],
    }),
    updatePost: build.mutation<Post, Partial<Post>>({
      query(data) {
        const { id, ...post } = data;
        return {
          url: `posts/${id}`,
          method: 'PUT',
          body: JSON.stringify(post),
        };
      },
      invalidates: (result, { id }) => [{ type: 'Posts', id }],
    }),
    deletePost: build.mutation<{ success: boolean; id: number }, number>({
      query(id) {
        return {
          url: `posts/${id}`,
          method: 'DELETE',
        };
      },
      invalidates: (_, id) => [{ type: 'Posts', id }],
    }),
  }),
});
```
* I shouldn't have to call a `JSON.stringify()` anywhere
* We ought to default to assuming we invalidate whatever data type is here, and have some kind of automatic support for invalidating an item by an ID somehow
* Can we also default-assume that an `id` field exists and auto-`provide` stuff, and handle things much like `createEntityAdapter` does where you optionally provide a callback to extract the actual ID field if necessary?
* And really, if we're going to have REST-ish behavior supported out of the box, we almost might as well just provide auto-support for the standard methods and URL types. Like, I say my base URL is `'items'`, we support `POST /items/${id}` or something.
* It'd be nice if we could make the HTTP method stuff look more function-call-y, instead of `method: 'PUT'`.

More discussion questions:
What about code splitting? Right now this appears to require importing all of the API objects at app setup time in order to create the middleware. That would seem to preclude a code-split feature that is lazy-loaded.
Reading "all of the API objects", I think this is a misconception (which also explains some of your other assumptions): In general, an application should have one api object. Having multiple api objects would mean, really, querying different APIs, like one REST api and one GraphQL api.
But if you are talking with one monolith/microservice gateway that puts out 20 different types of entities that might somehow overlap, that should generally be handled by one api object. Ideally, these central api configurations can be generated from a swagger file or something, this is still on @msutkowski's list :)
A mutation to `/post/12` might also invalidate queries that were made to `/users/23`. A mutation to `/comment/55` might also invalidate queries to `/post/12`, and so on. It really makes no sense splitting these up.
It would really be more interesting to try & "lazy-load" additional endpoints into one already-created api definition. Could be worth an experiment. It's just the question of how to type that.
Our examples are just using multiple apis to keep the examples as self-contained as possible right now.
This also concerns code splitting: we can (and probably will) come up with a way of post-loading apis into the store - but that would be an edge case. This is more in line with apollo, which requires you to configure all your api's caching behaviour in one central place, than with react-query, which makes you litter your logic into just about any component.
I know that lazy-loading Redux features is a common-ish use case, so we need to come up with a story there.
Can the middleware be updated to accept adding one of these things at runtime?
Also, not sure what the latest state of the API is, but looking at the sandbox at codesandbox.io/s/rtk-simple-query-demo-migxd?file=/src/app/services/posts.ts , this still feels too verbose:
* I shouldn't have to call a `JSON.stringify()` anywhere
Shouldn't be necessary any more, but I think we forgot to adjust some typing somewhere, I encountered this yesterday as well.
* We ought to default to assuming we invalidate whatever data type is here, and have some kind of automatic support for invalidating an item by an ID somehow
This goes with what I said above: assume this API returns 15 different data types.
* Can we also default-assume that an `id` field exists and auto-`provide` stuff, and handle things much like `createEntityAdapter` does where you optionally provide a callback to extract the actual ID field if necessary?
You can also decide to skip `provides` and `invalidates` altogether (which would be react-query without manually invalidating stuff in the cache), just provide & invalidate plain string types (the default behaviour of urql, which seems "good enough" for most use cases) - or go all the way and provide & invalidate entities with ids.
Really, doing this "by default" doesn't work though. We don't know the type of data that is coming back from that endpoint, we don't know if it's maybe multiple types, and we don't know if it's nested.
* And really, if we're going to have REST-ish behavior supported out of the box, we almost might as well just provide auto-support for the standard methods and URL types. Like, I say my base URL is `'items'`, we support `POST /items/${id}` or something.
That could be implemented in the `fetchBaseQuery`, but I'd honestly rather not reimplement a second axios as part of this. Right now, this is just the `fetch` API with some minor convenience tweaks. Doing any more than that would take a significant increase in bundle size. Also, it's kinda moot when people decide to change out the baseQuery for axios or something.

* oh no, we've just reinvented the AngularJS resource class. I'm not even joking. But maybe we should look at that too?
* it'd be nice if we could make the HTTP method stuff look more function-call-y? instead of `method: 'PUT'`.
That would remove the "transport type agnostic" part. Right now, whatever is returned from that `query` there will be the argument into `baseQuery` - so it has to return an object.
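A self-contained sketch of that contract (the stand-in `baseQuery` here just records its input; the real one would perform the fetch):

```typescript
// Sketch: whatever `query` returns is handed to `baseQuery` verbatim,
// which is what keeps endpoint definitions transport-agnostic.
// This baseQuery is a stand-in that records what it was called with.
type FetchArgs = { url: string; method?: string; body?: unknown };

const query = (id: number): FetchArgs => ({
  url: `posts/${id}`,
  method: "PUT",
  body: { id },
});

const baseQuery = (args: FetchArgs) => `${args.method ?? "GET"} /${args.url}`;

const result = baseQuery(query(5));
```

Swap in an axios-based `baseQuery` and the endpoint definitions stay untouched; only the meaning of the returned object changes.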
When I said "API object", I meant one of those "service" files like the `posts` definition I pasted above.
Say I have `postsService` and `usersService`, and I want to lazy-load `commentsService` when the user interacts with the page. Looks like that's not feasible atm, because we've gotta do this to set up the store:
```ts
export const createStore = (options?: ConfigureStoreOptions['preloadedState'] | undefined) =>
  configureStore({
    reducer: {
      [counterApi.reducerPath]: counterApi.reducer,
      [postApi.reducerPath]: postApi.reducer,
      [timeApi.reducerPath]: timeApi.reducer,
    },
    middleware: (getDefaultMiddleware) =>
      getDefaultMiddleware().concat(counterApi.middleware, postApi.middleware, timeApi.middleware),
    ...options,
  });
```
the slice reducers could always be injected, but the need to add those middleware ahead of time seems like it makes code-splitting impossible.
Yeah, that was my point: that should not be something common.
`postsService`, `userService`, and `commentsService` should in 95% of cases really be one api object, as that data will probably be cross-referencing. So one api object should be one full domain of concerns.
So lazy-loading (& hot-reloading) service definitions might actually be more important than lazy-loading additional api objects.
But yeah, the other case of lazy-loading completely different apis is on the list :)
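For the runtime-middleware question specifically, one conceivable approach (a sketch only, not anything the lib provides; the Redux store-API parameter is elided for brevity) is a single meta-middleware that dispatches through a mutable registry, so a code-split feature can register its middleware after store creation:

```typescript
// Sketch: a meta-middleware holding a mutable registry, so api
// middleware could be injected at runtime. Simplified signature -
// real Redux middleware also receives a store API argument.
type Next = (action: { type: string }) => unknown;
type Middleware = (next: Next) => Next;

const registry: Middleware[] = [];

// Installed once at setup; re-reads the registry on every dispatch.
const dynamicMiddleware = (next: Next): Next => (action) =>
  registry.reduceRight<Next>((chain, mw) => mw(chain), next)(action);

const addMiddleware = (mw: Middleware) => {
  registry.push(mw);
};

// Later, a lazy-loaded feature registers its middleware:
const seen: string[] = [];
addMiddleware((next) => (action) => {
  seen.push(action.type);
  return next(action);
});

const dispatch = dynamicMiddleware((a) => a); // base dispatch stub
dispatch({ type: "posts/fetch" });
```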
Looking at this latest example code:
```js
import { createApi, fetchBaseQuery } from "@rtk-incubator/rtk-query/dist";

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: "https://rickandmortyapi.com/api/" }),
  entityTypes: ["Episode", "Character", "Location"],
  endpoints: (build) => ({
    getEpisodes: build.query({
      query: () => "episode",
      provides: (result = []) => [
        ...result.results.map(({ id }) => ({ type: "Episode", id })),
        { type: "Episode", id: "LIST" }
      ]
    }),
    getEpisode: build.query({
      query: (id) => `episode/${id}`,
      provides: (_, id) => [{ type: "Episode", id }]
    }),
    getCharacters: build.query({
      query: () => `character`,
      provides: (result) => [
        ...result.results.map(({ id }) => ({ type: "Character", id })),
        { type: "Character", id: "LIST" }
      ]
    }),
    getCharacter: build.query({
      query: (id) => `character/${id}`,
      provides: (_, id) => [{ type: "Character", id }]
    }),
    getLocation: build.query({
      query: (id) => `location/${id}`,
      provides: (_, id) => [{ type: "Location", id }]
    })
  })
});
```
This is definitely looking a lot better, but I don't like the `provides` stuff:
- … the `provides` behavior. In particular, why not do it the way `createEntityAdapter` does? Assume that `item.id` is the ID field by default, and pass an `idSelector` into the initial setup if it's some other field. Extract the ID field with that selector as necessary here, and then generate a default provides thing.

Or maybe at least have something like a `providesList()` or `providesSingle()` helper?
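Something along these lines is what I mean - `providesList()` and `providesSingle()` don't exist in the library, this is just a sketch of the suggested helpers, with a `createEntityAdapter`-style `idSelector` defaulting to `item.id`:

```typescript
// Sketch of the suggested helpers. Neither exists in the library;
// `idSelector` defaults to `item.id` like createEntityAdapter does.
type Tag = { type: string; id: string | number };

const providesList =
  <T>(type: string, idSelector: (item: T) => string | number = (i: any) => i.id) =>
  (result: T[]): Tag[] => [
    ...result.map((item) => ({ type, id: idSelector(item) })),
    { type, id: "LIST" }, // cover "the whole list" invalidations too
  ];

const providesSingle =
  (type: string) =>
  (_result: unknown, id: string | number): Tag[] =>
    [{ type, id }];

const episodes = [{ id: 1 }, { id: 2 }];
const tags = providesList("Episode")(episodes);
const single = providesSingle("Episode")(undefined, 5);
```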
yipes. this is really sounding like AngularJS Resources again...
https://docs.angularjs.org/api/ngResource/service/$resource
but seriously, being able to drop or generate the `provides` stuff would shorten that considerably
Poked at the "Rick and Morty" demo for a bit:
https://codesandbox.io/s/ricky-and-morty-rtk-query-conversion-5gnel?file=/src/services/api.js
State structure is definitely looking better. A couple questions:
```js
{
  subscriptions: {
    getEpisode: {
      "episodes/5": uuid
    }
  }
}
```
Not necessarily saying that is better, just asking.
- … `resultUnsubscribed` instead of `unsubscribeResult` or something.
- … `pending`/`fulfilled` actions when you start fetching much data.

I'm sure @phryneas will give more clarity regarding some of these questions tomorrow, but I can address a few.
what is that first argument that we keep ignoring?
The first argument is the `result`, and the 2nd is the `arg` given to the `query` fn. The reason I skip it is that we know the `arg` was most likely in the cache as such, so we ensure we're invalidating the right thing. In the case where the API returned a different `id` in the result, we could have unexpected behavior.
Regarding the verbosity of `provides`, I forked that CSB and updated it to be an exact equivalent of react-query: https://codesandbox.io/s/ricky-and-morty-rtk-query-conversion-without-provides-wqi5t?file=/src/services/api.js. It behaves 100% the same regarding using cached results and now looks like this:
```js
export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: "https://rickandmortyapi.com/api/" }),
  endpoints: (build) => ({
    getEpisodes: build.query({
      query: () => "episode"
    }),
    getEpisode: build.query({
      query: (id) => `episode/${id}`
    }),
    getCharacters: build.query({
      query: () => `character`
    }),
    getCharacter: build.query({
      query: (id) => `character/${id}`
    }),
    getLocation: build.query({
      query: (id) => `location/${id}`
    })
  })
});
```
Regarding the usability of provides... it's grown on me after some usage. Compare it to having to do something like this wherever you perform a mutation.
```js
const [mutate, mutationState] = useMutation(patchTodo, {
  onSuccess: (data) => {
    // Update `todos` and the individual todo queries when this mutation succeeds
    cache.invalidateQueries("todos");
  },
});
```
Are the quotation marks in these cache keys intentional, or accidental?
They're intentional - we just stringify the args.
```ts
function defaultSerializeQueryArgs(args: any, endpoint: string) {
  return `${endpoint}/${JSON.stringify(args)}`;
}
```
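Which also shows exactly where the quotes come from: `JSON.stringify` keeps them for string args and drops them for numbers.

```typescript
// Same serializer as quoted above; the quotes in the cache key come
// straight from JSON.stringify on string arguments.
function defaultSerializeQueryArgs(args: any, endpoint: string) {
  return `${endpoint}/${JSON.stringify(args)}`;
}

const stringKey = defaultSerializeQueryArgs("5", "getEpisode"); // arg is the string "5"
const numberKey = defaultSerializeQueryArgs(5, "getEpisode");   // arg is the number 5
```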
I'm not saying get rid of `provides` completely - I'm just saying we should have some defaults for how it gets defined, either by assuming something about the return structure, or having utils for the cases of "provide a list of items by ID" and such. Like, this line: `...result.results.map(({ id }) => ({ type: "Episode", id })),` should be automatic, and we ought to be able to leave out the `provides` option or just do `provides: listById()` unless you really need to customize that behavior.
Just wanted to drop this in... I didn't make any code improvements at all and kept the same behavior... but this is looking pretttttty good. https://github.com/msutkowski/rtk-query-rq-custom-hooks-conversion/pull/1/files.
Just to toss a thought out there:
how settled are we on the name `rtk-query` at this point?
I think it's reasonable, but I do kinda worry people will look at that and go "oh, you're ripping off React-Query".
which in a way we are :) but not on a strict 1:1 basis.
any other good name ideas we should consider?
Well, it's about queries and mutations, so "query" is kinda close-ish. But.. what about "qumut"? Like queries & mutations?
doesn't exactly roll off the tongue :)
Say it three times and you'll never stop saying it because it is so fun :p
Otherwise: `rtk-query-cache`? Probably selling it too low. `rtk-remote`? `rtk-api`?
"rtk-query" is certainly short and easy to remember. I just don't want folks to turn this into a "YOU RIPPED OFF REACT-QUERY" thing.
Hm. Pretty easy to counter. "react-query is more similar to swr than rtk-query is to react-query". :shrug:
heh, sure. let's stick with `rtk-query` until someone yells at us loudly enough to make us seriously reconsider.
besides, in theory we want to merge this into RTK itself, although I was talking with @msutkowski last night about the pros and cons of doing that vs keeping it as a separate lib long-term.
Yeah. I think RTK would become too big from the perspective of someone just looking at bundlephobia if we were to directly include it. Size would probably double.
What I could imagine would be to split RTK up in general, like other tools started doing it:
On the other hand, that is what tree-shaking does. So no idea if it would be useful.
I'm not keen on the idea of splitting up RTK. Part of the point of RTK is that it is the "all-in-one package" - add it, get all the pieces you need.
And yes, RTK should tree-shake correctly, and Bundlephobia is smart enough to show that:
https://bundlephobia.com/result?p=@reduxjs/toolkit@1.4.0
Mostly, anyway - I'm not convinced that all those exports each add 10.3K (like, `configureStore`? really?). Maybe something we ought to double-check.
But Bundlephobia does at least say up front that it's tree-shakeable, which is good.
As for size, this is what `size-limit --why` gives me. So we're at 17.3kb including RTK, redux, and immer.
Oh, wow. Yeah, I was thinking that 17K was the size of RTK-Query in addition to RTK.
In that case I'd say we're doing pretty well size-wise :)
Would still be nice to poke into this a bit further, but we should be okay for now.
@markerikson I used a bitwise operator in `fetchBaseQuery`, if that counts for anything.
Oh, wow. Yeah, I was thinking that 17K was the size of RTK-Query in addition to RTK.
I thought so, too :D
"rtk-query" is certainly short and easy to remember. I just don't want folks to turn this into a "YOU RIPPED OFF REACT-QUERY" thing.
May I suggest:
- "rtk-request"
- "rtk-fetch"
- "rtk-data"
I'm not saying get rid of `provides` completely - I'm just saying we should have some defaults for how it gets defined, either by assuming something about the return structure, or having utils for the cases of "provide a list of items by ID" and such. Like, this line: `...result.results.map(({ id }) => ({ type: "Episode", id })),` should be automatic, and we ought to be able to leave out the `provides` option or just do `provides: listById()` unless you really need to customize that behavior.
I (and many of my colleagues) have experience with badly written APIs that return entities without IDs, so it's worth having a default that assumes `item.id` (for simple cases) but also a way to add custom configuration (for cases where ids are not there). I had to fight with Apollo Client, which didn't provide anything like this (as far as I could understand), and it would be a beneficial bonus to add to a comparison table.
May I suggest:
- "rtk-request"
- "rtk-fetch"
- "rtk-data"
Hm. I'd dismiss `rtk-fetch`, as that would imply that this relies on fetch - which it doesn't necessarily do. `rtk-data` seems a little off-mark as well, as that could just mean anything. I mean, redux itself is doing "data stuff" :wink:.
As for `rtk-request`, I could see that working in general. But `rtk-query` hints at the "queries & mutations" disambiguation, so I like that a little more, even when there are multiple `-query` libraries out there (there is also redux-query). I mean, realistically at this point, all names are already taken.
I'm not saying get rid of `provides` completely - I'm just saying we should have some defaults for how it gets defined, either by assuming something about the return structure, or having utils for the cases of "provide a list of items by ID" and such. Like, this line: `...result.results.map(({ id }) => ({ type: "Episode", id })),` should be automatic, and we ought to be able to leave out the `provides` option or just do `provides: listById()` unless you really need to customize that behavior.
I (and many of my colleagues) have experience with badly written APIs that return entities without IDs, worth having a default that assumes item.id (for simple cases) but also a way to add custom configuration (for cases where ids are not there). I had to fight with Apollo Client, that didn't provide anything like this (as far as I could understand) and would be a beneficial bonus to add to a comparison table.
I'm still pretty opposed to that general idea. I mean, take a look at these apis:
```
// request:
GET /user/5/name
// response:
Alice
// provided:
[{ type: "User", id: 5 }]

// request:
GET /users
// response:
[ { id: 5, name: "Alice"}, { id: 7, name: "Bob"} ]
// provided:
[{ type: "User", id: 5}, { type: "User", id: 7 }]

// request:
GET /users
// response:
{ users: [ { id: 5, name: "Alice"}, { id: 7, name: "Bob"}] }
// provided:
[{ type: "User", id: 5}, { type: "User", id: 7 }]

// request:
POST /graphql
{ users: { nodes: { nodeId, name } } }
// response:
{ users: { nodes: [ { __typename: "User", nodeId: 5, name: "Alice"}, { __typename: "User", nodeId: 7, name: "Bob"}] } }
// provided:
[{ type: "User", id: 5}, { type: "User", id: 7 }]

// request:
GET /post/3
// response:
{ id: 3, title: "This is a blog post", comments: [ { id: 4, value: "interesting!", user: { id: 5, name: "Alice" } } ] }
// provided:
[{ type: "Post", id: 3}, { type: "Comment", id: 4}, { type: "User", id: 5 }]
```
and these are all very sane APIs. Reality will find different ways.
We would either have to provide 10 different helpers to deal with that, or 3-4 helpers which are highly configurable. Documenting all that (and for the user to read all that documentation) and having people "tune" one of those helpers to their needs would be a lot of work, with very little gain - "tuning" one such helper to one's own specific needs is probably the same amount of code as writing such a helper oneself.
I see. Maybe I'm missing something: how would one handle the case when no ID is sent by the API? (Of course I could pick a different unique property of the response as an ID, but let's say there isn't any.)
If you are requesting something by id but do not get one in the response, you can just use the originally requested arguments for that. Apart from that, you can also just leave the id out. In that case, you would just have an entity type that has no ids. Every mutation modifying/creating/deleting such an element would then invalidate & re-fetch all currently used queries providing something with said entity type. You can also mix & match "no ids" with "ids" in one entity type, or just "think of" an id that encompasses all entities for a certain endpoint, like `'LIST'`.
Just to be 100% clear here: all this is just used for invalidation & automatic refetching. If you don't care about that feature, you wouldn't have to care about entityTypes, ids, provides & invalidates at all.
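The matching behavior described above boils down to something like this (an illustrative sketch of the rule, not the library's internals):

```typescript
// Sketch of tag matching for invalidation: an invalidated tag with no
// id hits every query providing that type; with an id, only exact
// matches. Illustrative only - not the library's actual internals.
type Tag = { type: string; id?: string | number };

const matches = (provided: Tag, invalidated: Tag) =>
  provided.type === invalidated.type &&
  (invalidated.id === undefined || provided.id === invalidated.id);

// Tags a list query might have provided:
const providedByQuery: Tag[] = [
  { type: "Posts", id: 1 },
  { type: "Posts", id: "LIST" },
];

const refetchAll = providedByQuery.some((t) => matches(t, { type: "Posts" }));        // type-only invalidation
const refetchOne = providedByQuery.some((t) => matches(t, { type: "Posts", id: 1 })); // exact id match
const untouched  = providedByQuery.some((t) => matches(t, { type: "Users", id: 1 })); // different type
```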
Great, gotcha!
I'm not sure if this is the right place, but I wanted to ask about cache updates after mutations.
The examples on the RTK Query site (for example this example) refetch posts after any mutation.
Would there be a way to avoid that? In my use case, a perfect API would provide an action after a mutation that would update the store - I don't see the point of making a query if all of the information is known; this is the case for CRUD actions.
I understand there might be more complex cases, but having the possibility to specify actions and avoid the refetch would be helpful.
For context, I am looking at the possibility of migrating from apollo, where I do a lot of manual cache updates after mutations because of the nature of apollo.
You can do that with https://rtk-query-docs.netlify.app/concepts/optimistic-updates - but you'll have to do it by hand. We don't have (and won't add) an automatism for that, though. I've been using apollo client daily for 3 years now, and I've found so many edge cases where that fails, even with a 100% normalized API like graphql, that it is pretty much impossible to support that use case on a library level for general REST apis - we don't really want to go down that rabbit hole.
@phryneas Thanks - I totally understand that you don't want to support that option. I think what I actually need is not optimistic updates (I want to update the UI after the mutation response comes back from the server, not before - and that's how I understand optimistic updates). However, I can now see that there's an `onSuccess` handler that has a `mutationApi` object with `dispatch`, so that should be enough.
So, based on your experience working with apollo, would you recommend using refetch instead of manually updating the cache?
@apieceofbart yes, essentially you would use the tools from optimistic updates (the request lifecycle handlers - `onSuccess` in your case - and `api.util.updateQueryResult`), just for another mechanism.
And yes, personally, I would just refetch invalidated data instead of trying to update it.
Most api requests are pretty cheap, and it saves a lot of development time and edge-case hunting to take this slightly more "naive" approach.
Urql is going this way by default, and it felt a lot more developer-friendly to me.
I'd like to announce a new lib that makes RTK Query work (with auto-generated hooks!) in Angular with NgRx. It is a 100% functional version, as indicated in the RTK Query guide.
Small guide, example & source code are available here: https://github.com/SaulMoro/ngrx-rtk-query
@SaulMoro that's awesome! Really neat to see people building on this idea!
I'm closing everything here now. If there's any need for further discussion, that should take place in a new issue over at https://github.com/reduxjs/redux-toolkit/