Closed k-ode closed 3 years ago
It would probably be better if we cached snapshots instead of entire queries. If so, we also need to store every identifier found in the query to be cached; otherwise models might be garbage collected prematurely.
{ groupBy: 'company' } => [querySnapshot, queryIdentifiers]
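A minimal sketch of that shape, assuming hypothetical names (`SnapshotCache`, `isReferenced` are illustrations, not the library's API): each cache entry keeps the snapshot together with the set of identifiers it references, so a GC pass can check whether a model is still reachable from any cached snapshot before collecting it.

```typescript
type Snapshot = unknown;

// Hypothetical sketch: cache entries keyed by serialized variables, each
// holding both the snapshot and the identifiers it references.
// Note: JSON.stringify is key-order sensitive; a real implementation would
// normalize the variables first.
class SnapshotCache {
  private entries = new Map<string, { snapshot: Snapshot; identifiers: Set<string> }>();

  set(variables: object, snapshot: Snapshot, identifiers: Iterable<string>): void {
    this.entries.set(JSON.stringify(variables), { snapshot, identifiers: new Set(identifiers) });
  }

  get(variables: object): Snapshot | undefined {
    return this.entries.get(JSON.stringify(variables))?.snapshot;
  }

  // A model is only safe to garbage collect if no cached snapshot references it.
  isReferenced(identifier: string): boolean {
    for (const { identifiers } of this.entries.values()) {
      if (identifiers.has(identifier)) return true;
    }
    return false;
  }
}
```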
Today, caching works well for simple queries. But it lacks flexibility when dealing with more advanced use cases.
Imagine a query that fetches incremental data for a table with infinite scroll. If we refetch this query with new variables, like sorting the data by a different column, how do we cache the previous value? (Currently we don't).
One way to handle this is to cache responses by the request variables passed to the query. Like so:
{ groupBy: 'company' } => cachedResponse1
{ groupBy: 'age' } => cachedResponse2
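A sketch of what keying by variables could look like, under the assumption that variables are plain JSON-serializable objects (`cacheKey`, `responseCache` are illustrative names, not part of the library): the keys are sorted before serialization so that `{ a: 1, b: 2 }` and `{ b: 2, a: 1 }` hit the same entry.

```typescript
// Hypothetical sketch: derive a stable cache key from query variables by
// sorting the keys before serializing.
function cacheKey(variables: Record<string, unknown>): string {
  return JSON.stringify(
    Object.keys(variables).sort().map((k) => [k, variables[k]])
  );
}

const responseCache = new Map<string, unknown>();

function cacheResponse(variables: Record<string, unknown>, response: unknown): void {
  responseCache.set(cacheKey(variables), response);
}

function lookup(variables: Record<string, unknown>): unknown {
  return responseCache.get(cacheKey(variables));
}
```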
This would already work today if we cache queries any time the user calls `run`.
But then we're in trouble if we try to use an infinite list. Reading a cached infinite list should ideally return all items that we've fetched while scrolling. But right now we get this:
{ groupBy: 'company', offset: 0, first: 50 } => cachedResponse1
{ groupBy: 'company', offset: 50, first: 50 } => cachedResponse2
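One way to sketch that special handling (an illustration only; `PAGINATION_ARGS`, `listKey`, and `cachePage` are hypothetical names, and which arguments count as pagination would need to be configurable): pagination arguments are stripped out of the cache key, so every page of one logical list maps to the same entry, and new pages are appended to it rather than creating new entries, which is roughly what `queryMore` would do.

```typescript
// Hypothetical sketch: exclude pagination arguments from the cache key so
// all pages of one logical list share a single, growing cache entry.
const PAGINATION_ARGS = new Set(['offset', 'first']);

function listKey(variables: Record<string, unknown>): string {
  const identity = Object.keys(variables)
    .filter((k) => !PAGINATION_ARGS.has(k))
    .sort()
    .map((k) => [k, variables[k]]);
  return JSON.stringify(identity);
}

const listCache = new Map<string, unknown[]>();

// Append a newly fetched page to the existing entry instead of creating
// a new one per { offset, first } combination.
function cachePage(variables: Record<string, unknown>, items: unknown[]): void {
  const key = listKey(variables);
  const existing = listCache.get(key) ?? [];
  listCache.set(key, existing.concat(items));
}
```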
So we need some way of handling infinite list arguments differently from our other variables. Also, `queryMore` should not create a new cache entry, just amend the existing one with new items.

I propose the following changes:
- When `run` is called with new variables, we clone the query and put the old one in our queryCache, marked as stale. This query is then deleted in the usual way after a certain timeout.
- Add an `invalidate` mechanism that will either run refetch on a query if it's active, or remove it if it's stale.

I'll post some examples when I work out the shape of the api.
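A rough sketch of how those two pieces could fit together, assuming hypothetical names throughout (`QueryCache`, `markStale`, `invalidate` here are illustrations of the proposal, not an existing API): stale entries are kept around until a timeout or an `invalidate`, and invalidation either refetches an active query or evicts a stale one.

```typescript
interface CachedQuery {
  variables: Record<string, unknown>;
  stale: boolean;
  refetch: () => void;
}

// Hypothetical sketch: when run() gets new variables, the previous query is
// kept in the cache marked stale; invalidate() refetches active queries and
// evicts stale ones.
class QueryCache {
  private queries = new Map<string, CachedQuery>();

  add(key: string, query: CachedQuery): void {
    this.queries.set(key, query);
  }

  // Called when run() is invoked with new variables: the old entry stays
  // cached but is marked stale (eligible for timeout-based deletion).
  markStale(key: string): void {
    const q = this.queries.get(key);
    if (q) q.stale = true;
  }

  // Either refetch an active query, or remove it entirely if it is stale.
  invalidate(key: string): void {
    const q = this.queries.get(key);
    if (!q) return;
    if (q.stale) {
      this.queries.delete(key);
    } else {
      q.refetch();
    }
  }

  has(key: string): boolean {
    return this.queries.has(key);
  }
}
```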