Currently, we do not employ any means of caching. (Almost) every time a screen is visited, an HTTP request to the server is made. In many cases this is redundant. However, this feature is relatively complex and will take some time to plan, design and implement.
We have roughly the following (high-level) requirements:
- The main goal is to avoid redundant HTTP requests; the cost of the service/provider invocations themselves is negligible.
- We need to be able to store arbitrary Dart data structures in the store (not only primitive data types).
- We need a way to bypass the cache and force updates. For instance, "pulling down" via `RefreshIndicator`s should always bypass the cache, invalidate and update the respective item.
- This also creates the requirement of being able to mutate/modify already cached values.
- We need some way to specify a retention period (time to live): cached items should not live for all eternity, but be evicted after a reasonable amount of time. Other sensible retention policies are also conceivable. This is more important than one might think at first glance: many users keep their mobile apps running for a very long time (and don't really close them), so without a retention period or another eviction policy they would (almost) never get updates. A rough sketch of these last requirements follows below.
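To make the retention-period and cache-bypass requirements concrete, here is a minimal, illustrative in-memory sketch. All names (`CacheEntry`, `InMemoryCache`) and the five-minute default TTL are placeholders for illustration, not design decisions:

```dart
/// Illustrative sketch only: a cache entry stamped with its creation time,
/// so a retention period (time to live) can be enforced on lookup.
class CacheEntry<T> {
  CacheEntry(this.value) : createdAt = DateTime.now();

  final T value;
  final DateTime createdAt;

  /// An entry is stale once it has outlived the given retention period.
  bool isExpired(Duration ttl) => DateTime.now().difference(createdAt) > ttl;
}

class InMemoryCache<K, V> {
  InMemoryCache({this.ttl = const Duration(minutes: 5)}); // placeholder value

  final Duration ttl;
  final Map<K, CacheEntry<V>> _entries = {};

  /// Returns the cached value, or null if absent, expired or bypassed.
  /// `forceRefresh` models the RefreshIndicator "pull down" bypass.
  V? lookup(K key, {bool forceRefresh = false}) {
    final entry = _entries[key];
    if (forceRefresh || entry == null || entry.isExpired(ttl)) {
      _entries.remove(key); // evict stale or bypassed entries
      return null;
    }
    return entry.value;
  }

  void store(K key, V value) => _entries[key] = CacheEntry(value);
}
```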
Regarding the design, there are some aspects to consider:
- The changes must not disrupt any currently properly functioning workflow. We need to be very careful not to skip any change notifications (`notifyListeners()`), miss updates or the like when introducing caching. Thorough (manual) system tests are necessary!
- It is not decided yet whether the retention policy and its parameters should be configurable only globally, only per instance, or globally and (partly) overridable on a per-instance basis. A further requirements analysis needs to be conducted at the time of implementation.
- At the current point in time, it is not clear whether the caching layer should sit underneath the global state layer (i.e. providers do cache lookups) or underneath the service layer (i.e. service classes do cache lookups). Which is the wiser choice needs to be examined at implementation time.
Concerning the implementation, we basically have three options:
**1. Implement the caching ourselves**
A possible (very naive, rough and incomplete!) idea of the implementation could be:
- introduce an abstract `CachingProvider` or `CachingService` which extends `BaseProvider` or `BaseService`, respectively
- add a single (most probably generic) function `getOrCreate<T, U>(Future<T> Function() request, bool Function() isCached, void Function(T item) update, U Function(T response) valueReturner)`
- instead of directly invoking the service, getting the response and writing to the provider-local variables, invoke `getOrCreate`:
  - `request`: a function that (indirectly) makes the HTTP request; it is most probably `async` and yields an instance of some type `T`.
  - `isCached`: based on the method parameters (which are accessible to the lambda), determines whether an HTTP call is necessary (`false`) or not (`true`).
  - `update`: any post-response work to be done with the response item of type `T` (e.g. logging in the service, writing to local variables in the provider).
  - `valueReturner`: a function that returns the value to be returned from the cache-invoking method. It has its own generic type `U` because it is often required to map HTTP responses to some other types. However, there are no restrictions, i.e. `U` may equal `T` in some cases.
This is quite a simplistic approach and has not been thoroughly thought through yet. It might give an initial idea of the requirements, though.
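As a rough sketch of how `getOrCreate` could look under the assumptions above: `request` is taken as a function so the HTTP call is only made on a cache miss, and a hypothetical `cachedValue` reader is added because the signature above otherwise has no way to return the already cached value. None of this is a final design; error handling, concurrency and eviction are deliberately omitted:

```dart
abstract class CachingService {
  Future<U> getOrCreate<T, U>(
    Future<T> Function() request,   // the (indirect) HTTP call, run lazily
    bool Function() isCached,       // is an HTTP call avoidable?
    U Function() cachedValue,       // hypothetical: reads the cached value
    void Function(T item) update,   // post-response work (logging, fields)
    U Function(T response) valueReturner, { // maps the response to U
    bool forceRefresh = false,      // RefreshIndicator "pull down" bypass
  }) async {
    if (!forceRefresh && isCached()) {
      return cachedValue(); // cache hit: no network traffic at all
    }
    final response = await request();
    update(response);
    return valueReturner(response);
  }
}
```

A provider would then pass its service call as `request` and a check on its local fields as `isCached`, instead of orchestrating those steps by hand.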
**2. Use a third-party library**
For this second option we need to remember that our ultimate goal is to "save" network calls whenever an equivalent call (i.e. same parameters - HTTP method, URL, query parameters, payload etc.) has already been conducted before. Potentially fitting third-party libraries should be evaluated from that viewpoint.
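For that evaluation it helps to pin down what "equivalent" could mean. Below is a hypothetical sketch of such an equivalence key; the class name and fields are illustrative, not a decided design:

```dart
/// Illustrative sketch: two requests are "equivalent" (and thus cacheable)
/// when method, URL, query parameters and payload all match.
class RequestKey {
  RequestKey(this.method, this.uri, [this.payload]);

  final String method;   // e.g. 'GET'
  final Uri uri;          // includes path and query parameters
  final String? payload;  // request body, if any

  @override
  bool operator ==(Object other) =>
      other is RequestKey &&
      other.method == method &&
      other.uri == uri &&
      other.payload == payload;

  @override
  int get hashCode => Object.hash(method, uri, payload);
}

// Two GETs to the same URL with identical query parameters compare equal:
// RequestKey('GET', Uri.parse('https://example.org/items?page=1')) ==
//     RequestKey('GET', Uri.parse('https://example.org/items?page=1'))
```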
Below is a list of possibly fitting libraries which have not been vetted rigorously; they are intended as a starting point for an investigation.
- `lazy_evaluation`: very simple lazy initializer, barely more features than `late` initializers
- `cached_value`: caching with factory lambdas, built-in time-to-live feature
- `memoize`: archetypical memoization, i.e. you cache function calls (with their parameters) directly instead of their return values (see the sketch after this list)
- `lazy_memo`: provides multiple kinds of lazy variables, updates to them, as well as memoized functions

(keywords used to search for suitable libraries: `cache`, `repository`, `memoization`, `lazy`)
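To illustrate what this kind of memoization means in practice, independent of any particular package's API (this is a hand-rolled sketch, not the interface of the `memoize` package):

```dart
/// Wraps a one-argument function so that each distinct argument triggers
/// at most one real invocation; later calls are served from the cache.
R Function(A) memo1<A, R>(R Function(A) fn) {
  final cache = <A, R>{};
  return (A arg) => cache.putIfAbsent(arg, () => fn(arg));
}

void main() {
  var calls = 0;
  final square = memo1<int, int>((n) {
    calls++;
    return n * n;
  });

  print(square(4)); // 16: computed, calls == 1
  print(square(4)); // 16: served from the cache, calls still 1
  print(calls);     // 1
}
```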
**3. Use a hybrid approach**
We could also build our own workflow for accessing the cache, but use a third-party library as the data structure/data management solution. This might be attractive since there is probably no single library that satisfies all of our requirements on its own. A cleanly self-designed caching layer on top of third-party data access might be a very maintainable solution.
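A hypothetical shape of that split: we define the cache contract ourselves and keep the storage behind it replaceable. `ThirdPartyBackedCache` is a placeholder, with a plain map standing in for whatever data structure the chosen library provides:

```dart
/// Our own cache-access contract; storage and eviction mechanics are
/// delegated to an interchangeable backend.
abstract class Cache<K, V> {
  V? lookup(K key);
  void store(K key, V value);
  void invalidate(K key);
}

/// A backend could wrap e.g. a third-party TTL map; here a plain map
/// stands in so the sketch stays self-contained.
class ThirdPartyBackedCache<K, V> implements Cache<K, V> {
  final Map<K, V> _store = {}; // placeholder for the library's structure

  @override
  V? lookup(K key) => _store[key];

  @override
  void store(K key, V value) => _store[key] = value;

  @override
  void invalidate(K key) => _store.remove(key);
}
```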
All in all, we also need to keep in mind the memory footprint, levels of indirection and complexity introduced by our choice of implementation. We should strive for an optimal balance between them.