Problem

If you have two observables for the same request, the client unnecessarily does extra (expensive) work:

- `maybeRebroadcast` will diff, denormalize, and then require parsing again. This is the more substantial cost, since it gets fired on every network response and cache write.
- `fetch(policy.cache)` will need to denormalize and require parsing again. This causes a flash of delay when attaching to an observable that already has data for it.

ex: two observables of the `GetUser` query for the same resource (id: `xyz`)
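To make the duplicated work concrete, here is a minimal, self-contained sketch. `Request`, `ObservableQuery`, and the counters are hypothetical stand-ins for illustration, not the real `package:graphql` classes:

```dart
// Hypothetical stand-ins to illustrate the duplicated work; these are
// not the real package:graphql classes.
class Request {
  final String operation;
  final String variables;
  Request(this.operation, this.variables);

  @override
  bool operator ==(Object other) =>
      other is Request &&
      other.operation == operation &&
      other.variables == variables;

  @override
  int get hashCode => Object.hash(operation, variables);
}

int denormalizeCount = 0;
int parseCount = 0;

Map<String, Object?> denormalize(Request request) {
  denormalizeCount++; // expensive: walks the normalized cache
  return {'id': 'xyz', 'name': 'Ada'};
}

Object parse(Map<String, Object?> json) {
  parseCount++; // expensive: builds typed models from raw JSON
  return json;
}

class ObservableQuery {
  final Request request;
  ObservableQuery(this.request);

  // On every cache write, each observable independently re-reads
  // and re-parses the same data.
  void maybeRebroadcast() => parse(denormalize(request));
}

void main() {
  final request = Request('GetUser', 'id: xyz');
  final observables = [ObservableQuery(request), ObservableQuery(request)];

  // One cache write -> every observable repeats the same work.
  for (final o in observables) {
    o.maybeRebroadcast();
  }

  print('denormalize: $denormalizeCount, parse: $parseCount'); // 2 and 2
}
```

Two observables of the same request pay the diff/denormalize/parse cost twice, even though the inputs and outputs are identical.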
Solution
I went with a solution that doesn't require changing any core flows or data structures, loosely based on how Ferry keeps a map of requests.
The more observables you have for the same `Request`, the bigger the improvement you will see.
- `_getQueryResultByRequest` - instead of introducing a new cache, we iterate through the `queries` array to find whether any existing query matches the `Request`.
- When we `maybeRebroadcast` for a fetch from the cache, we check this first.
- The new `copyWith` on `QueryResult` lets us reuse the bulk of the existing query result (including the parsed data).
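A sketch of the lookup and reuse described above. `QueryResult`, its `copyWith`, and the `queries` collection here are simplified stand-ins; the actual change works against the client's query-manager internals:

```dart
// Simplified stand-ins for the real client internals.
class Request {
  final String operation;
  final String variables;
  Request(this.operation, this.variables);

  @override
  bool operator ==(Object other) =>
      other is Request &&
      other.operation == operation &&
      other.variables == variables;

  @override
  int get hashCode => Object.hash(operation, variables);
}

class QueryResult {
  final Object? parsedData;
  final String source;
  QueryResult({this.parsedData, required this.source});

  // Reuse the bulk of an existing result (including already-parsed data),
  // changing only what differs.
  QueryResult copyWith({Object? parsedData, String? source}) => QueryResult(
        parsedData: parsedData ?? this.parsedData,
        source: source ?? this.source,
      );
}

class ObservableQuery {
  final Request request;
  QueryResult? latestResult;
  ObservableQuery(this.request, [this.latestResult]);
}

/// Iterate the existing queries instead of maintaining a new cache.
QueryResult? getQueryResultByRequest(
    Iterable<ObservableQuery> queries, Request request) {
  for (final query in queries) {
    if (query.request == request && query.latestResult != null) {
      return query.latestResult;
    }
  }
  return null;
}

void main() {
  final request = Request('GetUser', 'id: xyz');
  final existing = ObservableQuery(
      request, QueryResult(parsedData: {'id': 'xyz'}, source: 'network'));

  // On a cache fetch, check for a matching query first and reuse its
  // parsed data instead of denormalizing + parsing again.
  final hit = getQueryResultByRequest([existing], request);
  final result = hit?.copyWith(source: 'cache');
  print(result?.source); // cache
}
```

Linear iteration keeps the change small: no new cache to invalidate, and the cost is proportional to the (typically small) number of live observables.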
Other improvement added: call `maybeRebroadcast` less

- Don't call it on failures (since the cached data didn't change). This requires setting `carryForwardDataOnException: false` in your `Options`; I've found that behavior matches my expectations for failures.
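The failure check can be sketched roughly like this. `shouldRebroadcast` and the stand-in `QueryResult` are illustrative, assuming `carryForwardDataOnException: false` so a failed result carries no new cached data:

```dart
// Illustrative stand-in; the real QueryResult lives in package:graphql.
class QueryResult {
  final Object? exception;
  QueryResult({this.exception});
  bool get hasException => exception != null;
}

bool shouldRebroadcast(QueryResult result) {
  // A failure didn't change the cached data, so there is nothing
  // new to diff, denormalize, or parse.
  return !result.hasException;
}

void main() {
  print(shouldRebroadcast(QueryResult())); // true
  print(shouldRebroadcast(QueryResult(exception: 'network error'))); // false
}
```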
Addendum
I think the broader issue (and the motivation for this) is that `maybeRebroadcast` uses a brute-force approach to notifying of changes and relies on synchronicity (which definitely has its pros, like readability and fewer bugs), which has led to jank/missed frames in our application.
There are small improvements that could be made, like:

- Throttling/debouncing/queuing calls to `maybeRebroadcast` (based on this)
- Memoizing the normalization (since that is 80% of the remaining cost of `maybeRebroadcast`)
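A throttled rebroadcast could look roughly like this sketch. `RebroadcastThrottler` and the 50 ms window are hypothetical; the real integration point would be wherever cache writes currently trigger `maybeRebroadcast`:

```dart
import 'dart:async';

// Coalesce bursts of cache writes into a single rebroadcast per window.
class RebroadcastThrottler {
  final Duration window;
  final void Function() rebroadcast;
  Timer? _pending;

  RebroadcastThrottler(this.window, this.rebroadcast);

  void requestRebroadcast() {
    // Many cache writes within the window trigger only one rebroadcast.
    _pending ??= Timer(window, () {
      _pending = null;
      rebroadcast();
    });
  }
}

Future<void> main() async {
  var calls = 0;
  final throttler =
      RebroadcastThrottler(const Duration(milliseconds: 50), () => calls++);

  // Simulate a burst of cache writes.
  for (var i = 0; i < 10; i++) {
    throttler.requestRebroadcast();
  }

  await Future<void>.delayed(const Duration(milliseconds: 100));
  print(calls); // 1
}
```

The trade-off is latency: subscribers see the new data up to one window later, in exchange for doing the diff/denormalize/parse work once per burst instead of once per write.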
A short-term fix would be making `maybeRebroadcast` (and `normalize`) async, so at least it isn't holding up the main thread in one atomic chunk for the entire duration.
The other option would be allowing the entire GraphQL client (and its cache) to live in an isolate. Spending 80 ms on every call to `maybeRebroadcast` is only unacceptable because it happens on the main thread; it'd be fine on a separate thread.
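As a rough illustration of the isolate direction, `Isolate.run` (Dart 2.19+) can already push a one-off chunk of denormalization work off the main thread. `denormalizeAndDiff` below is a stand-in for the expensive part, not the real client code:

```dart
import 'dart:isolate';

// Stand-in for the ~80 ms of diff/denormalize work; the full idea would be
// to host the whole client + cache in the isolate, not just one call.
Map<String, Object?> denormalizeAndDiff(Map<String, Object?> store) {
  return {'user': store['User:xyz']};
}

Future<void> main() async {
  final store = {
    'User:xyz': {'id': 'xyz', 'name': 'Ada'}
  };

  // Runs on a separate isolate; the main isolate stays free to render frames.
  final result = await Isolate.run(() => denormalizeAndDiff(store));
  print(result['user']);
}
```

The catch with a fully isolated client is that every result crosses an isolate boundary, so the data (or the parsed models) must be sendable between isolates.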