dkempner opened 1 year ago
@dkempner Thanks for posting this issue. I was curious, what are you using to show that network latency diagram? I just cloned your reproduction and was taking a look at this problem.
@jpvajda It's not network latency, it's 300ms of CPU time (mostly from Apollo's cache), since in the template repo all data is local to the Apollo cache.
I used Chrome DevTools' "Performance" tab: hit the "Record" button, click the "Click Me" button in my reproduction app, then stop recording.
I reproduced this with Firefox Developer edition as well.
@alessbell I'm curious if you had any thoughts on what might be occurring here? cc @bignimbus
(sorry to interject)
My guess is that it's interacting with the cache often. #10270 mentions the cache as a bottleneck with `useQuery`, and I suspect this is in the same ballpark.
Yes, it's definitely all about `InMemoryCache`'s performance. We found a few bottlenecks, especially in writes, where `broadcastWatches` iterates over every watcher on the page for every write.
It became enough of a problem that we needed to build a `DocumentCache` replacement which is not normalized and only handles caching at the query + variables level. This allows us to avoid broadcasting so widely when new data is written to the cache.
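A non-normalized query + variables cache like the one described above could be sketched as follows. This is illustrative only: the `DocumentCache` name is borrowed from the comment, but the API shown here is an assumption, not the actual implementation.

```typescript
// Illustrative sketch: a non-normalized cache keyed by query document
// plus stably-serialized variables. Unlike a normalized cache, a write
// touches exactly one entry, so there is no page-wide pass over every
// watcher on each write.
type Listener<T> = (data: T) => void;

function stableKey(query: string, variables: Record<string, unknown> = {}): string {
  // Sort keys so {a: 1, b: 2} and {b: 2, a: 1} produce the same cache key.
  const sorted = Object.keys(variables).sort().map((k) => [k, variables[k]]);
  return query + "|" + JSON.stringify(sorted);
}

class DocumentCache<T = unknown> {
  private entries = new Map<string, T>();
  private listeners = new Map<string, Set<Listener<T>>>();

  read(query: string, variables?: Record<string, unknown>): T | undefined {
    return this.entries.get(stableKey(query, variables));
  }

  write(query: string, variables: Record<string, unknown> | undefined, data: T): void {
    const key = stableKey(query, variables);
    this.entries.set(key, data);
    // Notify only the watchers of this exact query + variables pair.
    this.listeners.get(key)?.forEach((fn) => fn(data));
  }

  watch(query: string, variables: Record<string, unknown> | undefined, fn: Listener<T>): () => void {
    const key = stableKey(query, variables);
    if (!this.listeners.has(key)) this.listeners.set(key, new Set());
    this.listeners.get(key)!.add(fn);
    // Return an unsubscribe function.
    return () => this.listeners.get(key)?.delete(fn);
  }
}
```

The trade-off is that entities duplicated across queries are no longer kept in sync automatically, which is exactly why writes become cheap.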
In my reproduction repo's example, it's not cache writing that causes the pain (there isn't any in this app); it's mostly the initialization of `ObservableQuery` instances.
The reason this is important to me is that Apollo claims it can be a state management library, but if you use it naively, like a React context, it doesn't scale very far. You need to build abstractions on top of it to cache data in other ways.
I've encountered similar scalability issues when using a large number (on the order of thousands) of `useQuery` hooks pointing to the same few query documents and variables.
In this repro, I set up 2000 `useQuery` hooks using the same query, resulting in a full minute's wait on my device (using Chromium 113.0.5672.126) before the React application finishes updating. ~14 seconds are devoted to React updates, while over 30 seconds are spent calling `InMemoryCache.makeCacheKey`. I suspect this is due to the `Trie` used to create cache keys getting stuffed with a massive number of unique argument keys from the watch objects associated with each `useQuery` call.
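The growth described above can be illustrated with a simplified, self-contained sketch (this is not Apollo's actual code): a lookup table keyed by object identity, like the trie behind `makeCacheKey`, grows by one entry per distinct reference, so fresh watch callbacks from every hook inflate it even when the hooks are behaviorally identical.

```typescript
// Illustrative sketch: an identity-keyed memo table, standing in for the
// trie behind makeCacheKey. Each previously-unseen object reference
// produces a brand-new key.
const keyCache = new WeakMap<object, symbol>();

function makeKey(ref: object): symbol {
  let key = keyCache.get(ref);
  if (key === undefined) {
    key = Symbol("cache-key");
    keyCache.set(ref, key);
  }
  return key;
}

// Fresh callback per "hook": every iteration creates a new reference,
// so 2000 behaviorally identical hooks yield 2000 unique keys.
const freshKeys = new Set<symbol>();
for (let i = 0; i < 2000; i++) {
  freshKeys.add(makeKey(() => {}));
}

// A shared, referentially consistent callback yields a single key.
const sharedCallback = () => {};
const sharedKeys = new Set<symbol>();
for (let i = 0; i < 2000; i++) {
  sharedKeys.add(makeKey(sharedCallback));
}
```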
I've set up a branch on my apollo-client fork that seeks to address this specific scenario by making all `QueryInfo` watch callbacks referentially consistent and deduplicating `maybeBroadcastWatch` calls by watch-derived cache key. This greatly reduces the `Trie` size in `InMemoryCache.makeCacheKey` and reduces the number of times `maybeBroadcastWatch` gets called (by up to a factor of the total number of `useQuery` hooks used) when a large number of duplicate `useQuery` hooks are in use.
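The deduplication idea can be sketched in isolation (the class and method names below are hypothetical, not the fork's actual API): pending broadcasts are collected in a set keyed by the watch-derived cache key, so N duplicate watchers cost one broadcast instead of N.

```typescript
// Illustrative sketch of deduplicating broadcasts by cache key.
// Scheduling the same key many times collapses into a single
// broadcast when flush() runs.
class DedupedBroadcaster {
  private pending = new Set<string>();
  broadcasts = 0; // counts how many real broadcasts actually ran

  schedule(cacheKey: string): void {
    this.pending.add(cacheKey); // duplicate keys collapse here
  }

  flush(): void {
    for (const key of this.pending) {
      this.broadcastOnce(key);
    }
    this.pending.clear();
  }

  private broadcastOnce(_key: string): void {
    // In a real cache this would notify the watchers for _key;
    // here we only count invocations.
    this.broadcasts++;
  }
}
```

With 2000 duplicate watchers scheduling the same key, `flush()` performs a single broadcast rather than 2000.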
Hi @dkempner, you mentioned building a `DocumentCache` alternative, which disables document normalization and should thus improve performance there. Is there any chance you could open-source this? I think a non-normalizing in-memory cache would be a good alternative for people running into this issue. Thanks!
Edit: it seems the file is here: https://github.com/dkempner/simple-cache/blob/main/lib/DocumentCache.ts Thanks for making this available!
We traced heavy Apollo use down as the cause of CPU spikes and significant performance impact. The following functions stood out in traces (no source maps, so I don't have the Apollo source yet):
Intended outcome: Using the same `useQuery` throughout many components should have a relatively low performance impact.
Actual outcome: A long task is created when rendering components with many `useQuery`s.
How to reproduce the issue: Reproduction Repo
Versions