Closed: aaronkchsu closed this issue 9 months ago.
We have a similar issue with an essentially identical setup. The response object keeps growing and growing, and despite values being cached, they are always returned from the server.
Having the same issue here.
Same issue here, any updates from the team?
Hi all, we are taking a fresh look at server-side rendering for release 3.8. See https://github.com/apollographql/apollo-client/issues/10231 for more details and please feel free to provide feedback 🙏🏻 As we get into the implementation details we'll have a better sense of how to tackle memory management in this new paradigm. Thanks for your patience!
Awesome thanks for the update and for all the movement happening on the project!!
Hi, 🙂
Did anybody successfully solve the issue?
We are facing the same issue and we really need to solve this. 🙁
Though I agree that memory consumption is very high, I wouldn't call this a memory "leak"; it's just that, imho, unexpected behaviour is happening.

Judging from the documentation, one would expect to be able to limit the cache used by ApolloClient with the `resultCacheMaxSize` property. This appears to be a setting used by the `optimism` dependency, and when monitoring that cache, it does indeed limit itself to the set number of entries. But that doesn't seem to be the whole story.
Correct me if I'm wrong here, but ApolloClient uses a cache through the EntityStore, which relies on the `Trie` class of `@wry/trie`. And that's the one making our Node.js process run out of memory: it doesn't seem to be limited anywhere, and it doesn't seem to be gc'ed automatically. The EntityStore cache appears to store a full layout of the client's received responses as objects in `Trie` instances.

In our case we run an ecommerce website in Next.js with a lot of products across different locales, and all of this data is stacked in the EntityStore's Root. This is very convenient, but it consumes so much memory that the process dies after a couple of hours: when requesting a couple of thousand Product Detail Pages that were not pre-built by Next.js (or that are invalidated at almost the same time), ApolloClient (running within the Next.js server) fetches a lot of data and stores it all in the EntityStore.
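To make that growth visible, one rough way to watch the EntityStore is to snapshot the normalized contents via `cache.extract()`; a sketch, where `cache` is assumed to be the server-side `InMemoryCache` instance:

```js
// extract() returns the normalized cache contents keyed by entity ID,
// so entity count and serialized size roughly track the memory growth.
const snapshot = cache.extract();
console.log(
  `entities: ${Object.keys(snapshot).length}, ` +
    `approx. bytes: ${JSON.stringify(snapshot).length}`
);
```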
I decided to go with a combination of configuration changes that seemed relevant:
```js
import { ApolloClient, createHttpLink, InMemoryCache } from '@apollo/client';

const link = createHttpLink({
  uri: process.env.GRAPHQL_ENDPOINT || 'http://localhost:5000/',
  credentials: 'include',
});

const cache = new InMemoryCache({
  resultCacheMaxSize: 10_000,
  typePolicies: {
    CommonData: {
      keyFields: false,
    },
    Cart: {
      keyFields: false,
    },
    Wishlist: {
      keyFields: false,
    },
  },
});

const apolloClient = new ApolloClient({
  ssrMode: typeof window === 'undefined',
  link,
  name: 'storefront',
  version: '1.0',
  cache,
  defaultOptions: {
    mutate: {
      fetchPolicy: 'no-cache',
    },
    query: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
    watchQuery: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
  },
});
```
And an additional `setInterval` to explicitly call `gc` on the cache:
```js
setInterval(() => {
  cache.gc({ resetResultCache: true, resetResultIdentities: true });
}, 1_000 * 60 * 15);
```
This seems to mitigate the chance of running out of memory due to huge numbers of `Trie` instances.
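As a side note, `cache.gc()` returns the IDs of the entities it evicted, which makes it easy to verify that each periodic collection pass is actually reclaiming anything:

```js
setInterval(() => {
  // gc() returns an array of evicted entity IDs, so logging the count
  // shows whether this collection pass reclaimed anything at all.
  const evicted = cache.gc({ resetResultCache: true, resetResultIdentities: true });
  console.log(`cache.gc evicted ${evicted.length} entities`);
}, 1_000 * 60 * 15);
```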
FYI:

```
"@apollo/client": "3.7.11",
"next": "12.3",
```
I too have been investigating a memory leak in an SSR context, and found retained objects that stick around even after manual garbage collection (via Chrome DevTools). These are strings and objects related to the variables of a GraphQL query (`client.query()`). I followed these up the retaining tree to `canonicalStringify`, and noticed retainers of both `WeakMap` entries and `Trie`.
What stuck out to me was that the `Trie` was using strong `Map`s underneath. I inspected one of these objects and saw that while `weakness` was set to true on the `Trie`, it had a `strong` field (of type `Map`).
I haven't fully wrapped my head around how `canonicalStringify` is supposed to work (or `@wry/trie`), but this looks unexpected to me, and my assumption is that these strong `Map` instances are the reason they are not being GCed from the `WeakMap` and `Trie` references.
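For context, the point of `canonicalStringify` is to produce one stable string for structurally equal variable objects so that equivalent variable sets can share a cache entry; a small sketch of the idea (assuming the `@apollo/client/utilities` export location used as of 3.9; in earlier versions it is internal):

```js
import { canonicalStringify } from '@apollo/client/utilities';

// Objects differing only in key order stringify identically, so two
// query calls with equivalent variables can share one cache entry.
// The Trie/Map machinery discussed above is what backs this lookup.
console.log(
  canonicalStringify({ a: 1, b: 2 }) === canonicalStringify({ b: 2, a: 1 })
); // true
```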
As mentioned in https://github.com/apollographql/apollo-client/issues/9699#issuecomment-1498885263, calling `cache.gc()` (on any `InMemoryCache` instance) clears this memory.
@joekur you might be interested in https://github.com/apollographql/apollo-client/pull/11254, which did some work to address the memory overhead that `canonicalStringify` currently incurs. That change was released in `3.9.0-alpha.2`. Feel free to give that a shot and see if you get better results!
@jerelmiller was just giving that a look. Gave it a local test, and it does indeed look to fix the issue. Nice work! 👏
@jerelmiller when can we expect a 3.9.0 release?
At this point we are targeting a release candidate late November/early December, and a final release about one or two weeks after that unless we get feedback that delays us.
Hey everyone! We released the beta containing our memory story this week - you can read all about it in the announcement blog post.
We would be very grateful if you could try it out and report your cache measurements to us so we can dial in the right default cache limits (more details in the blog post). :)
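If you're unsure how to collect those measurements, here's a minimal sketch, assuming the dev-only `getMemoryInternals` helper described in the blog post (available in development builds of the 3.9 beta):

```js
// In a development build, the client can report the sizes of its internal
// caches; the serialized output is the measurement the team is asking for.
console.log(JSON.stringify(client.getMemoryInternals(), null, 2));
```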
@phryneas any updated ETA on the 3.9.0 stable release?
@joekur we just shipped RC.1, and if nothing else comes up I would guess about a week.
We have recently released Apollo Client 3.9, which I believe should have fixed this issue, so I'm going to close this.
If this issue keeps persisting in Apollo Client >= 3.9.0, please open a new issue!
The official Apollo and Next.js recommendation is to create a new ApolloClient instance for each GraphQL request when SSR is used.

This shows good memory-usage results: memory grows by some amount and is then reset by the garbage collector to the initial level.

The problem is that this initial memory-usage level grows constantly, and the debugger shows that the leak is caused by the `InMemoryCache` object that is attached to the ApolloClient instance as cache storage.

We tried using the same `InMemoryCache` instance for all new Apollo instances, and tried to disable caching by customizing the policies in `defaultOptions`, but the leak is still present.

Is it possible to turn the cache off completely? Something like setting a `false` value for the `cache` option when initializing ApolloClient? Or maybe this is a known problem with a known solution that could be solved by customizing `InMemoryCache`?

We tried numerous options, such as forcing cache garbage collection, evicting objects from the cache, etc., but nothing helped; the leak is still here.
Thank you!
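For reference, a minimal sketch of the per-request setup described above (the endpoint URL and function name are illustrative). Note that `ApolloClient` requires a `cache` instance, so `cache: false` is not an option; per-operation `no-cache` fetch policies are the closest equivalent:

```js
import { ApolloClient, InMemoryCache } from '@apollo/client';

// One short-lived client per SSR request; nothing is read from or
// written to the normalized cache because every operation is no-cache.
function createRequestClient() {
  return new ApolloClient({
    ssrMode: true,
    uri: 'http://localhost:5000/', // illustrative endpoint
    cache: new InMemoryCache(),    // required: the cache option cannot be false
    defaultOptions: {
      query: { fetchPolicy: 'no-cache' },
      watchQuery: { fetchPolicy: 'no-cache' },
      mutate: { fetchPolicy: 'no-cache' },
    },
  });
}
```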