apollographql / apollo-client

:rocket:  A fully-featured, production ready caching GraphQL client for every UI framework and GraphQL server.
https://apollographql.com/client
MIT License

Apollo Client + NextJS, getServerSideProps memory leak, InMemoryCache #9699

Closed: aaronkchsu closed this issue 7 months ago

aaronkchsu commented 2 years ago

The official Apollo and Next.js recommendation is to create a new ApolloClient instance each time a GraphQL request needs to be executed when SSR is used.

This gives good results for memory usage: memory grows by some amount and is then brought back to the initial level by the garbage collector.

The problem is that this initial memory level itself constantly grows, and the debugger shows the leak is caused by the "InMemoryCache" object attached to the ApolloClient instance as cache storage.

We tried using the same "InMemoryCache" instance for all new Apollo instances, and tried disabling caching by customizing the policies in "defaultOptions", but the leak is still present.

Is it possible to turn off the cache completely? Something like passing a "false" value for the "cache" option when initializing ApolloClient? Or is this a known problem with a known solution that could be addressed by customizing the "InMemoryCache"?

We tried numerous options, such as forcing cache garbage collection, evicting objects from the cache, etc., but nothing helped; the leak is still here.
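
For reference, the "disable caching via defaultOptions" attempt looked roughly like this (a minimal sketch, assuming an existing link; there is no cache: false option, so an InMemoryCache instance is still required):

// Sketch of the cache-bypass attempt, not the full production config
const client = new ApolloClient({
    cache: new InMemoryCache(),
    link,
    defaultOptions: {
        query: { fetchPolicy: "no-cache", errorPolicy: "all" },
        watchQuery: { fetchPolicy: "no-cache", errorPolicy: "all" },
        mutate: { fetchPolicy: "no-cache" },
    },
});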

Thank you!

// Imports implied by this snippet (errorLink, authLink, uri, isBrowser, typeDefs,
// resolvers, apolloCache and apolloClient are defined elsewhere in the module):
// import { ApolloClient, ApolloLink } from "@apollo/client";
// import { createUploadLink } from "apollo-upload-client";
// import { useMemo } from "react";
export function createApolloClient(apolloCache) {
    return new ApolloClient({
        cache: apolloCache,
        connectToDevTools: !!isBrowser,
        link: ApolloLink.from([
            errorLink,
            authLink,
            createUploadLink({ credentials: "include", uri }),
        ]),
        ssrMode: !isBrowser,
        typeDefs,
        resolvers,
        defaultOptions: {
            watchQuery: {
                fetchPolicy: "cache-first",
                errorPolicy: "all",
            },
            query: {
                fetchPolicy: "cache-first",
                errorPolicy: "all",
            },
        },
    });
}

export function initializeApollo(initialState = {}) {
    const _apolloCache = apolloCache || createApolloCache();

    if (!apolloCache) apolloCache = _apolloCache;

    // console.log("APOLLO_CACHE", apolloCache);
    // apolloCache.evict();
    apolloCache.gc();

    const _apolloClient = apolloClient || createApolloClient(_apolloCache);

    // For SSG and SSR always create a new Apollo Client
    if (typeof window === "undefined") return _apolloClient;
    // Create the Apollo Client once in the client
    if (!apolloClient) apolloClient = _apolloClient;

    return _apolloClient;
}

export function useApollo(initialState) {
    const store = useMemo(() => initializeApollo(initialState), [initialState]);
    return store;
}
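
Note: in the snippet above, initialState is accepted but never merged back into the cache. The official Next.js example rehydrates it with cache.restore(); a minimal sketch of that (simplified, reusing the same module-scoped variables) would be:

export function initializeApollo(initialState = null) {
    const _apolloClient = apolloClient || createApolloClient(createApolloCache());

    if (initialState) {
        // Rehydrate the cache with the state extracted in getServerSideProps
        _apolloClient.cache.restore(initialState);
    }

    // For SSG and SSR always create a new Apollo Client
    if (typeof window === "undefined") return _apolloClient;
    // Create the Apollo Client once in the client
    if (!apolloClient) apolloClient = _apolloClient;

    return _apolloClient;
}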

On the page:

export async function getServerSideProps(context) {
    const apolloClient = initializeApollo();
    const biteId =
        (context.query && context.query.id) ||
        (context.params && context.params.id);

    const realId = delQuery(biteId);

    try {
        await apolloClient.query({
            query: FETCH_BLERP,
            variables: { _id: realId },
            ssr: true,
        });
    } catch (err) {
        console.log("Failed to fetch blerp");
    }

    return {
        props: {
            _id: realId,
            initialApolloState: apolloClient.cache.extract(),
        },
    };
}
Falco-Boehnke commented 2 years ago

We have a similar issue with basically an identical setup. The response object keeps growing and growing; despite there being cached values, responses are always returned from the server.

jeffminsungkim commented 1 year ago

Having the same issue here.

mcousillas6 commented 1 year ago

Same issue here, any updates from the team?

bignimbus commented 1 year ago

Hi all, we are taking a fresh look at server-side rendering for release 3.8. See https://github.com/apollographql/apollo-client/issues/10231 for more details and please feel free to provide feedback 🙏🏻 As we get into the implementation details we'll have a better sense of how to tackle memory management in this new paradigm. Thanks for your patience!

aaronkchsu commented 1 year ago

Awesome thanks for the update and for all the movement happening on the project!!

samueldusek commented 1 year ago

Hi, 🙂

Did anybody successfully solve the issue?

We are facing the same issue and we really need to solve this. 🙁

y-a-v-a commented 1 year ago

Though I agree that memory consumption is very high, I wouldn't call this a memory "leak"; imho it is just unexpected behaviour that is happening...

Judging from the documentation, one would expect to be able to limit the cache used by ApolloClient with the resultCacheMaxSize property. This appears to be a setting used by the optimism dependency, and when monitoring that cache, it does indeed limit itself to the set number of entries. But that doesn't seem to be the whole story.

Correct me if I'm wrong here, but ApolloClient uses a cache through the EntityStore, which relies on the Trie class from @wry/trie. And that is what makes our Node.js process run out of memory: it doesn't seem to be limited anywhere, and it doesn't seem to be garbage-collected automatically. The EntityStore appears to store a full layout of the client's received responses as objects inside Trie instances.

In our case we run an e-commerce website in NextJS with a lot of products for different locales, and all of this data is stacked in the EntityStore's Root. This is very convenient, but it consumes so much memory that the process dies after a couple of hours: when a couple of thousand Product Detail Pages that were not pre-built by NextJS (or are invalidated at almost the same time) are requested, ApolloClient (running inside the NextJS server) fetches a lot of data and stores it all in the EntityStore.

I decided to go for a combination of configuration changes that seem to be related:

import { ApolloClient, createHttpLink, InMemoryCache } from '@apollo/client';

const link = createHttpLink({
  uri: process.env.GRAPHQL_ENDPOINT || 'http://localhost:5000/',
  credentials: 'include',
});

const cache = new InMemoryCache({
  // Cap on the memoized result cache (appears to be used by the optimism dependency)
  resultCacheMaxSize: 10_000,
  typePolicies: {
    // keyFields: false -> these types are embedded in their parent objects
    // instead of being normalized into their own cache entries
    CommonData: {
      keyFields: false,
    },
    Cart: {
      keyFields: false,
    },
    Wishlist: {
      keyFields: false,
    },
  },
});

const apolloClient = new ApolloClient({
  ssrMode: typeof window === 'undefined',
  link,
  name: 'storefront',
  version: '1.0',
  cache,
  defaultOptions: {
    mutate: {
      fetchPolicy: 'no-cache',
    },
    query: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
    watchQuery: {
      fetchPolicy: typeof window === 'undefined' ? 'no-cache' : 'cache-first',
      errorPolicy: 'all',
    },
  },
});

And an additional setInterval to explicitly call gc on the cache:

setInterval(() => {
  cache.gc({ resetResultCache: true, resetResultIdentities: true });
}, 1_000 * 60 * 15);

This seems to mitigate the chance of running out of memory due to huge numbers of Trie instances.

FYI:

"@apollo/client": "3.7.11",
"next": "12.3",
joekur commented 11 months ago

I too have been investigating a memory leak in an SSR context, and found retained objects that stick around even after manual garbage collection (via Chrome DevTools). These are strings and objects related to the variables of a GraphQL query (client.query()). I followed them up the retaining tree to canonicalStringify, and noticed retainers among both WeakMap entries and a Trie.

What stuck out to me was that the Trie was using strong Maps underneath. I inspected one of these objects, and saw that while weakness was set to true on the Trie, it had a strong field (of type Map).

See here: (two attached Chrome DevTools screenshots from 2023-10-19 showing the retainer tree)

I haven't fully wrapped my head around how canonicalStringify is supposed to work (or @wry/trie), but this looks unexpected to me, and my assumption is that these strong Map instances are the reason they are not being GCed from the WeakMap and Trie references.

As mentioned in https://github.com/apollographql/apollo-client/issues/9699#issuecomment-1498885263, calling cache.gc() (on any InMemoryCache instance) clears this memory.
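
For anyone needing a stopgap, a rough sketch (not an official recommendation) of calling gc() right after extracting the SSR state, following the per-request client pattern from earlier in this thread (FETCH_SOMETHING is a placeholder query):

export async function getServerSideProps(context) {
    const apolloClient = initializeApollo();

    await apolloClient.query({ query: FETCH_SOMETHING }); // placeholder query

    const initialApolloState = apolloClient.cache.extract();

    // Release the memoized result cache and canonicalStringify internals held by the cache
    apolloClient.cache.gc({ resetResultCache: true, resetResultIdentities: true });

    return { props: { initialApolloState } };
}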

jerelmiller commented 11 months ago

@joekur you might be interested in https://github.com/apollographql/apollo-client/pull/11254, which did some work to address the memory overhead that canonicalStringify currently incurs. That change was released in 3.9.0-alpha.2. Feel free to give that a shot and see if you get better results!

joekur commented 11 months ago

@jerelmiller was just giving that a look. Gave it a local test, and it does indeed look to fix the issue. Nice work! 👏

joekur commented 10 months ago

@jerelmiller when can we expect a 3.9.0 release?

phryneas commented 10 months ago

At this point we are targeting a release candidate late November/early December, and a final release about one or two weeks after that unless we get feedback that delays us.

phryneas commented 9 months ago

Hey everyone! We released the beta containing our memory story this week - you can read all about it in the announcement blog post.

We would be very grateful if you could try it out and report your cache measurements back to us so we can dial in the right default cache limits (more details in the blog post) :)
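
If it helps with reporting: a minimal sketch of collecting a measurement, assuming the dev-only getMemoryInternals() helper described alongside the 3.9 memory work (treat the exact name as per the blog post/docs):

// Sketch (assumption): development builds of Apollo Client >= 3.9 can report the
// sizes of the client's internal memoizing caches; optional chaining in case the
// helper is unavailable in a production build.
const report = client.getMemoryInternals?.();
console.log(JSON.stringify(report, null, 2));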

joekur commented 8 months ago

@phryneas any updated ETA on the 3.9.0 stable release?

phryneas commented 8 months ago

@joekur we just shipped RC.1, and if nothing else comes up I would guess about a week.

phryneas commented 7 months ago

We have recently released Apollo Client 3.9, which I believe should have fixed this issue, so I'm going to close this.

If this issue persists in Apollo Client >= 3.9.0, please open a new issue!

github-actions[bot] commented 7 months ago

Do you have any feedback for the maintainers? Please tell us by taking a one-minute survey. Your responses will help us understand Apollo Client usage and allow us to serve you better.

github-actions[bot] commented 6 months ago

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. For general questions, we recommend using StackOverflow or our discord server.