[Open] Bananicorn opened this issue 2 years ago
> The memory spikes up, as expected for a stress test, but then settles on the following values: 279mb -> 330mb -> 381mb -> 430mb, and refuses to go down any further.
Does it keep growing after ~430mb?
Sorry, that was a little unclear - yes, it'll just keep growing until it runs out of memory.
Of course the example with siege is a bit of an extreme one, because it makes ~1000 requests over the course of 30 seconds, but our staging environment crashes over the course of about an hour, with just the healthcheck calling the homepage every few minutes or so. (I don't know if I can share our client's theme here, but I think the same leak applies to the default themes.)
I can confirm I am having the same issue. This causes my local server to crash approx every 15-60 minutes.
Ok. Would you mind trying packages one by one to see which one has the leak (or if it's the core)? Then we can trace back to see the exact point in time when that problem was introduced.
I'm sorry, how would I go about that? The only thing in the packages folder at the moment is the theme itself, but that's probably not what you meant. I can remove packages from frontity.settings.js, but mostly that just results in the page crashing.
Is there a theme which only uses the core, so I can add the others step by step and see when it starts leaking?
Oh, I guess you can create a minimal theme then. Something like:
```jsx
import React from "react";

const Root = () => (
  <div>Empty theme</div>
);

const EmptyTheme = {
  name: "empty-theme",
  roots: {
    theme: Root,
  },
};

export default EmptyTheme;
```
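To run such a theme in isolation, a minimal frontity.settings.js along these lines should work (a sketch: the site name and URL are placeholders, and the commented-out packages can be re-added one at a time to find the leaking combination):

```javascript
const settings = {
  name: "leak-repro",
  state: {
    frontity: {
      url: "https://test.frontity.org",
    },
  },
  packages: [
    "empty-theme",
    // re-add these one at a time to narrow down the leak:
    // "@frontity/wp-source",
    // "@frontity/tiny-router",
    // "@frontity/html2react",
  ],
};

export default settings;
```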
I've got the minimum theme I could observe the leak on here: https://github.com/Bananicorn/frontity-memory-leaking-theme
I only managed to remove @frontity/html2react, because if I remove @frontity/wp-source or @frontity/tiny-router I can't reproduce it anymore.
It's leaking a bit more slowly now, about 2MB per 500 requests.
> if I remove @frontity/wp-source or @frontity/tiny-router I can't reproduce it anymore
So the memory leak only happens if both are present?
I can't really tell, since I didn't manage to get it to run without the router - at least not yet.
Okay, I got it to run with @frontity/wp-source and @frontity/tiny-router individually, and I can only really reproduce it when both are present.
It'd be great if we could narrow it down to the code.
Is it only the package presence? Only by using connect? Or when accessing a specific part of the state?
Hello @luisherranz, it's been a while, but I've got some more specific news this time. We've figured out our problem to an extent, but we don't have a real solution yet: it's the menu's fault. Since we stopped fetching our menu in the beforeSSR hook, our server has been running smoothly, without leaking. Of course, now we have the problem that the menu isn't there on page load, which is pretty bad for SEO, and it still blinks into existence, which isn't all too pretty either.
I've put together a small example in the test repo https://github.com/Bananicorn/frontity-memory-leaking-theme, just taking the minimal code from the official examples to fetch a menu from wordpress and display it.
```sh
FRONTITY_SOURCE_URL=https://test.frontity.org npm run inspect
```
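For reference, the kind of beforeSSR menu fetch described above can be sketched roughly like this (the package name "leaky-menu-theme" and the "/menu" route are illustrative placeholders, not the repo's actual code; the stub below just verifies that the hook calls actions.source.fetch):

```javascript
// Sketch of a theme package that prefetches a menu in beforeSSR.
const themePackage = {
  name: "leaky-menu-theme",
  actions: {
    theme: {
      // Frontity runs this hook on the server before rendering
      beforeSSR: async ({ actions }) => {
        await actions.source.fetch("/menu");
      },
    },
  },
};

// standalone check with a stubbed actions.source.fetch:
const fetched = [];
const fakeActions = {
  source: { fetch: async (route) => { fetched.push(route); } },
};
themePackage.actions.theme.beforeSSR({ actions: fakeActions });
console.log(`routes fetched during SSR: ${fetched.join(", ")}`);
```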
Hello all,
We are having the same memory leak issue after upgrading Frontity from 1.12.0 to 1.17.2. As shown below (12h period), while the Frontity application is idle, memory consumption increases linearly until at some point it crashes and restarts.
So far, I have taken a few heap snapshots with the production build and compared them, then investigated the positive deltas, but that did not help much, because numerous meaningless objects were listed.
After a while I tried the "Allocation Timeline" tool to watch allocations and GC activity on the go.
Searching for uses of the changedBeforeMounted property led me to a "useMemo" closure allocation in Frontity's connect library.
Clearly, if the connected Comp changes over time, useMemo will constantly allocate memory for a new closure.
My opinion (correct me if I am wrong): the GC does not collect these closures because the properties used in the closure are referenced from outside of it, so the GC mechanism sees them as actively used closures.
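The suspicion can be illustrated with a plain-Node sketch (this is not Frontity's actual connect code; makeConnected, bigState and retained are invented names): when each newly memoized closure stays reachable from outside, everything it captures stays alive too.

```javascript
// stands in for whatever holds on to the old closures externally
const retained = [];

function makeConnected(Comp) {
  // each call captures a fresh, sizable payload in the closure
  const bigState = new Array(100000).fill(Comp.name);
  const render = () => bigState.length; // closes over bigState
  retained.push(render); // external reference -> GC sees it as in use
  return render;
}

// if the connected Comp's identity changes on every render/request,
// a new closure (and its captured payload) is allocated each time
for (let i = 0; i < 50; i++) {
  makeConnected({ name: `Comp${i}` });
}

console.log(`closures retained: ${retained.length}`);
```

Each retained closure still pins its own 100k-element array, so heap usage grows with every "new" component even though the old ones are never rendered again.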
> if the connected Comp changes over time
Under what circumstances will the connected Comp component change over time?
Can you trace that down to a specific component to understand what's going on?
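To illustrate why that question matters: memoization keyed on the component only helps if the component's identity is stable across calls. A hypothetical sketch (memoConnect and cache are invented names, not Frontity's API):

```javascript
// cache keyed by the component function: hits only on the same reference
const cache = new Map();

function memoConnect(Comp) {
  if (!cache.has(Comp)) cache.set(Comp, { wrapped: () => Comp() });
  return cache.get(Comp).wrapped;
}

// a stable identity is memoized once, no matter how often it is connected
const Stable = () => "stable";
memoConnect(Stable);
memoConnect(Stable);
console.log(`entries with a stable identity: ${cache.size}`);

// recreating the component on every render/request gives a new identity
// each time, so the cache grows without bound
for (let i = 0; i < 10; i++) {
  memoConnect(() => "recreated");
}
console.log(`entries after 10 recreated identities: ${cache.size}`);
```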
Just wanted to confirm that for me the memory leak issue has become worse as well. My local server constantly crashes due to it. I have no idea what I have to do to provide more info about the problem to help find the cause. Let me know if I can contribute somehow.
Even though I don't fully understand the problem, if those flags indeed cause a memory leak, these should fix it:
@erdigokce, @DonKoko: would you mind testing it out? _You can just replace this file in your node_modules folder._
Upon installing Frontity locally with:

```sh
npx frontity create my-first-frontity-project
npm run build && npm run serve
```
It seems as if the Node application can't garbage collect all the memory allocated during requests. It happens naturally as the site is browsed, but can be seen a bit more clearly when stress testing with siege, using the following command:
```sh
timeout 30s siege http://localhost:3000/
```
The memory spikes up, as expected for a stress test, but then settles on the following values: 279mb -> 330mb -> 381mb -> 430mb, and refuses to go down any further. I thought it was only the case with my theme, since the Docker instance it's running in would periodically run out of memory - but upon removing all prefetching for SSR, it stopped, and it also seems to be happening with the default themes.
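A quick way to watch those numbers from inside the process itself, without attaching the inspector, is to log process.memoryUsage() periodically (a hypothetical helper, not part of Frontity; heapLine is an invented name):

```javascript
// convert bytes to megabytes with one decimal place
const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);

// snapshot of current heap usage and resident set size
function heapLine() {
  const { heapUsed, rss } = process.memoryUsage();
  return `heapUsed: ${mb(heapUsed)} MB, rss: ${mb(rss)} MB`;
}

console.log(heapLine());
// e.g. setInterval(() => console.log(heapLine()), 5000) in the server entry
// lets you watch whether heapUsed keeps climbing between requests
```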
For better local testing, I've also added this script to the package.json:

```json
"inspect": "frontity build && node --inspect ./node_modules/frontity/dist/src/cli/index.js serve"
```

which enables me to use the Node.js debugger in Chromium, but I can't really make sense of the memory dumps there, except that they steadily increase in size.

Locally, I'm running:
System:
Binaries:
Browsers:
npmPackages:
npmGlobalPackages:
I've tried searching the forums, but the closest thing I've found was this: https://community.frontity.org/t/npx-frontity-dev-aborted-core-dumped-javascript-heap-out-of-memory/1321/9 - but that seems like a slightly different problem.
Thank you for your time - and thank you for Frontity, it's been a joy to develop on so far!