Open geekzhanglei opened 2 months ago
Same issue. When I deploy the service, the initial memory consumption starts at around 300 MB. However, after a few days of traffic, the memory usage gradually increases, eventually exceeding 1 GB.
Can someone create a heap snapshot of their app and share it with us?
That will help to triage the issue
> Can someone create a heap snapshot of their app and share it with us? That will help to triage the issue.
Here is my code. Even with client-side rendering only, server memory can still increase.
Here is my debug output:
Here are the corresponding snapshots and data:
- Heap-20240918T113529.heapsnapshot.zip
- Heap-20240918T113540.heapsnapshot.zip
- Heap-20240918T113547.heapsnapshot.zip
- Heap-20240918T113552.heapsnapshot.zip
- Heap-20240918T113557.heapsnapshot.zip
The last file is the result of running autocannon with 4k requests
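For reference, a run like that can be reproduced with autocannon's programmatic API. The exact invocation isn't shown in the thread, so the URL and connection count below are assumptions:

```js
// Hypothetical reconstruction of the load run; `amount` caps the total number
// of requests at 4000 ("4k requests"). Run from an ESM (.mjs) file.
import autocannon from 'autocannon';

const result = await autocannon({
  url: 'http://localhost:4321/', // assumed local server address
  connections: 10,               // assumed; autocannon's default
  amount: 4000,
});

console.log(result.requests.total, 'requests sent');
```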
@ematipico thanks
@geekzhanglei
After studying the heap snapshots, I concluded that the issue isn't Astro in your case. However, I believe it's normal to see increased memory usage, because the pages are lazily loaded when requested.
The SSR code looks like this:
```js
const _page0 = () => import('./pages/_image.astro.mjs');
const _page1 = () => import('./pages/post/post-b.astro.mjs');
const _page2 = () => import('./pages/post/post-body-used.astro.mjs');
const _page3 = () => import('./pages/post.astro.mjs');
const _page4 = () => import('./pages/_slug_/title.astro.mjs');
const _page5 = () => import('./pages/index.astro.mjs');
```
This means that when your server is deployed, it uses the minimum memory to bootstrap the server; then each time a page is rendered, Node.js caches the modules requested to render that page, and they will be reused the next time there's a hit.
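As a minimal illustration of that caching behaviour (a sketch; `./pages/index.astro.mjs` is just one of the generated page modules from the snippet above):

```js
// Node caches ES modules by resolved specifier, so a dynamic import only pays
// its memory cost the first time; later imports reuse the cached module.
const loadPage = () => import('./pages/index.astro.mjs');

const first = await loadPage();  // evaluated and cached on the first request
const second = await loadPage(); // resolved from the module cache
console.log(first === second);   // true: same module namespace object
```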
First snapshot, adapter memory
Last snapshot, adapter memory
Now, regarding the memory growing, I noticed there are a lot of `unhandledRejection` errors that pile up during the rendering of some pages (I don't know which ones). Some of them are thrown during the execution of the middleware, some of them when resolving props.
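For anyone trying to pin down where those rejections originate, a minimal sketch (an assumption, not something the adapter provides) would be to register a listener in whatever script starts the server:

```js
// Log every unhandled rejection with its stack so the offending route or
// middleware shows up in the server logs.
process.on('unhandledRejection', (reason) => {
  console.error('[unhandledRejection]', reason instanceof Error ? reason.stack : reason);
});
```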
Any idea what's causing the issue?
> Now, regarding the memory growing, I noticed there are a lot of `unhandledRejection` errors that pile up during the rendering of some pages (I don't know which ones). Some of them are thrown during the execution of the middleware, some of them when resolving props. Any idea what's causing the issue?
I took another look, and currently, there aren't any significant leads.
Firstly, my service routing is similar to https://domain/picture/123456, and the directory structure under the "pages" directory is as follows.
The lazy loading of pages is not something I have set up; it is automatically generated by the framework. Does this mean that URLs like https://domain/picture/123456 will inevitably cause an increase in memory usage?
Secondly, I have already disabled asynchronous requests in my code and am using mock data locally. It seems that these unhandled rejections are not from my business code.
Thank you for your attention. I will continue to investigate further.
After further investigation on my end, the issue only occurs when using Bun with the Node adapter. I switched to this Bun adapter: https://github.com/nurodev/astro-bun.
I didn't dig too deep, but I reproduced it locally with the provided StackBlitz example. In that example, possibly because of its simplicity, the memory increases very slowly, but it does increase as requests are received.
My steps were:
```sh
node --inspect dist/server/entry.mjs
```
Comparing snapshots 3 and 1 shows the initialization of the rendered component and everything needed for it to work. Looking at timeline 2, we can see a large allocation in the beginning that is not deallocated:
So far, everything is as expected.
Comparing snapshots 5 and 3, I'd expect no significant change, but around 300KB of extra memory was allocated and not freed. Looking at timeline 4, we see a constant rate of allocations (expected as requests come at a constant rate), and most of the allocations get deallocated within the time frame. But some of them are not being deallocated:
That is even with me forcing the garbage collection before stopping the timeline recording.
Here are the recordings if anyone else wants to analyze them: heap-investigation.tar.gz
If someone has a project where this is happening at a faster rate and can reproduce my steps above, it might help us distinguish the problem from noise (like Node optimizing and de-optimizing things).
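One way to collect comparable numbers on a real project would be something like the sketch below, added to whatever script starts the built server (the sampling interval and the signal choice are assumptions, not part of these steps):

```js
// Heap growth per batch of requests becomes visible in the logs, and a heap
// snapshot can be written on demand for comparison in Chrome DevTools.
import v8 from 'node:v8';

setInterval(() => {
  const { heapUsed, rss } = process.memoryUsage();
  console.log(
    `heapUsed=${(heapUsed / 1048576).toFixed(1)} MiB rss=${(rss / 1048576).toFixed(1)} MiB`
  );
}, 10_000).unref();

// `kill -USR2 <pid>` writes a Heap-*.heapsnapshot file in the working directory.
process.on('SIGUSR2', () => {
  console.log('wrote', v8.writeHeapSnapshot());
});
```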
> After further investigation on my end, the issue only occurs when using Bun with the Node adapter. I switched to this Bun adapter: https://github.com/nurodev/astro-bun.
I didn't use Bun; I only used the Astro Node adapter. In the production environment, I actually use Koa to fetch data and call the middleware generated by Astro.
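For context, wiring the Node adapter's middleware-mode handler into Koa could look roughly like the sketch below. It is based on the adapter's documented Express-style `handler` export; the Koa wrapper, port, and 404 fallback are assumptions, and the real setup described above also fetches data in Koa first:

```js
import Koa from 'koa';
import { handler as ssrHandler } from './dist/server/entry.mjs';

const app = new Koa();

app.use((ctx) => {
  // Let Astro's connect-style handler write directly to the raw Node response.
  ctx.respond = false;
  ssrHandler(ctx.req, ctx.res, () => {
    // Fallback when Astro has no matching route.
    ctx.res.statusCode = 404;
    ctx.res.end('Not found');
  });
});

app.listen(3000);
```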
> ... Here are the recordings if anyone else wants to analyze them: heap-investigation.tar.gz
> If someone has a project where this is happening at a faster rate and can reproduce my steps above, it might help us distinguish the problem from noise (like Node optimizing and de-optimizing things).
200 requests leave 300k of unreleased memory, while 100,000 requests will result in 150MB of memory usage. This seems to be abnormal. In my production environment, I have around 100,000 visits, and the memory increases by approximately 500MB per day. Currently, I am resolving this issue by periodically restarting the service.
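As a stopgap, the restart can at least be tied to a memory threshold instead of a schedule. For example, if the service runs under pm2 (an assumption, the thread doesn't say which process manager is used), something like:

```js
// ecosystem.config.cjs — hypothetical pm2 config; pm2 restarts the process
// once its memory exceeds the threshold instead of on a fixed timer.
module.exports = {
  apps: [
    {
      name: 'astro-ssr',              // assumed app name
      script: './dist/server/entry.mjs',
      max_memory_restart: '800M',     // assumed threshold
    },
  ],
};
```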
@geekzhanglei are you able to provide us with a heapsnapshot of your service?
> 200 requests leave 300k of unreleased memory, while 100,000 requests will result in 150MB of memory usage. This seems to be abnormal.
That is not what I measured. My experiment had 200 clients, sending over 1000 requests per second for 20 seconds, which is over 20,000 requests.
> > 200 requests leave 300k of unreleased memory, while 100,000 requests will result in 150MB of memory usage. This seems to be abnormal.
> That is not what I measured. My experiment had 200 clients, sending over 1000 requests per second for 20 seconds, which is over 20,000 requests.
Alright, I understand. I’ll think about it some more.
> @geekzhanglei are you able to provide us with a heapsnapshot of your service?
[Uploading Heap-before.heapsnapshot.zip…]() [Uploading Heap-after.heapsnapshot.zip…]()
@ematipico
Astro Info
If this issue only occurs in one browser, which browser is a problem?
No response
Describe the Bug
When using Astro’s SSR, I noticed that enabling client:load or client:idle causes my memory usage to continuously increase. Below is a memory monitoring graph from my production environment. My app is quite simple; here’s how I’ve written pages/my.astro:
In Theater.vue, I’m using nanostores for state management, and nothing else out of the ordinary. When I run load testing using autocannon, I can observe that memory usage goes up and doesn’t drop back to the initial levels after the load ends.
Has anyone else encountered this?
Additionally, when I analyze the memory using Chrome DevTools, I notice that the JavaScript heap shows a gradual increase in memory usage even when everything is rendered on the client side, as shown in the heap snapshots taken after two requests.
When I use Autocannon for the requests, there is a significant increase in memory after a large number of requests.
What's the expected result?
Memory is fully reclaimed after each request, and memory usage does not increase.
Link to Minimal Reproducible Example
https://stackblitz.com/edit/github-iew1od?file=src%2Fpages%2Findex.astro
Participation