withastro / astro


Is there a memory leak in Astro's SSR? #11951

Open · geekzhanglei opened 2 months ago

geekzhanglei commented 2 months ago

Astro Info

Astro                    v4.15.2
Node                     v20.12.2
System                   macOS (x64)
Package Manager          npm
Output                   server
Adapter                  @astrojs/node
Integrations             @astrojs/vue


Describe the Bug

When using Astro's SSR, I noticed that enabling client:load or client:idle causes memory usage to increase continuously. Here is a memory-monitoring graph from my production environment: [screenshot: production memory usage climbing over time]. My app is quite simple; here's how I've written pages/my.astro:

In Theater.vue I'm using nanostores for state management, and nothing else out of the ordinary. When I run load tests with autocannon, I can observe that memory usage goes up and doesn't drop back to its initial level after the load ends.
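For reference, the load test looked roughly like this (a sketch using autocannon's programmatic API; the URL and numbers here are illustrative, not my exact production setup):

// load-test.mjs: drive the SSR endpoint while watching the server's memory separately
import autocannon from 'autocannon';

const result = await autocannon({
  url: 'http://localhost:4321/', // assumed local Astro SSR endpoint
  connections: 100,              // concurrent connections
  duration: 20,                  // seconds
});

console.log('total requests:', result.requests.total);
console.log('average latency (ms):', result.latency.average);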

Has anyone else encountered this?

Additionally, when I analyze the memory using Chrome DevTools, the JavaScript heap snapshot shows a gradual increase in memory usage even when the code is rendered entirely on the client side, as shown in the comparison of heap snapshots after two requests: [screenshot: comparison of two heap snapshots]

When I use autocannon to drive the requests, there is a significant increase in memory after a large number of requests: [screenshot: memory after a large autocannon run]

What's the expected result?

Memory is fully reclaimed after each request, and memory usage does not increase.

Link to Minimal Reproducible Example

https://stackblitz.com/edit/github-iew1od?file=src%2Fpages%2Findex.astro

ShrJamal commented 2 months ago

Same issue. When I deploy the service, the initial memory consumption is around 300 MB. However, after a few days of traffic, memory usage gradually increases, eventually exceeding 1 GB.

ematipico commented 2 months ago

Can someone create a heap snapshot of their app and share it with us?

That will help to triage the issue
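If it helps, a snapshot can be captured without attaching DevTools by using Node's built-in v8 module (a sketch; the filename is illustrative):

// snapshot.mjs: write a .heapsnapshot file that can be loaded in Chrome DevTools (Memory tab)
import v8 from 'node:v8';

const file = v8.writeHeapSnapshot(`heap-${Date.now()}.heapsnapshot`);
console.log('snapshot written to', file);

Alternatively, starting the server with `node --heapsnapshot-signal=SIGUSR2` lets you write a snapshot on demand by sending that signal to the process.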

geekzhanglei commented 2 months ago

Can someone create a heap snapshot of their app and share it with us?

That will help to triage the issue

Here is my code; even with client-side rendering only, server memory still increases: [screenshot: page source]

Here is my debugging session: [screenshots: DevTools memory profiles]

Here are the corresponding snapshots and data: Heap-20240918T113529.heapsnapshot.zip, Heap-20240918T113540.heapsnapshot.zip, Heap-20240918T113547.heapsnapshot.zip, Heap-20240918T113552.heapsnapshot.zip, Heap-20240918T113557.heapsnapshot.zip

The last file is the result of running autocannon with 4k requests

@ematipico thanks

ematipico commented 2 months ago

@geekzhanglei

After studying the heap snapshots, I concluded that the issue isn't Astro. In your case, I believe some increase in memory is normal, because pages are lazily loaded when requested.

The SSR code looks like this:

const _page0 = () => import('./pages/_image.astro.mjs');
const _page1 = () => import('./pages/post/post-b.astro.mjs');
const _page2 = () => import('./pages/post/post-body-used.astro.mjs');
const _page3 = () => import('./pages/post.astro.mjs');
const _page4 = () => import('./pages/_slug_/title.astro.mjs');
const _page5 = () => import('./pages/index.astro.mjs');

This means that when your server is deployed, it uses the minimum memory needed to bootstrap. Then, each time a page is rendered, Node.js caches the modules required to render it, and they are reused the next time that page is hit.
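As a minimal illustration (not the project's code) of why memory settles after the first hit to each page: repeated dynamic imports of the same specifier reuse the cached module.

// run inside the server build directory; the page path matches the generated entries above
const loadPage = () => import('./pages/index.astro.mjs');

const first = await loadPage();
const second = await loadPage();
console.log(first === second); // true: the same cached module namespace is reused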

First snapshot, adapter memory: [screenshot]

Last snapshot, adapter memory: [screenshot]

Now, regarding the memory growth: I noticed there are a lot of unhandledRejection errors that pile up during the rendering of some pages (I don't know which ones). Some of them are thrown during the execution of the middleware, and some when resolving props.

Any idea what's causing the issue?
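To help pin those down, a process-level listener could be added to the custom server entry (a sketch using Node's standard process events, not something Astro wires up by itself):

// log every promise rejection that nothing handled, with its reason/stack
process.on('unhandledRejection', (reason) => {
  console.error('[unhandledRejection]', reason);
  // only log here: keeping references to the rejected promises would itself retain memory
});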

geekzhanglei commented 2 months ago

Now, regarding the memory growth: I noticed there are a lot of unhandledRejection errors that pile up during the rendering of some pages (I don't know which ones). Some of them are thrown during the execution of the middleware, and some when resolving props.

Any idea what's causing the issue?

I took another look, and currently, there aren't any significant leads.

Firstly, my service routes look like https://domain/picture/123456, and the directory structure under the pages directory is as follows: [screenshot: pages directory tree]

The lazy loading of pages is not something I have set up; it is automatically generated by the framework. Does this mean that URLs like https://domain/picture/123456 will inevitably cause an increase in memory usage?

Secondly, I have already disabled asynchronous requests in my code and am using mock data locally. It seems that these unhandled rejections are not from my business code.

Thank you for your attention. I will continue to investigate further.

ShrJamal commented 2 months ago

After further investigation on my end, the issue only occurs when using Bun with the Node adapter. I switched to this Bun adapter: https://github.com/nurodev/astro-bun.

Fryuni commented 2 months ago

I didn't dig too deep, but I reproduced it locally with the provided StackBlitz example. In that example, possibly because of its simplicity, the memory increases very slowly, but it does increase as requests are received.

My steps were:

Comparing snapshots 3 and 1 shows the initialization of the rendered component and everything needed for it to work. Looking at timeline 2, we can see a large allocation at the beginning that is not deallocated: [screenshot: allocation timeline]

So far, everything is as expected.

Comparing snapshots 5 and 3, I'd expect no significant change, but around 300 KB of extra memory was allocated and not freed. Looking at timeline 4, we see a constant rate of allocations (expected, as requests come at a constant rate), and most of the allocations are deallocated within the time frame. But some of them are not: [screenshot: allocation timeline]

That is even with me forcing the garbage collection before stopping the timeline recording.
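For reference, the kind of check I mean looks roughly like this when done from inside the server process (a sketch: start the server with `node --expose-gc`; the interval is arbitrary):

// periodically force a collection and log what the heap settles at between bursts of traffic
setInterval(() => {
  globalThis.gc?.(); // gc() is only defined when Node is started with --expose-gc
  const { heapUsed } = process.memoryUsage();
  console.log(`heapUsed after GC: ${(heapUsed / 1024 / 1024).toFixed(1)} MB`);
}, 10_000);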

Here are the recordings if anyone else wants to analyze them: heap-investigation.tar.gz

If someone has a project where this is happening at a faster rate and can reproduce my steps above, it might help us distinguish the problem from noise (like Node optimizing and de-optimizing things).

geekzhanglei commented 1 month ago

After further investigation on my end, the issue only occurs when using Bun with the Node adapter. I switched to this Bun adapter: https://github.com/nurodev/astro-bun.

I didn't use Bun; I only used the Astro Node adapter. In the production environment, I actually use Koa to fetch data and call the middleware generated by Astro.
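For context, the middleware-mode wiring from the adapter docs looks roughly like this (Express is shown there; a Koa setup would wrap the same exported handler, and the paths assume the default build output):

// astro.config.mjs: enable middleware mode for the Node adapter
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';

export default defineConfig({
  output: 'server',
  adapter: node({ mode: 'middleware' }),
});

// server.mjs: mount the generated handler in a custom server
import express from 'express';
import { handler as ssrHandler } from './dist/server/entry.mjs';

const app = express();
app.use('/', express.static('dist/client/'));
app.use(ssrHandler);
app.listen(4321);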

geekzhanglei commented 1 month ago

... Here are the recordings if anyone else wants to analyze them: heap-investigation.tar.gz

If someone has a project where this is happening at a faster rate and can reproduce my steps above, it might help us distinguish the problem from noise (like Node optimizing and de-optimizing things).

200 requests leave 300 KB of unreleased memory, so 100,000 requests would result in about 150 MB. This seems abnormal. In my production environment I get around 100,000 visits, and memory increases by approximately 500 MB per day. Currently I am working around this by periodically restarting the service.
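For what it's worth, the periodic restart can be automated with a process manager's memory threshold instead of a scheduled restart (a sketch assuming pm2; the entry path and limit are illustrative):

// ecosystem.config.js: restart the SSR process before the leak grows unbounded
module.exports = {
  apps: [
    {
      name: 'astro-ssr',
      script: './dist/server/entry.mjs', // assumed Node adapter build output
      max_memory_restart: '800M',
    },
  ],
};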

ematipico commented 1 month ago

@geekzhanglei are you able to provide us with a heapsnapshot of your service?

Fryuni commented 1 month ago

200 requests leave 300 KB of unreleased memory, so 100,000 requests would result in about 150 MB. This seems abnormal.

That is not what I measured. My experiment had 200 clients, sending over 1000 requests per second for 20 seconds, which is over 20,000 requests.

geekzhanglei commented 1 month ago

200 requests leave 300 KB of unreleased memory, so 100,000 requests would result in about 150 MB. This seems abnormal.

That is not what I measured. My experiment had 200 clients, sending over 1000 requests per second for 20 seconds, which is over 20,000 requests.

Alright, I understand. I’ll think about it some more.

geekzhanglei commented 1 month ago

@geekzhanglei are you able to provide us with a heapsnapshot of your service?

  1. Heap-before.heapsnapshot is a snapshot of a certain state while the service was running.
  2. Heap-after.heapsnapshot is the snapshot taken after visiting a new URL: localhost:4321/picture/817298362.

Heap-before.heapsnapshot.zip · Heap-after.heapsnapshot.zip

@ematipico