Open heypiotr opened 6 days ago
Hey @heypiotr, thanks for opening the issue and repro repo. I'll take a look at it in the next few days and try to find the culprit here.
Discussed with @dsherret yesterday and we believe there's a codepath for dynamic imports that never frees memory if these imports are not actually loaded into V8, which would explain the increased memory consumption.
I will keep investigating this next week, and hopefully land a fix for Deno v2.1.
Version: Deno 2.0.3 and 1.30.0
It looks like the memory used while building the `dep_analysis_cache` is never released after the cache has been built.
Reproducible example: https://github.com/heypiotr/deno-dep-analysis-issue
On the first run, without `dep_analysis_cache`:
On the second run, with the dep cache already in place, memory usage is about 5x lower:
Context: We're using Deno for static site generation at Framer. An entrypoint to a Framer site is essentially a file with a lot of dynamic imports, one for each route on the site. We recently noticed that generating a very large site (much larger than the repro) ends up OOMing, and tracked it down to this. The problem is amplified by the fact that we run multiple Deno processes at the same time to parallelize generation on a multi-core machine, and they were all building the cache simultaneously. To work around this, we now spawn a single Deno process that imports the entrypoint and builds the cache, exit that process to free its memory, and only then spawn the actual generation processes.
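For illustration, here's a rough sketch of what an entrypoint of that shape looks like. The route list, file layout, and `render()` API below are hypothetical, not taken from the repro repo:

```typescript
// Hypothetical sketch of a Framer-style SSG entrypoint: one dynamic
// import per route. File names and the render() API are made up.
export function routeToModule(route: string): string {
  // "/" maps to the index module; every other route to a matching file.
  return route === "/" ? "./routes/index.ts" : `./routes${route}.ts`;
}

export async function renderSite(routes: string[]): Promise<void> {
  for (const route of routes) {
    // Each route's module is loaded lazily at generation time; it's
    // the analysis of these dynamic imports that appears to hold on
    // to memory even for routes never rendered in a given run.
    const page = await import(routeToModule(route));
    await page.render?.();
  }
}
```

With hundreds of routes, every generation process pays the analysis cost for all of them up front, which is why warming the cache once in a throwaway process helps.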
Separately, analyzing the dep graph for that large site takes ~30 seconds on the Lambda where we run this, so I'm wondering whether it would be possible to add an opt-out, at least for dynamic imports? I understand it could also help with reports like https://github.com/denoland/deno/issues/24132 and https://github.com/denoland/deno/issues/20945.