ath88 opened 5 years ago
@ath88 Thanks for posting! We'll take a look as soon as possible.
In the meantime, there are a few ways you can help speed things along:
Please remember: never post in a public forum if you believe you've found a genuine security vulnerability. Instead, disclose it responsibly.
For help with questions about Sails, click here.
Hey, @ath88!
Memory leaks are no joke, so we appreciate the community helping us keep an eye out for them. The standard procedure for identifying and reporting memory leaks is as follows:
1. Check that the app you're testing on uses the recommended production settings, as described in the deployment and scaling sections of the docs.
2. Review your app-level code, and try to narrow down the leak to a single endpoint. It isn't unusual for Node apps to end up with memory leaks resulting from unhandled errors.
3. If you're unable to locate the specific endpoint or bit of app code responsible for your increasing memory usage, try replicating the memory leak in a fresh Sails app with no bells or whistles, using the production settings recommended in the deployment docs. If you are able to replicate the memory leak in the fresh Sails app, please create an example repo reproducing the problem. Include standard version info and replication instructions.
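For step 2 in particular, a rough sketch of one way to attribute heap growth to a single route; the host, port, and route path below are placeholders, not anything from the original report:

```js
// Hammer one suspect endpoint and log heap usage at intervals, so that any
// steady growth can be attributed to that route.
const http = require('http');

function hitEndpoint(path) {
  return new Promise((resolve, reject) => {
    http.get({ host: 'localhost', port: 1337, path }, (res) => {
      res.resume();                       // drain the body so the socket can be reused
      res.on('end', resolve);
    }).on('error', reject);
  });
}

(async () => {
  for (let i = 1; i <= 10000; i++) {
    await hitEndpoint('/some/suspect/endpoint');   // hypothetical route
    if (i % 500 === 0) {
      const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
      console.log(`after ${i} requests: ${mb} MB heap used`);
    }
  }
})();
```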
Hello madisonhicks, and thanks for the reply. :)
We are not running tests with NODE_ENV=production. If Sails leaks memory during tests, I would still consider that an error, no? That said, I just tried running the sample I supplied in my original post with NODE_ENV=production, and the process shows the same growing memory footprint. The rest of the recommended production settings mostly relate to connectivity configuration.
There are literally only 16 lines of app-level code in my sample (unless you mean something else), so it would be difficult for me to narrow it down further without looking into the Sails source.
See original post, run it with NODE_ENV=production.
There is a curious console.log when running it with NODE_ENV=production, warning me that "connect.session() MemoryStore is not designed for a production environment, as it will leak memory, and will not scale past a single process". My initial assumption is that it doesn't so much leak memory as fail to invalidate sessions, which then stay in memory for the duration of the process. In the example posted, no sessions are created, so my assumption is that this isn't the culprit.
@ath88 Thanks for all the detailed info and examples so far. If you don't mind, can you make us a repro of this with a new sails app with minimal content to analyze & further understand these OOM errors? We're curious to see greater detail on our end.
@johnabrams7 What do you mean by 'a repro in a new sails app'? :)
@ath88 I mean a repo that reproduces this issue in a new sails app.
So - you want a GitHub repo with just that one script? I can do that.
Yes, that should work. Essentially a minimal app that has the test you're running to see this, so we can try it out ourselves.
Here you go: https://github.com/ath88/sails-oom-test

Just do `npm i` and `npm start`.
Awesome, many thanks!
@ath88 I was able to reproduce the memory leak on our end and will have this further investigated with the team for what's causing it, along with a resolution. Will get back to you with further updates.
@johnabrams7 would you try again using a manual gc(), or let it run more times in the hope of reproducing the out-of-memory error?
@mikermcneil For sure, I was able to reproduce the memory error twice by letting it run until the crash. This occurs for both the original repo and the PR (screens from the PR):
Last few memory usage logs before the crash.
I verified that lifting and lowering Sails programmatically 50,000 times causes the process to run out of memory.
For the record, I just inserted a manual `gc()` for every lift and lower, using `--expose_gc`, and the same pattern shows.
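For reference, a sketch of what that check looks like (the actual repro lives in the linked repo; config values here are placeholders):

```js
// Run with:  node --expose_gc repro.js
// global.gc() is only defined when --expose_gc is passed. Forcing a collection
// after every lift/lower cycle means any growth that remains is a real leak,
// not just garbage the collector hasn't gotten to yet.
const Sails = require('sails').constructor;

function liftLowerAndCollect(done) {
  const app = new Sails();
  app.lift({ port: 1338, log: { level: 'silent' } }, (liftErr) => {
    if (liftErr) { return done(liftErr); }
    app.lower((lowerErr) => {
      if (lowerErr) { return done(lowerErr); }
      if (global.gc) { global.gc(); }   // the manual gc() mentioned above
      const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
      console.log(`heap after forced gc: ${mb} MB`);
      done();
    });
  });
}

(function run(i) {
  if (i > 1000) { return; }
  liftLowerAndCollect((err) => {
    if (err) { throw err; }
    run(i + 1);
  });
})(1);
```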
@ath88 Thanks for the follow-up, was that with Mike's PR changes or the original version?
@johnabrams7 With both, now.
@ath88 At the moment, I've documented this in our development plans to improve memory handling for programmatically lifting and lowering Sails (e.g. stress testing the process memory handling and similar). We welcome additional input and experiences for related applications and scenarios that programmatically lift and lower Sails in a similar fashion.
My personal use case for lifting and lowering Sails in a similar fashion is the setup and teardown of tests. Each test should have a predictable state, unrelated to any previous tests, so a clean Sails application is preferred. I hope that makes sense to you. I can't think of other use cases right now.
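For anyone landing here with the same use case, the pattern looks roughly like this; a sketch assuming Mocha and the documented programmatic constructor, with placeholder test names and config:

```js
// Each test gets a freshly lifted Sails app and tears it down afterwards, so
// no state carries over between tests. With enough tests, the leak discussed
// in this thread shows up as steadily growing memory in the test runner.
const Sails = require('sails').constructor;

describe('some feature (example)', function () {
  this.timeout(30000);   // lifting can take a few seconds
  let app;

  beforeEach((done) => {
    app = new Sails();
    app.lift({ port: 1338, log: { level: 'error' } }, done);
  });

  afterEach((done) => {
    app.lower(done);
  });

  it('responds to a request', (done) => {
    // ...exercise http://localhost:1338 here...
    done();
  });
});
```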
@ath88 Makes sense, a perfectly valid use case. More advanced & powerful memory handling could ultimately help with predictability and stability further down the line as the framework advances.
Hello!
Long story short, my place of work is upgrading a Sails application from 0.12 to 1.4 (old, I know). We are encountering the exact issue @ath88 described in this thread, summarized here: https://github.com/balderdashy/sails/issues/6804#issuecomment-513448842.
Have there been any updates/developments or workaround discovered to address this issue since 2019?
Thanks in advance!
In the repo that seeks to reproduce this, a couple of things (it's archived, so I'll comment here):

- `sails.lift()` doesn't support `await`
- `sails.lower()` doesn't support `await`
- the `for` loop here doesn't actually wait for anything, so it doesn't lift the apps in series like it seems it would; it actually lifts them all in parallel
- so, given that, it makes sense that 500,000 Sails instances running at once is going to overwhelm RAM
Side note: that said, these programmatic methods should absolutely support `await`; I'd be happy to merge a PR to Sails that implements that using https://npmjs.com/package/parley
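In the meantime, here is a sketch of how the repro loop could be made to wait for each cycle today by promisifying the callback-style methods; this is a workaround, not the parley-based API suggested above, and the config values are placeholders:

```js
// Wrap lift/lower in promises so the for loop genuinely runs in series:
// each app is fully lowered before the next one is lifted.
const util = require('util');
const Sails = require('sails').constructor;

async function liftAndLowerInSeries(iterations) {
  for (let i = 1; i <= iterations; i++) {
    const app = new Sails();
    const lift = util.promisify(app.lift).bind(app);
    const lower = util.promisify(app.lower).bind(app);

    await lift({ port: 1338, log: { level: 'silent' } });
    await lower();

    const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
    console.log(`iteration ${i}: ${mb} MB heap used`);
  }
}

liftAndLowerInSeries(500).catch((err) => {
  console.error(err);
  process.exit(1);
});
```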
If anyone gets a chance to update @ath88's proof so the lifting/lowering doesn't all run in parallel, that would make this simulation more realistic. Scratch that! @eashaw just noticed that apparently I already updated @ath88's repo at some point to do just this. @eashaw or I will take a peek at that and report back.
I think I remember back when I opened the issue, that I found something in sails that messes with `require.cache`. I know that probably isn't much to go on, but I thought I might as well mention it.
> I think I remember back when I opened the issue, that I found something in sails that messes with `require.cache`. I know that probably isn't much to go on, but I thought I might as well mention it.
@ath88 this happens in the `include-all` module, which is called at least once during sails startup.

@mikermcneil you've given some background on why this happens at https://github.com/balderdashy/sails/issues/4316#issuecomment-376001225; is there currently an option to disable it?
> I think I remember back when I opened the issue, that I found something in sails that messes with `require.cache`.
@ath88 Great pointer, thanks - this looks like it's spot on, at least for most of the memory leak.
I've adapted the examples above at https://github.com/alxndrsn/sails-memory-leak to demonstrate the memory leak, and to test the effects of disabling `include-all`'s cache deletion for all or some of its calls.
Results I'm getting:
| include-all deletes require.cache? | heap change after 500 iterations |
| --- | --- |
| always | 633.65 MB |
| never | 1.28 MB |
| only when loading /config | 633.60 MB |
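To make the mechanism behind those numbers concrete, here is a standalone sketch; `lodash` is just a stand-in for any sizeable dependency, and the `held` array plays the role of anything in the app that keeps a handle on the old copy:

```js
// When an entry is deleted from require.cache, the next require() re-reads and
// re-evaluates the module, producing a brand-new exports object. If anything
// still references the previous copy, it can never be collected, so the heap
// grows with every reload.
const held = [];

for (let i = 1; i <= 500; i++) {
  held.push(require('lodash'));                      // keep a handle on this copy

  // Simulate the cache-clearing behaviour: the next iteration's require()
  // loads a fresh module instance instead of reusing the cached one.
  delete require.cache[require.resolve('lodash')];

  if (i % 100 === 0) {
    const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
    console.log(`${i} reloads: ${mb} MB heap used`);
  }
}
```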
This `require.cache`-clearing seems like surprising behaviour from sails, given that node's docs say:

> Modules are cached after the first time they are loaded. This means (among other things) that every call to `require('foo')` will get exactly the same object returned, if it would resolve to the same file. This is an important feature.
Apart from the memory leak, it also means that any expectation you had that your source files will be parsed/run exactly once may be violated.
I stumbled upon that very problem when using mongoose, since it utilizes a singleton pattern stored in the module's scope. We did some database interactions before starting sails, which resulted in two connections instead of just one.
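A self-contained sketch of that singleton failure mode, using a throwaway module written to disk instead of mongoose (the file name and the "connection" object are purely illustrative):

```js
// A module-scoped singleton only stays a singleton while its require.cache
// entry survives. Clearing the cache re-runs the module, giving it a fresh
// `connection` variable, i.e. a second "connection".
const fs = require('fs');
const path = require('path');

const dbPath = path.join(__dirname, 'db-singleton-demo.js');   // throwaway demo module
fs.writeFileSync(dbPath, `
  let connection = null;                                        // module-scoped singleton
  module.exports.connect = function () {
    if (!connection) { connection = { id: Math.random() }; }    // stand-in for a real connection
    return connection;
  };
`);

const first = require(dbPath).connect();

// Something (e.g. a require.cache-clearing loader) wipes the entry...
delete require.cache[require.resolve(dbPath)];

// ...so this require() re-evaluates the file and connect() builds a new object.
const second = require(dbPath).connect();

console.log(first === second);   // false: two "connections" instead of one
fs.unlinkSync(dbPath);
```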
Node version: v10.16.0
Sails version (sails): 1.2.3
ORM hook version (sails-hook-orm): Not using this.
Sockets hook version (sails-hook-sockets): Not using this.
Organics hook version (sails-hook-organics): Not using this.
Grunt hook version (sails-hook-grunt): Not using this.
Uploads hook version (sails-hook-uploads): Not using this.
DB adapter & version (e.g. sails-mysql@5.55.5): Not using this.
Skipper adapter & version (e.g. skipper-s3@5.55.5): Not using this.
I've looked through all issues related to memory leaks, but none of them describes my case.
When testing a sails application, we 'lift' and 'lower' for each test. We have an extensive test suite, and we now experience OOM errors when running tests. We use the supported constructor as mentioned here: https://sailsjs.com/documentation/concepts/programmatic-usage
I managed to make a tiny script that clearly shows the increasing memory footprint.
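A minimal sketch of that kind of script, assuming the documented constructor API; the config values are placeholders rather than the exact script from the repo:

```js
// Lift and lower a bare Sails app over and over via the documented
// programmatic constructor, logging heap usage so the growth is visible.
const Sails = require('sails').constructor;

function liftAndLower(done) {
  const app = new Sails();
  app.lift({ port: 1338, log: { level: 'silent' } }, (liftErr) => {
    if (liftErr) { return done(liftErr); }
    app.lower(done);
  });
}

(function run(i) {
  const mb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(2);
  console.log(`iteration ${i}: ${mb} MB heap used`);
  liftAndLower((err) => {
    if (err) { throw err; }
    run(i + 1);          // keeps going until the process eventually runs out of memory
  });
})(1);
```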
It takes a while, but eventually it runs out of memory. Some memory is clearly being freed by the garbage collector, but some is never freed. Output from my console is:
As far as I can see from snapshots with Chrome DevTools, `async` is getting re-required and never removed from memory. It would be awesome to eliminate this memory leak, so we can continue our testing practices with the supported 'programmatic usage'.