Closed: prm-dan closed this issue 2 years ago
Hi! Any updates?
A bit late to answer but just in case someone else runs into this.
I was having the same issue. It's not really a bug but a limitation of serverless-offline: it isn't an exact simulation of the Lambda environment, so there is no container reuse.
https://github.com/dherault/serverless-offline/issues/363
Hope it helps!
After turning on useWorkerThreads, I do see reuse / in-memory caching of my handlers. There seem to be several handler instances; each appears to be picked at random and initializes only on its first call.
Thanks!
I guess I have the same problem as you. I am running NestJS in a Lambda function. On every request NestJS reinitializes itself; the cache is not used.
https://stackoverflow.com/questions/72422079/nestjs-serverless-app-keeps-reinitiating-itself
I also have this issue with a database connection. When I store the client (or a boolean flag) in a public variable and try to set it, it doesn't work: the value is always null on the next request. But when I use mongoose.connection.readyState to check whether the connection is established, I get the correct value back. I don't understand this behaviour.
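For reference, here is a minimal sketch of the usual Lambda connection-caching pattern (`connectToDb`, `initCount`, and `getInitCount` are illustrative names, not from this thread; `connectToDb` stands in for a real `mongoose.connect()` call so the snippet runs on its own). In a warm Lambda container the module-level variable survives between invocations, so the connection is created once; when the whole module is re-executed per request, the cached value starts out null every time, which matches the behaviour described above:

```javascript
// Module scope: survives warm invocations in a real Lambda container.
let cachedConnection = null;
let initCount = 0; // counts how many times we actually "connected"

// Stand-in for mongoose.connect(); returns an object shaped like
// mongoose.connection (readyState 1 = connected).
async function connectToDb() {
  initCount += 1;
  return { readyState: 1 };
}

async function handler(event) {
  if (!cachedConnection) {
    cachedConnection = await connectToDb(); // cold start only
  }
  return { statusCode: 200, readyState: cachedConnection.readyState };
}

const getInitCount = () => initCount;
exports.handler = handler;
exports.getInitCount = getInitCount;
```

Calling the handler twice in the same process initializes the connection only once; when the module is re-executed for every request instead, the counter and the cache are reset each time, so any flag you set in module or class scope reads back as null.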
it also depends on how you are using serverless-offline, as well as which mode (in-process, child process, worker threads). the current in-process implementation is a bit faulty and causes memory leaks. I'm planning on combining the modes as well as switching the default to worker threads, which are now supported by all supported node.js versions. worker threads don't leak memory and are the best fit for the job.
v9 was released a couple of days ago. could you check if that solves your problems? as mentioned above, worker threads are now used by default. if you want the handler to reload during development you can use the --reloadHandler flag, which creates new lambda instances and discards previous ones.
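For convenience, the same option can presumably also be set in serverless.yml; this assumes the usual serverless-offline convention that CLI flags can be mirrored under `custom.serverless-offline`, so verify against the v9 docs:

```yaml
custom:
  serverless-offline:
    # equivalent to passing --reloadHandler on the CLI (v9+):
    # discards the previous lambda instance and creates a fresh one
    # so handler changes are picked up during development
    reloadHandler: true
```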
closing this issue in the meanwhile. please open a new issue if this does not solve your problems.
I'm not sure if this is by design. I posted this question to Stack Overflow earlier today.
https://stackoverflow.com/questions/61067099/why-does-serverless-offline-re-execute-my-whole-js-file-for-a-handler
Bug Report
I'm running apollo-server-lambda locally using serverless-offline. Even though the handler is exported once, serverless-offline fully recreates the ApolloServer for every request (which causes my knex to create new DB connections and leak them). I'd expect it to keep the same ApolloServer.
Current Behavior
When I make a request to a server running `sls offline start`, the JS that creates the handler gets fully re-executed each time. Each of those requests stays open: if I attach an exit handler, it is not called until I kill the sls offline server.

Sample Code
I copied this tutorial. I'm guessing this re-create behavior is the default for serverless-offline. Before I set up a full reproduction, I'll file the bug and see if this is a common issue. https://medium.com/@gannochenko/how-to-use-graphql-apollo-server-with-serverless-606430ad94b3
Expected behavior/code

I'd expect the code not to recreate the ApolloServer on each request.
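To make the expectation concrete, here is a self-contained sketch; `FakeApolloServer` is a stand-in for `ApolloServer` from apollo-server-lambda so the snippet runs without the package, and the construction counter is purely illustrative. Building the server at module scope should mean one construction per container, no matter how many requests the exported handler serves:

```javascript
// Counts how many times the "server" is constructed.
let constructions = 0;

// Stand-in for ApolloServer: constructing it is the expensive step
// (schema building, knex pool creation, etc. in the real thing).
class FakeApolloServer {
  constructor() {
    constructions += 1;
  }
  createHandler() {
    return async (event) => ({ statusCode: 200, body: 'ok' });
  }
}

// Module scope: in a warm Lambda container this runs once. Re-executing
// the whole module per request is what recreates the server (and leaks
// DB connections) as described in this report.
const server = new FakeApolloServer();
const handler = server.createHandler();
exports.handler = handler;
```

If the handler module is kept alive between invocations, the counter stays at 1 across requests; a fresh module execution per request bumps it every time.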
Environment
- serverless version: v1.67.0
- serverless-offline version: v5.12.1
- node.js version: v13.11.0
- OS: macOS 10.15.2

Possible Solution

I assumed there would be a way to share JS state across Lambda invocations. If not, it would make sense to exit (e.g. right after the call happens).