parse-community / parse-server

Parse Server for Node.js / Express
https://parseplatform.org
Apache License 2.0

Cloud Code on AWS lambda #483

Closed flovilmart closed 8 years ago

flovilmart commented 8 years ago

AWS Lambda seems like a good candidate for hosting Cloud Code functions. Explore possible integrations via HTTPS or a custom deployment package.

alexnaldo commented 8 years ago

This would be a great option! Integrating Parse Server with Lambda opens up many opportunities to integrate it with other AWS services.

flovilmart commented 8 years ago

I believe the effort is minimal, but I'll try to put together an example.

alexnaldo commented 8 years ago

Without Parse.com we will need to think about infrastructure again, which is a big problem for startups. Using Lambda, API Gateway, and other Amazon services could make this friendlier and safer.

gfosco commented 8 years ago

This would make a really neat example app... Hope to see someone put it together.

flovilmart commented 8 years ago

When I have time :)

rafapetter commented 8 years ago

I still don't know how to create it, but I'll be glad to test it.

flovilmart commented 8 years ago

Look at the AWS Lambda documentation and use the Parse hooks (which are experimental) to register your URLs.

rafapetter commented 8 years ago

Ok, thanks

messagenius-admin commented 8 years ago

Guys, I just read that Lambda now supports Node 4.3.2. Is it possible to host the Parse server on Lambda? (I mean the full server, not just the Cloud Code.)

alexnaldo commented 8 years ago

I don't think that's possible. Lambda is stateless and Parse Server is stateful.

rucas commented 7 years ago

@flovilmart I got Express, parse-server, and parse-dashboard running on AWS Lambda using aws-serverless-express and the Serverless Framework, following this tutorial and the Express-with-parse-server example.
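
The exact wiring isn't shown in the thread; below is a minimal sketch of what that setup might look like, assuming the aws-serverless-express package and a standard parse-server Express mount. The file name, environment variable names, and URLs are placeholders:

// lambda.js (hypothetical file name)
const express = require('express');
const awsServerlessExpress = require('aws-serverless-express');
const ParseServer = require('parse-server').ParseServer;

const app = express();

// Placeholder configuration; real deployments would read these from the environment.
const api = new ParseServer({
  databaseURI: process.env.DATABASE_URI,
  appId: process.env.APP_ID,
  masterKey: process.env.MASTER_KEY,
  serverURL: process.env.SERVER_URL, // e.g. the API Gateway URL + '/parse'
});
app.use('/parse', api);

// Wrap the Express app so API Gateway events can be proxied to it.
const server = awsServerlessExpress.createServer(app);
exports.handler = (event, context) => awsServerlessExpress.proxy(server, event, context);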

flovilmart commented 7 years ago

Ah nice!

@alexnaldo

 Parse Server is stateful.

What do you mean by stateful? Besides the caches that warm up over time, nothing is stateful.

abdulwahid24 commented 7 years ago

@rucas What about storage ?

rucas commented 7 years ago

@abdulwahid24 I didn't use the Parse storage features, but I'm sure there's a way to integrate with AWS S3 or some other cloud storage.

abdulwahid24 commented 7 years ago

@rucas But in order to use Parse we're required to connect to a database such as MongoDB. That's what I wanted to know: how did you manage that part? Thanks.

rucas commented 7 years ago

@abdulwahid24 Just set up a MongoLab database or a MongoDB instance on AWS and add the connection string to the Parse Server config.
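
For reference, that hookup is just the databaseURI option in the Parse Server config. A tiny sketch; the parseConfig name, connection string, and keys below are placeholders:

const parseConfig = {
  // Placeholder: a hosted MongoDB (e.g. mLab) or one you run on AWS yourself
  databaseURI: 'mongodb://user:password@example-host.mlab.com:27017/parse-db',
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  serverURL: 'https://example.com/parse',
};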

JacobJT commented 7 years ago

Awesome to know you can throw a full-blown parse-server onto Lambda!

For me, the powerful part is combining this with Step Functions to schedule tasks. I previously implemented a few features by running very frequent background jobs that searched a queue for tasks needing completion and completed them. A large downside was that our job logs filled up with runs that did nothing because there was nothing to do, and we ended up missing some failures of a couple of our daily background jobs due to corrupt data.

AWS Step Functions charge per state transition, and are relatively cheap, and can trigger lambda functions. I'm beginning to instantiate a Step Function instance from cloud code on the standard AWS EB / EC2 hosted parse-server to add a delay that will then trigger a Lambda function that calls my actual cloud code function.

I have a couple of features that require checking the status of data; they either terminate or, after taking some action, schedule another check for later. So I need to figure out whether it'd be better to have one big Step Function with Lambda steps that handles the whole flow (get the Cloud Code response and terminate or move to the next step appropriately), or individual Step Functions that each handle one step plus the delay, with the Cloud Code function scheduling the next one. Since Lambda charges for execution time, it would save a small amount not to wait on the Cloud Code response while performing more actions, but I suppose it depends on how Step Functions bills. It's per state transition, and the flow shows a start and an end transition, but I'm not 100% sure those count. If my flow is Start > Delay > Lambda function > End, is that two counts for the delay and Lambda steps, or three, one for each > transition?

mfkenson commented 6 years ago

@rucas I am very interested in what you've done (parse-server on Lambda!). Is there any chance you could share the integrated example? Many thanks!

danepowell commented 6 years ago

I'm really excited to try this as well. Could we reopen this issue until we have a working public example? And maybe rename it to reflect that we want to run the entire server on Lambda...

Also, can anyone comment on how the response time differs between standard hosting and Lambda? It seems like Lambda would take longer even on a "warm" start...

flovilmart commented 6 years ago

@danepowell The best thing is probably to try it out for yourself and understand the limitations and advantages of running the server in Lambdas.

danepowell commented 6 years ago

I just got a basic version of Parse running on Lambda following tips provided by @rucas . The only significant problem I've seen so far is that the filesystem is read-only, causing the Parse logger to throw a ton of errors:

Error: EROFS: read-only file system, open '/var/task/logs/parse-server.info.2018-07-23

Is there any way to disable the Parse logger, or switch it from a file logger to console output?

flovilmart commented 6 years ago

I’ll have a look at how to disable the logs

flovilmart commented 6 years ago

You can set logsFolder to null to disable file-based logging: https://github.com/parse-community/parse-server/blob/master/src/Options/Definitions.js#L66

rucas commented 6 years ago

You can also set the logs folder to /tmp; you get 512 MB of writable space there.

https://docs.aws.amazon.com/lambda/latest/dg/limits.html
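
A small sketch of how either suggestion lands in the Parse Server config; the parseConfig name is a placeholder, and you would pick one of the two logsFolder lines:

const parseConfig = {
  // ...other options...

  // Option 1: disable file-based logging entirely
  logsFolder: null,

  // Option 2: or write logs to Lambda's writable scratch space instead
  // logsFolder: '/tmp/logs',
};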

danepowell commented 6 years ago

Thanks, setting it to /tmp seemed to work! So far, everything seems to be working pretty well on Lambda. The only annoying thing I've found is that the logging in general is harder to use: it doesn't stream as quickly as Heroku's, it's harder to get request info from the API Gateway / router, etc. Maybe using some sort of Winston adapter to send Parse logs directly to CloudWatch would help, but that's a pretty low priority for me.

danepowell commented 6 years ago

@rucas were you ever able to get push notifications working reliably from a Parse Server running on Lambda? That's the last obstacle I haven't been able to overcome.

If I send pushes one at a time, usually they go through. But if I try to send them in batches, most don't get sent. My theory is that because pushes operate on a message queue / publisher / subscriber model, Lambda terminates the process before the queue finishes running.

rucas commented 6 years ago

@danepowell I have not tried out the push notifications 😿. I think it uses the DB as a message queue, so you're good there as long as you have a MongoDB set up somewhere and it is accessible.

However, this sounds like it might be an error where your job takes too long. Have you tried increasing the execution time? I would also check the CloudWatch logs to see if the logging for the push notifications clips out and ends too early.

Also make sure the client isn't using a WebSocket connection; I don't think WebSockets are supported with AWS API Gateway + Lambda.

danepowell commented 6 years ago

Thanks for the suggestions. I'm definitely not using websockets, and overall execution time isn't an issue.

It's odd: with verbose logging, if I send pushes individually I see the expected messages that pushes are being prepared and sent. But if I send them in batches, I don't get any log messages at all. I even tried the SQS message service; again, no logging at all there. That's what makes me suspect that Parse isn't blocking on push sending / message queuing, so Lambda must just be killing the function.

flovilmart commented 6 years ago

My theory is that because pushes operate on a message queue / publisher / subscriber model, Lambda terminates the process before the queue finishes running.

@danepowell You are totally right. However, you should be able to use a custom message queue instead of the default push queue, and you can even detach the PushWorker to a separate endpoint.

Using the push queueOptions it's possible to provide a custom message queue. At the other end of the queue you'll receive the push work items.

A quick way to test it would be to simply use a Redis pub/sub as the link between the push queue and the push worker. The push worker executes when a message is available on the queue and effectively forwards the push to the configured adapter.

In a nutshell:

  1. /push is called
  2. PushQueue publishes a message to a ParseMessageQueue (by default an EventEmitter)
  3. PushWorker listens for messages emitted by the ParseMessageQueue
  4. On message, the PushWorker sends it to the push adapter.

In your case, on Lambda, maintaining a long-lived push worker may be tricky, but I believe you can run it at the other end of SQS (Simple Queue Service).

There is an SQS adapter that you can use as a drop-in message queue replacement:

https://github.com/parse-community/parse-server-sqs-mq-adapter

with:

const ParseServer = require('parse-server').ParseServer;
const SQSEventEmitterMQ = require('@parse/sqs-mq-adapter').SQSEventEmitterMQ;

const config = {
  // ...
  push: {
    ios: {},
    queueOptions: {
      disablePushWorker: true, // disable the push worker here, it will be handled elsewhere
      messageQueueAdapter: SQSEventEmitterMQ,
      queueUrl: 'https://sqs.us-east-1.amazonaws.com/XXX/Parse-Queue', // required
      region: 'us-east-1',
    },
  },
};

At the other end, you'll need the PushWorker to run on messages in a worker lambda.

As you mentioned, Lambda kills the server, so you have to deploy Parse Server in a worker; here is a reference: https://cloudonaut.io/integrate-sqs-and-lambda-serverless-architecture-for-asynchronous-workloads/ (I didn't check it myself).

import ParsePushAdapter from '@parse/push-adapter';
import { PushWorker } from 'parse-server';
import { SQSEventEmitterMQ } from '@parse/sqs-mq-adapter';

const adapter = new ParsePushAdapter({
  ios: { /* ... */ },
  android: { /* ... */ },
});
const subscriberConfig = {
  messageQueueAdapter: SQSEventEmitterMQ,
  queueUrl: 'https://sqs.us-east-1.amazonaws.com/XXX/Parse-Queue', // required
  region: 'us-east-1',
};
const worker = new PushWorker(adapter, subscriberConfig);
// the worker should now wait for messages on the queue

Does it help?

Things you want to double-check: that the queue is not fanning out, and that it deletes the message once consumed (no double send).

rucas commented 6 years ago

@danepowell It looks like, from @flovilmart's comment, your best bet would be to use that AWS SQS parse-server plugin and then make a Lambda worker function triggered on an SQS message event. Let me know how this turns out! This sounds awesome!

flovilmart commented 6 years ago

This is originally how it was designed, to fan out all the adapter API calls. It still puts a toll on the server as it slices up the queries, and I believe it can be further optimized.

The last thing is that Lambda kills the execution when the response is flushed, and at that point you're out of luck again, as the slicing and enqueueing won't be finished.

This is an area that should still be improved; the slicing should probably happen on the worker side, not on the server.

flovilmart commented 6 years ago

@danepowell @rucas I have bad news though. The enqueuing is asynchronous and doesn't block the response from going out. Therefore, not all pushes may be enqueued before the response is emitted and the Lambda is killed.

However, there is good news: improving this behavior opens the door to even higher scalability for parse-server.

I can assist in architecting the solution if that's something one of you is interested in; however, I am a bit time-strapped at the moment. All the components are here; it's just a matter of re-organizing them.

danepowell commented 6 years ago

If there's one thing you might have noticed from my pattern of support requests, it's that Node.js sync/async architectures are not my forte 😄 If there was enough guidance I might be able to work on a fix, but I'm guessing it would be a pretty big challenge. I'd love to help if I can though, even if only to test a fix.

flovilmart commented 6 years ago

The fix is quite architectural after all. The goal is to decouple the HTTP endpoint from the expansion (querying all installations) and from sending the push to the adapters.

I believe we can move the expansion phase that is currently done in the push queue to the push worker; after all, the push worker already has a pub/sub configuration.

This way, the request will hit the queue right away, and we'll be able to guarantee that the push is enqueued (not expanded).

Can you look through the code and let me know if what I describe makes sense to you? Important files would be PushController, PushQueue, and PushWorker.
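
To make the proposed split concrete, here is a rough, hypothetical sketch of the reorganization described above. None of the names (handlePushRequest, messageQueue, queryInstallations, pushAdapter) are real parse-server APIs; the point is only to illustrate moving the expansion step behind the queue, using stand-ins for the queue and adapter:

const { EventEmitter } = require('events');

// Stand-in queue; in production this would be SQS, Redis pub/sub, etc.
const emitter = new EventEmitter();
const messageQueue = {
  publish: async (channel, message) => { emitter.emit(channel, message); },
  subscribe: (channel, handler) => { emitter.on(channel, handler); },
};

// Stand-ins for the installation query (expansion) and the push adapter.
const queryInstallations = async (where) => [{ deviceToken: 'placeholder-token', matched: where }];
const pushAdapter = {
  send: async (data, installations) =>
    console.log('sending', data, 'to', installations.length, 'installation(s)'),
};

// HTTP side: only enqueue the raw push request, then respond immediately.
// The expensive expansion no longer happens before the response is flushed.
async function handlePushRequest(body) {
  await messageQueue.publish('pushes', JSON.stringify({ where: body.where, data: body.data }));
  return { result: true }; // safe to flush the response now
}

// Worker side: expansion and delivery both happen behind the queue.
messageQueue.subscribe('pushes', async (message) => {
  const { where, data } = JSON.parse(message);
  const installations = await queryInstallations(where); // expansion
  await pushAdapter.send(data, installations);           // delivery
});

// Example: simulate a call to /push.
handlePushRequest({ where: { channels: 'news' }, data: { alert: 'Hello' } });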

huksley commented 5 years ago

Hi, are there any updates on this? Serverless is gaining traction, and it would be good to see a system as comprehensive as parse-server be available on AWS Lambda.

jcguarinpenaranda commented 5 years ago

Hello, I'm also very interested in this :) Serverless seems like a great way to get a Parse Server working.

jimcamut commented 5 years ago

@rucas I know this is a really old thread, but if you see this question: did you have any issues with require('parse-server') on Lambda? I can get everything to work on local serverless, but require('parse-server') on an actual Lambda doesn't work for me.

grosscorporation commented 5 years ago

It won't work if you don't push the parse-server module, but are you sure that's the only thing not working? Lambda and Parse make no sense to me, at least not yet. And remember that every time you call a Lambda you are actually starting a parse-server, meaning it's stateless even with aws-serverless-express.

jimcamut commented 5 years ago

@GoGross Something is definitely amiss because I'm getting other errors even when I push up the parse-server module. Regarding restarts, a warmup plugin in serverless keeps it quick (in theory), but this has been more of an experiment than a practical solution. I'm about to abandon this approach if I keep hitting dead ends.

grosscorporation commented 5 years ago

By using a warm-up plugin you are going against the main reason you probably chose Lambda: pay for what you use. Keeping the function warm will increase calls to the function, CloudWatch usage, and I am not sure what else.

grosscorporation commented 5 years ago

I really recommend back4app for now

jimcamut commented 5 years ago

@GoGross I've already got Parse Server running in production on AWS (Node.js, non-Lambda). I'd agree that using Parse Server in an environment it was intended for (on AWS or something like back4app) is the best way to go, especially for the built-in caching, efficient DB connections, etc. There is definitely still a case for Serverless/Lambdas in some specific scenarios, but it's proving to be hacky and a headache to configure properly. I'm putting my attempt on pause for now.

rucas commented 5 years ago

I know this is a really old thread, but if you see this question: did you have any issues with require('parse-server') on Lambda? I can get everything to work on local serverless, but require('parse-server') on an actual Lambda doesn't work for me.

@jimcamut Sorry for the late reply, I'm catching up now. I would export your Lambda function from AWS and make sure parse-server is in your node_modules directory.

kevflynn commented 5 years ago

You can set logsFolder to ‘null’ to disable file based logging: https://github.com/parse-community/parse-server/blob/master/src/Options/Definitions.js#L66

This doesn't seem to work. Running locally with it set to null, it looks like the default value takes over and still sets it to ./logs. I'm trying to get it up and running on the App Engine standard environment and getting this error.

dplewis commented 5 years ago

silent: true, verbose: false works for me.

kevflynn commented 5 years ago

Thanks for the reply @dplewis

This is my config:

{
  cloud: `${ROOT}/parse/cloud/index.js`,
  clientId: Config.parse.clientId,
  databaseURI: Config.parse.dbURI, // Connection string for your MongoDB database
  masterKey: Config.parse.masterKey, // Keep this key secret!
  serverURL: Config.parse.serverUrl, // Don't forget to change to https if needed
  javascriptKey: Config.parse.javascriptKey,
  revokeSessionOnPasswordReset: false,
  appId: Config.parse.appId,
  verbose: false,
  logsFolder: null,
  silent: true
}

These are my config settings, and every time I start the server locally, I see this get created:

[Screenshot: the ./logs folder created on server start]

And if i console log from the winston logger module, i see this when start the server, two separate outputs:

First output:

Logs Folder: ./logs/
JSON LOGS: false
LOG LEVEL: info
VERBOSE: false
SILENT: undefined

Then immediately a follow-up output:

Logs Folder: null
JSON LOGS: false
LOG LEVEL: info
VERBOSE: false
SILENT: true

I assume the problem is that the first output comes from _defaults and the second from my config.

kevflynn commented 5 years ago

The problem is that the first call crashes the server on Google App Engine because it doesn't have permission to make those file writes.

kevflynn commented 5 years ago

So @dplewis, it looks like just importing the parse-server module instantiates the configuration with default settings. Any way around this? I think it's instantiating twice: once on import, and once when you instantiate it with your own config settings.

kevflynn commented 5 years ago

I think I found the problem: This function:

function defaultLogger() {
  const options = {
    logsFolder: _defaults.default.logsFolder,
    jsonLogs: _defaults.default.jsonLogs,
    verbose: _defaults.default.verbose,
    silent: _defaults.default.silent,
  };
  console.log('RETURNINGING A DEFAULT LOGGER: ', defaultLogger);
  const adapter = new _WinstonLoggerAdapter.WinstonLoggerAdapter(options);
  return new _LoggerController.LoggerController(adapter, null, options);
}

gets called from within logger.js upon importing the parse-server library, which instantiates the logger before you have a chance to set your own config.

Update: OK, the next line down:

let logger = defaultLogger();

instantiates the logger globally, so as soon as this file is included, it gets instantiated. Is that necessary?

Taylorsuk commented 4 years ago

@kevflynn Is it possible to shed a little more light on your experience with running Parse Server on Lambda: recommended set-up, issues, performance, etc.?

I'm most concerned about cold boot times: what are you getting on average?