aws / aws-sam-cli

CLI tool to build, test, debug, and deploy Serverless applications using AWS SAM
https://aws.amazon.com/serverless/sam/
Apache License 2.0

Feature request: make it possible to keep docker container warm #239

Closed jandockx closed 3 years ago

jandockx commented 6 years ago

I understand from other issues that a new docker container is started for each request. This makes some experiments and automated tests impractical. SAM Local is much too slow in any context where more than one request has to be handled.

I suspect that hot reloading depends on this feature.

While this project evolves further, I think it would be a good idea to let users choose to forgo hot reloading and instead keep the docker container warm.

Something like

sam local start-api -p <PORT> --profile <AWS PROFILE> --keep-it-warm

This would broaden the applicability of sam local enormously.

Thank you for considering this suggestion. This looks like an awesome project.

mhart commented 4 years ago

Support for warm invokes has been pushed to all docker-lambda runtimes 🎉

The documentation above still stands, i.e. invoke with:

docker run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -p 9001:9001 \
  -v $PWD:/var/task \
  lambci/lambda:ruby2.5 \
  lambda_function.lambda_handler

All runtimes also have support for X-Amz-Log-Type: Tail (--log-type Tail if invoking from the aws CLI), as well as X-Amz-Invocation-Type: DryRun (--invocation-type DryRun) and X-Amz-Invocation-Type: Event (--invocation-type Event).
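Since the stay-open container speaks the standard Lambda invoke API, those headers map directly onto aws CLI flags pointed at the local endpoint. A minimal sketch (the function name is a placeholder, as docker-lambda doesn't check it; AWS CLI v2 may additionally need --cli-binary-format raw-in-base64-out for the JSON payload):

```shell
# Invoke the warm container through the regular Lambda API on port 9001.
aws lambda invoke \
  --endpoint-url http://localhost:9001 \
  --no-sign-request \
  --function-name myfunction \
  --log-type Tail \
  --payload '{}' \
  output.json
```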

All old images are available at lambci/lambda:20191117-<runtime>, e.g. lambci/lambda:20191117-dotnetcore2.1 – in case people are encountering issues with the new images. However, I tried hard to ensure that sam local invoke should still function largely as it does today.

mhart commented 4 years ago

So, all that's left now is support for these warm invokes in aws-sam-cli 😸

ranjan-purbey commented 4 years ago

Our team is just getting started with lambda but one of the initial roadblocks we encountered was the slow response from local API Gateway instance created using aws-sam-cli. On a system with 16GB memory, each invocation takes ~7 seconds. This makes development really painful. Any estimates on how long before the feature is integrated into SAM CLI?

ranjan-purbey commented 4 years ago

@mhart on running the docker container directly using the command you suggested above, the container needs to be restarted after every code change in order to reflect the changes. Any workarounds?

mhart commented 4 years ago

@ranjan-purbey use something like https://facebook.github.io/watchman/ – just restart the process whenever one of your files changes
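One way to wire that up, sketched with watchman-make (which ships with Watchman) and a named container so it can be restarted; the container name and file pattern here are placeholders:

```shell
# Start the warm container in the background under a known name.
docker run --rm --name lambda-dev \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -p 9001:9001 \
  -v "$PWD":/var/task \
  lambci/lambda:ruby2.5 \
  lambda_function.lambda_handler &

# Restart it whenever a source file changes.
watchman-make -p '**/*.rb' --run 'docker restart lambda-dev'
```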

mhart commented 4 years ago

Added more documentation here: https://github.com/lambci/docker-lambda#running-in-stay-open-api-mode

mhart commented 4 years ago

@ranjan-purbey I added some documentation for developing and restarting whenever there are changes to your code: https://github.com/lambci/docker-lambda/#developing-in-stay-open-mode

mhart commented 4 years ago

I've actually added a watch mode to docker-lambda itself, so you don't need to rely on external file watchers to do the job for you. Just pass in DOCKER_LAMBDA_WATCH=1 to activate it.

Updated documentation here: https://github.com/lambci/docker-lambda#developing-in-stay-open-mode

You can also manually reload the handler by sending SIGHUP to the container.
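Putting those two options together, a sketch of a watch-mode session (the container name is a placeholder):

```shell
# Stay-open + watch mode: the handler reloads automatically on file changes.
docker run --rm --name lambda-dev \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -e DOCKER_LAMBDA_WATCH=1 \
  -p 9001:9001 \
  -v "$PWD":/var/task \
  lambci/lambda:nodejs12.x \
  lambda_function.lambda_handler

# In another terminal: force a manual reload by sending SIGHUP.
docker kill --signal=SIGHUP lambda-dev
```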

jeff-r-koyaltech commented 4 years ago

While waiting on the fix, it's helpful to know that the docker pull step (as @sanathkr mentioned above) can be skipped, and it accounts for about half of the waiting time.

My personal experience with local api gateway testing is that I can shave about 5 seconds off per request by simply passing --skip-pull-image on launch.

sam local start-api --skip-pull-image

That's a very simple fix you can make to your dev workflow to save some pain for now. Thanks @sanathkr !

literakl commented 4 years ago

@ranjan-purbey I added some documentation for developing and restarting whenever there are changes to your code: https://github.com/lambci/docker-lambda/#developing-in-stay-open-mode

Great work. I don't understand whether this is intended for a single API method or for the complete API. Do I have to start Docker for every API method?

OFranke commented 4 years ago

If sam is using the same docker image under the hood, would it theoretically be possible to just pass the DOCKER_LAMBDA_STAY_OPEN=1 variable via sam's environments.json? Right now I've observed that for some reason I cannot add arbitrary variables via environments.json, only ones I previously defined in template.yaml.
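That observation matches documented SAM CLI behavior: the file passed via --env-vars can only override variables already declared under Environment.Variables in the template, keyed by the function's logical ID. A minimal sketch of such a file (the values are placeholders):

```json
{
  "SrvApigraphqlapi8D508D37": {
    "DB_HOST": "localhost",
    "DB_PORT": "5432"
  }
}
```

Variables that are not declared in template.yaml are ignored, so DOCKER_LAMBDA_STAY_OPEN cannot be injected this way unless it is first added to the template.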

When I hardcode the environment variable in my template.yaml like that:

SrvApigraphqlapi8D508D37:
    Type: AWS::Lambda::Function
    Properties:
      Code: SrvApigraphqlapi8D508D37
      Handler: base.handler
      Role:
        Fn::GetAtt:
        - SrvApigraphqlapiServiceRoleFD44AE9E
        - Arn
      Runtime: nodejs12.x
      Environment:
        Variables:
          DB_HOST:
            Fn::GetAtt:
            - SrvDatabasecdkgraphilelambdaexampledbD17C7F0B
            - Endpoint.Address
          DB_PORT:
            Fn::GetAtt:
            - SrvDatabasecdkgraphilelambdaexampledbD17C7F0B
            - Endpoint.Port
          DB_NAME: postgres
          DB_USERNAME: postgres
          DB_PASSWORD: postgres
          AWS_STAGE: prod
          DOCKER_LAMBDA_STAY_OPEN: 1

The whole thing crashes giving me that error message:

Lambda API listening on port 9001...
Function 'SrvApigraphqlapi8D508D37' timed out after 20 seconds
<class 'samcli.local.apigw.local_apigw_service.LambdaResponseParseException'>

flache commented 4 years ago

Are there any updates, or is there a timeline on this? This is the single biggest blocker for us (and, I imagine, for many others) to doing more with AWS Lambda, because it makes it almost impossible to develop and test things locally. Even with --skip-pull-image, a delay of ~5 seconds for each request makes it unusable. There is also the problem of global context not being preserved.

I understand that features must be prioritized, but I am having a hard time understanding why the fact that nothing running on Lambda can be tested locally is not a high-priority issue. Or am I missing something?

literakl commented 4 years ago

I have solved this trouble by moving away from Lambda to Node Express

jfuss commented 4 years ago

Update: The team is working on other priorities at the moment. We know the time it takes to invoke locally is a pain point for many, and we have plans to address it in the future. We do not have an ETA as of now.

OFranke commented 4 years ago

@flache I've moved away from sam as it seems not to play well with cdk at the moment, see https://github.com/awslabs/aws-sam-cli/issues/1911. I worked around it by having an app that I run on docker locally but let cdk deploy. I just use different application entry points, which are not so different at all.

// lambda entry
import { Response, Request } from 'express';

const awsServerlessExpress = require('aws-serverless-express');
const express = require('express');

const app = express();
const handler = (req: Request, res: Response): void => {
  try {
    app(
      req,
      res,
      (err: { status: number; statusCode: number; message: string }) => {
        if (err) {
          if (!res.headersSent) {
            res.statusCode = err.status || err.statusCode || 500;
            res.setHeader('Content-Type', 'application/json');
          }
          res.end(JSON.stringify({ errors: [{ message: `${err.message}` }] }));
          return;
        }
        if (!res.finished) {
          if (!res.headersSent) {
            res.statusCode = 404;
          }
          res.end(`'${req.url}' not found`);
        }
      },
    );
  } catch (err) {
    res.end(JSON.stringify({ errors: [{ message: `${err.message}` }] }));
  }
};

const server = awsServerlessExpress.createServer(handler, undefined);
exports.handler = (event: unknown, context: unknown): unknown =>
  awsServerlessExpress.proxy(server, event, context);

// docker entry
import express from 'express';

const main = async () => {
  const app = express();

  app.listen(5000, '0.0.0.0');
};

try {
  void main();
} catch (e) {
  console.error('Fatal error occurred starting server!');
  console.error(e);
  process.exit(101);
}

I have built a whole graphql service like that, and have been running it on AWS for a few weeks now. Seems to be fine.

elthrasher commented 4 years ago

For those who are very comfortable with Docker and docker-compose, I created a proxy image that works with the underlying SAM (lambci) images and can bring your lambda function into existing docker-compose workflows as a long-lived function. https://github.com/elthrasher/http-lambda-invoker
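As a rough illustration of that pattern (this is a hypothetical compose file, not taken from the linked project), a long-lived warm Lambda container might be declared as a compose service like:

```yaml
# Hypothetical docker-compose.yml: a warm Lambda container as a service.
version: "3.8"
services:
  lambda:
    image: lambci/lambda:nodejs12.x
    command: index.handler
    environment:
      - DOCKER_LAMBDA_STAY_OPEN=1
      - DOCKER_LAMBDA_WATCH=1
    ports:
      - "9001:9001"
    volumes:
      - ./src:/var/task
```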

literakl commented 4 years ago

I have personally switched from AWS Lambda to NodeJS+Express+nodemon, and my productivity and happiness have soared.

duartemendes commented 4 years ago

Spent the last week writing a CLI tool to help with this issue; I published the first version just 2 days ago.

It's available on npm for download and installation. It passes both the DOCKER_LAMBDA_STAY_OPEN and DOCKER_LAMBDA_WATCH environment variables to the underlying containers, mitigating cold starts after the first invocation and watching for code changes.

I think the tool is easy to use (it takes one command to run your api locally), but it's at a very early stage. It works very well for my APIs, but I'm pretty sure I didn't take all use cases into consideration. So give it a go, report any issues you find, and please leave some feedback.

S-Cardenas commented 4 years ago

@duartemendes that tool is amazing! Congratulations and let me know if you need any help.

Does your tool currently support layers?

duartemendes commented 4 years ago

Thanks @S-Cardenas. It doesn't, but it's something I'm happy to take a look at 👍

kingferiol commented 4 years ago

This is really a roadblock in adopting this technology for us. Too painful.

It is not sustainable to wait 10 seconds for each request during development. Without any action on this, I think we will have to reconsider our approach to this technology.

jfuss commented 4 years ago

Update: We have prioritized some work that will help with the slow request time and provide a better warm invoke experience. I do not have timelines or ETAs to share at this point, but I wanted to communicate that we are starting to look at what we can do in this space.

ianballard commented 4 years ago

@jfuss any updates?

guichafy commented 4 years ago

I'm very excited to see this feature.

leonardobork commented 3 years ago

@jfuss any news?

S-Cardenas commented 3 years ago

Ditto. Would be great if this was officially released. Currently using https://github.com/elthrasher/http-lambda-invoker as a substitute.

OGoodness commented 3 years ago

🤞 Let's hope we can see this soon

S-Cardenas commented 3 years ago

Seems like it's getting very close to being approved and merged. Would love to get a notification when/if it does.

millsy commented 3 years ago

Fingers crossed this is soon added

kaarejoergensen commented 3 years ago

This feature has been added to the newest release (https://github.com/aws/aws-sam-cli/releases/tag/v1.14.0) 🎉

mndeveci commented 3 years ago

(As @kaarejoergensen mentioned 😄) Happy to report that this has been released with v1.14, resolving the issue.
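For anyone finding this thread later: the v1.14.0 release added a --warm-containers option to sam local. EAGER starts a container per function at startup and reuses it across invocations; LAZY creates each function's container on first invoke and keeps it warm afterwards:

```shell
# Keep Lambda containers warm between requests (SAM CLI >= 1.14.0).
sam local start-api --warm-containers EAGER

# Or create containers on first use and reuse them afterwards.
sam local start-api --warm-containers LAZY
```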