Support for warm invokes has been pushed to all docker-lambda runtimes!
The documentation above still stands, i.e. invoke with:

```sh
docker run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -p 9001:9001 \
  -v "$PWD":/var/task \
  lambci/lambda:ruby2.5 \
  lambda_function.lambda_handler
```
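Once the container is up, you can invoke it over plain HTTP against the Lambda invoke API it exposes, something like this (the function name segment is just a placeholder):

```sh
# invoke the warm container via the Lambda API listening on port 9001
curl -s -d '{}' \
  http://localhost:9001/2015-03-31/functions/myfunction/invocations
```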
All runtimes also have support for `X-Amz-Log-Type: Tail` (`--log-type Tail` if invoking from the aws CLI), as well as `X-Amz-Invocation-Type: DryRun` (`--invocation-type DryRun`) and `X-Amz-Invocation-Type: Event` (`--invocation-type Event`).
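For example, exercising the tail logs against the local endpoint could look like this with the v1 CLI (a sketch only: `myfunction` is a placeholder, and `--no-sign-request` just avoids needing real credentials):

```sh
aws lambda invoke \
  --endpoint-url http://localhost:9001 \
  --no-sign-request \
  --function-name myfunction \
  --log-type Tail \
  --payload '{}' \
  output.json
```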
All old images are available at `lambci/lambda:20191117-<runtime>`, e.g. `lambci/lambda:20191117-dotnetcore2.1`, in case people are encountering issues with the new images. However, I tried hard to ensure that `sam local invoke` should still function largely as it does today.
So, all that's left now is support for these warm invokes in aws-sam-cli.
Our team is just getting started with Lambda, but one of the initial roadblocks we encountered was the slow response from the local API Gateway instance created using aws-sam-cli. On a system with 16GB of memory, each invocation takes ~7 seconds. This makes development really painful. Any estimates on how long before the feature is integrated into SAM CLI?
@mhart when running the docker container directly using the command you suggested above, the container needs to be restarted after every code change in order to reflect the changes. Any workarounds?
@ranjan-purbey use something like https://facebook.github.io/watchman/ — just restart the process whenever one of your files changes.
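One possible incantation, assuming watchman-make's `--run` option and a container started under a fixed name (both the pattern and the name are illustrative):

```sh
# restart the container whenever a Ruby source file changes
watchman-make -p '**/*.rb' --run 'docker restart lambda'
```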
Added more documentation here: https://github.com/lambci/docker-lambda#running-in-stay-open-api-mode
@ranjan-purbey I added some documentation for developing and restarting whenever there are changes to your code: https://github.com/lambci/docker-lambda/#developing-in-stay-open-mode
I've actually added a watch mode to docker-lambda itself, instead of needing to rely on external file watchers to do the job for you. Just pass in `DOCKER_LAMBDA_WATCH=1` to activate it.
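For example, combining it with the stay-open invocation from earlier in the thread:

```sh
docker run --rm \
  -e DOCKER_LAMBDA_STAY_OPEN=1 \
  -e DOCKER_LAMBDA_WATCH=1 \
  -p 9001:9001 \
  -v "$PWD":/var/task \
  lambci/lambda:ruby2.5 \
  lambda_function.lambda_handler
```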
Updated documentation here: https://github.com/lambci/docker-lambda#developing-in-stay-open-mode
You can also manually reload the handler by sending `SIGHUP` to the container.
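For example (assuming the container was started with `--name lambda`):

```sh
docker kill --signal=SIGHUP lambda
```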
While waiting on the fix, it's helpful to know that the docker pull step (which @sanathkr mentioned above) can be skipped, and it accounts for about half of the waiting.
My personal experience with local API Gateway testing is that I can shave about 5 seconds off per request by simply passing `--skip-pull-image` on launch:

```sh
sam local start-api --skip-pull-image
```
That's a very simple fix you can make to your dev workflow to save some pain for now. Thanks @sanathkr!
Great work. I do not understand whether this is intended for a single API method or the complete API. Do I have to start Docker for every API method?
If `sam` is using the same docker image under the hood, would it theoretically be possible to just pass the `DOCKER_LAMBDA_STAY_OPEN=1` variable via sam's `environments.json`? Right now I have observed that for some reason I cannot add arbitrary variables to `environments.json`, just ones that I have defined before in the template.yaml.
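For reference, this is roughly the shape I would expect to work with `sam local`'s `--env-vars` flag, if I understand the format correctly (the top-level key has to match the function's logical ID from the template):

```sh
# hypothetical env file; SAM only seems to pass through variables
# that are already declared in template.yaml
cat > environments.json <<'EOF'
{
  "SrvApigraphqlapi8D508D37": {
    "DOCKER_LAMBDA_STAY_OPEN": "1"
  }
}
EOF
sam local start-api --env-vars environments.json
```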
When I hardcode the environment variable in my template.yaml like this:

```yaml
SrvApigraphqlapi8D508D37:
  Type: AWS::Lambda::Function
  Properties:
    Code: SrvApigraphqlapi8D508D37
    Handler: base.handler
    Role:
      Fn::GetAtt:
        - SrvApigraphqlapiServiceRoleFD44AE9E
        - Arn
    Runtime: nodejs12.x
    Environment:
      Variables:
        DB_HOST:
          Fn::GetAtt:
            - SrvDatabasecdkgraphilelambdaexampledbD17C7F0B
            - Endpoint.Address
        DB_PORT:
          Fn::GetAtt:
            - SrvDatabasecdkgraphilelambdaexampledbD17C7F0B
            - Endpoint.Port
        DB_NAME: postgres
        DB_USERNAME: postgres
        DB_PASSWORD: postgres
        AWS_STAGE: prod
        DOCKER_LAMBDA_STAY_OPEN: 1
```
the whole thing crashes, giving me this error message:

```
Lambda API listening on port 9001...
Function 'SrvApigraphqlapi8D508D37' timed out after 20 seconds
<class 'samcli.local.apigw.local_apigw_service.LambdaResponseParseException'>
```
Are there any updates, or is there a timeline on this? This is the single biggest blocker for us (and I can imagine for many others) to do more with AWS Lambda, because it makes it almost impossible to develop and test things locally. Even with `--skip-pull-image`, a delay of ~5 seconds for each request makes it just unusable. Also, there is the problem of the global context not being preserved.
I understand that features must be prioritized, but I am having a hard time understanding why the fact that anything running on Lambda cannot be tested locally is not a high-priority issue. Or am I missing something?
I have solved this trouble by moving away from Lambda to Node Express.
Update: The team is working on other priorities at the moment. We know the time it takes to invoke locally is a pain point for many, and we have plans to address it in the future. We do not have an ETA as of now.
@flache I've moved away from sam as it seems to not play so well with cdk at the moment, see https://github.com/awslabs/aws-sam-cli/issues/1911. I worked around it by having an app that I run in docker locally but let cdk deploy. Therefore I just use different application entry points, which are not so different at all.
```ts
// lambda entry
import awsServerlessExpress from 'aws-serverless-express';
import express, { Request, Response } from 'express';

const app = express();

// Wrap the express app so that errors and unmatched routes still
// produce a proper response instead of hanging the invocation.
const handler = (req: Request, res: Response): void => {
  try {
    app(
      req,
      res,
      (err?: { status?: number; statusCode?: number; message: string }) => {
        if (err) {
          if (!res.headersSent) {
            res.statusCode = err.status || err.statusCode || 500;
            res.setHeader('Content-Type', 'application/json');
          }
          res.end(JSON.stringify({ errors: [{ message: `${err.message}` }] }));
          return;
        }
        if (!res.finished) {
          if (!res.headersSent) {
            res.statusCode = 404;
          }
          res.end(`'${req.url}' not found`);
        }
      },
    );
  } catch (err) {
    res.end(JSON.stringify({ errors: [{ message: `${err.message}` }] }));
  }
};

const server = awsServerlessExpress.createServer(handler, undefined);

exports.handler = (event: unknown, context: unknown): unknown =>
  awsServerlessExpress.proxy(server, event, context);
```
```ts
// docker entry
import express from 'express';

const main = async () => {
  const app = express();
  // the same routes/middleware as the lambda entry would go here
  app.listen(5000, '0.0.0.0');
};

main().catch((e) => {
  console.error('Fatal error occurred starting server!');
  console.error(e);
  process.exit(101);
});
```
I have built a whole graphql service like that and have been running it on AWS for a few weeks now. Seems to be fine.
For those who are very comfortable with Docker and docker-compose, I created a proxy image that works with the underlying SAM (lambci) images and can bring your lambda function into existing docker-compose workflows as a long-lived function. https://github.com/elthrasher/http-lambda-invoker
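A minimal sketch of the idea (service names and paths here are illustrative, not taken from the linked project):

```yaml
version: '3'
services:
  lambda:
    # lambci runtime kept warm inside docker-compose
    image: lambci/lambda:nodejs12.x
    environment:
      - DOCKER_LAMBDA_STAY_OPEN=1
      - DOCKER_LAMBDA_WATCH=1
    ports:
      - '9001:9001'
    volumes:
      - ./dist:/var/task:ro
    command: base.handler
```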
I have personally switched from AWS Lambda to NodeJS+Express+nodemon, and my productivity and happiness got a boost.
Spent the last week writing a CLI tool to help with this issue; just 2 days ago I published the first version.
It's available on npm for download and installation. It provides both the `DOCKER_LAMBDA_STAY_OPEN` and `DOCKER_LAMBDA_WATCH` environment variables to the underlying containers, mitigating cold starts after the first invocation and watching for code changes.
I think the tool is easy to use (it takes one command to run your API locally), but it's at a very early stage. It works very well for my APIs, but I'm pretty sure I didn't take all use cases into consideration. So, give it a go, report any issues you find, and please leave some feedback.
@duartemendes that tool is amazing! Congratulations and let me know if you need any help.
Does your tool currently support layers?
Thanks @S-Cardenas. It doesn't, but it's something I'm happy to take a look at.
This is really a roadblock for this technology for us. Too painful.
It is not sustainable to wait 10 seconds for each request during development. Without any action on this, I think we will have to reconsider our approach to this technology.
Update: We have prioritized some work that will help with the slow request time and provide a better warm invoke experience. I do not have timelines or ETAs to share at this point, but I wanted to communicate that we are starting to look at what we can do in this space.
@jfuss any updates?
I'm very excited to see this feature.
@jfuss any news?
Ditto. Would be great if this was officially released. Currently using https://github.com/elthrasher/http-lambda-invoker as a substitute.
Let's hope we can see this soon
Seems like it's getting very close to being approved and merged. Would love to get a notification when/if it does.
Fingers crossed this is soon added
This feature has been added to the newest release (https://github.com/aws/aws-sam-cli/releases/tag/v1.14.0)
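If I'm reading the release notes right, it shipped as a `--warm-containers` option, e.g.:

```sh
# EAGER starts all containers at startup; LAZY starts each on first invoke
sam local start-api --warm-containers EAGER
```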
I understand from other issues that a new docker container is started for each request. This makes some experiments and automated tests infeasible in practice. SAM Local is much too slow in any context where more than one request is to be handled.
I suspect that hot reloading depends on this behavior.
I think it would be a good idea to make it possible to choose, while this project evolves further, to forego hot reloading but keep the docker container warm.
Something like (a purely hypothetical flag):
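```sh
# hypothetical flag name, just to illustrate the idea
sam local start-api --keep-warm
```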
This would broaden the applicability of sam local enormously.
Thank you for considering this suggestion. This looks like an awesome project.