joeduffy opened this issue 6 years ago
/cc @lukehoban I would assume we actually want to make this work somehow, perhaps by using S3 for the upload rather than streaming the zipfile directly. I've changed the title. If I'm mistaken, please let me know. I'm also moving to the AWS repo.
This has not come up again in nearly 2 years - and users can of course move to using S3 directly if they want - closing for now.
Hello. I'm currently getting a similar error when trying to deploy my stack:
error: 1 error occurred:
* updating urn:pulumi:dev::aok-multi-pulumi::aws:lambda/function:Function::dev-v1-internal-reportProblem: 1 error occurred:
* error modifying Lambda Function (dev-v1-internal-reportProblem-98cdf56) Code: RequestEntityTooLargeException:
status code: 413, request id: c5d75820-7a09-4859-a18a-991ac87c88b5
The weird thing about this error is that it goes away if I run pulumi up again in a few minutes. But then it appears again when we try to deploy some other changes to that stack.
Lambda package size (after the latest deployment) is 33.0 MB.
Also, @lukehoban I have a question regarding
> and users can of course move to using S3 directly if they want
Do you mean there is a setting in Pulumi that enables uploading lambdas to S3? Is that a documented feature?
Thanks in advance
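For what it's worth, a plain aws.lambda.Function (though not aws.lambda.CallbackFunction, per the rest of this thread) can point at a zip already uploaded to S3 via its s3Bucket/s3Key arguments. A minimal sketch; the bucket name, key, and resource names here are hypothetical, and the zip must already exist at that location:

```typescript
import * as aws from "@pulumi/aws";

// Hypothetical execution role so the example is self-contained.
const role = new aws.iam.Role("lambda-role", {
    assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Principal: { Service: "lambda.amazonaws.com" },
        }],
    }),
});

// Reference an existing zip in S3 instead of uploading the code inline.
const fn = new aws.lambda.Function("my-fn", {
    s3Bucket: "my-deploy-bucket",  // hypothetical bucket
    s3Key: "lambda/package.zip",   // hypothetical key
    handler: "index.handler",
    runtime: "nodejs16.x",
    role: role.arn,
});
```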
UPDATE:
> The weird thing about this error is that it goes away if I run pulumi up again in a few minutes. But then it appears again when we try to deploy some other changes to that stack.
Actually, the Lambda is not updated on AWS, despite pulumi up not returning any errors.
@lukehoban I ran into this issue using API Gateway, which uses the Lambda function serialization aws.lambda.CallbackFunction. Can I use S3 in this instance?
> I ran into this issue using API Gateway, which uses the Lambda function serialization aws.lambda.CallbackFunction. Can I use S3 in this instance?
Good point - it is not currently possible to use an S3 bucket with aws.lambda.CallbackFunction. I'll reopen this issue to track adding that support.
When trying to use an s3Bucket parameter with aws.lambda.CallbackFunction, I'm met with this error:
error: aws:lambda/function:Function resource 'callback-function-lambda' has a problem: Conflicting configuration arguments: "filename": conflicts with s3_bucket. Examine values at 'Function.Filename'.
If anyone runs across this issue and is trying to find a solution: a current workaround is to invoke an intermediate aws.lambda.Function (in this case a container Lambda function) inside your aws.lambda.CallbackFunction:
```typescript
import * as AWS from "aws-sdk";

// Inside the CallbackFunction body: forward the heavy work to the
// intermediate (container) Lambda via the AWS SDK. `bucket` and
// `containerLambdaFunction` are outputs from the surrounding Pulumi program.
const lambda = new AWS.Lambda();
const json = JSON.stringify({
    bucketId: bucket.get(),
});
const lambdaResponse = await lambda.invoke({
    FunctionName: containerLambdaFunction.get(),
    Payload: json,
}).promise();
```
I pass in the IDs of resources that I want to use in the container so that I have access to them through the aws-sdk. You can create a container image the same way shown here. This way you can move all the heavy dependencies or large amounts of code to the container while still keeping the functionality of the aws.lambda.CallbackFunction.
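The container side of this workaround can then be an ordinary handler that receives those IDs. A minimal sketch; the event shape and names are just illustrative, matching the payload serialized above:

```typescript
// Hypothetical handler for the intermediate container Lambda. It receives
// the resource IDs serialized by the CallbackFunction and does the heavy
// lifting, using whatever large dependencies are baked into the image.
export const handler = async (event: { bucketId: string }) => {
    // ...use the aws-sdk with event.bucketId here...
    return {
        statusCode: 200,
        body: JSON.stringify({ bucketId: event.bucketId }),
    };
};
```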
I took a look at this, and the following patch provides an almost-solution - but it is blocked by the inability to compute a SHA256 hash of the archive to pass as sourceCodeHash to aws.lambda.Function. Without this, the function cannot be updated: changing the source code will replace the contents of the S3 Bucket Object, but will not cause the Lambda to be redeployed from the new contents. I've opened https://github.com/pulumi/pulumi/issues/11738 to provide a way in the Pulumi SDK to enable this.
diff --git a/sdk/nodejs/lambda/lambdaMixins.ts b/sdk/nodejs/lambda/lambdaMixins.ts
index 7253e8817b..250a7f4fb1 100644
--- a/sdk/nodejs/lambda/lambdaMixins.ts
+++ b/sdk/nodejs/lambda/lambdaMixins.ts
@@ -16,6 +16,7 @@ import * as pulumi from "@pulumi/pulumi";
import * as arn from "../arn";
import * as iam from "../iam";
+import * as s3 from "../s3";
import * as utils from "../utils";
import { Function as LambdaFunction, FunctionArgs } from "./function";
@@ -167,6 +168,13 @@ export type BaseCallbackFunctionArgs = utils.Overwrite<FunctionArgs, {
*/
handler?: never;
+ /**
+ * Not allowed when creating an aws.serverless.Function. The [code] will be generated from the
+ * passed in JavaScript callback, and can be placed into a Bucket Object, but not a specific object
+ * version.
+ */
+ s3ObjectVersion?: never;
+
/**
* A pre-created role to use for the Function. If not provided, [policies] will be used.
*/
@@ -344,7 +352,7 @@ export class CallbackFunction<E, R> extends LambdaFunction {
// lambda options without us having to know about it.
// The default version for Lambda functions now is NodeJS16. As of April 30 2022, Node12
// is EOL/ nodeJS 14 is only in maintenance mode so it's best to upgrade to Node16
- const functionArgs = {
+ const functionArgs: FunctionArgs = {
...args,
code: code,
handler: serializedFileNameNoExtension + "." + handlerName,
@@ -353,6 +361,25 @@ export class CallbackFunction<E, R> extends LambdaFunction {
timeout: args.timeout === undefined ? 180 : args.timeout,
};
+ if (args.s3Bucket || args.s3Key) {
+ if (args.s3Bucket === undefined || args.s3Key === undefined) {
+ throw new Error("Both `s3Bucket` and `s3Key` must be provided if either is provided.");
+ }
+
+ const obj = new s3.BucketObject(name, {
+ bucket: args.s3Bucket,
+ key: args.s3Key,
+ source: code,
+ }, opts);
+
+ // Remove code from the functionArgs so we use the S3 bucket args instead of the code.
+ functionArgs.code = undefined;
+
+ // TODO: Set sourceCodeHash to ensure that the function is recreated when the
+ // archive changes.
+ // functionArgs.sourceCodeHash = ...
+ }
+
// If there is no custom Runtime argument being passed to the user
// then we should add "runtime" to the ignoreChanges of the CustomResourceOptions
// This is because as of 12/16/19, we upgraded the default NodeJS version from 8.x to 12.x as 12.x is latest LTS
@flostadler FYI
From @chrsmith on November 15, 2017 19:29
Running pulumi preview doesn't detect if a lambda to be deployed is too large for AWS, which leads to a catastrophic failure when you try to deploy your program. Perhaps we could issue a warning/error if we detect this during preview?

Copied from original issue: pulumi/pulumi#573
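As a sketch of what such a preview-time check could look like: AWS Lambda rejects direct zip uploads larger than 50 MB (larger packages must go through S3), so a guard could compare the archive size against that limit before attempting the update. The function name and message wording here are illustrative, not part of any existing API:

```typescript
// AWS Lambda rejects direct (inline) zip uploads larger than 50 MB;
// bigger packages must be uploaded via S3.
const DIRECT_UPLOAD_LIMIT_BYTES = 50 * 1024 * 1024;

// Returns a warning string if the archive is too large to upload
// directly, or undefined if it is within the limit.
export function checkPackageSize(zipSizeBytes: number): string | undefined {
    if (zipSizeBytes > DIRECT_UPLOAD_LIMIT_BYTES) {
        const mb = (zipSizeBytes / (1024 * 1024)).toFixed(1);
        return `Lambda package is ${mb} MB, which exceeds the 50 MB direct-upload limit; upload via S3 instead.`;
    }
    return undefined;
}
```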