aws / aws-cdk

The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
https://aws.amazon.com/cdk
Apache License 2.0

Bucket deployment failing when adding cloudfront function #16267

Closed zubairzahoor closed 3 years ago

zubairzahoor commented 3 years ago

Bucket deployment is failing when trying to add cloudfront function that adds security headers to the response with this error: Received response status [FAILED] from custom resource. Message returned: Command '['/opt/awscli/aws', 's3', 'sync', '--delete', '/tmp/tmp942wfkpo/contents', 's3://dev.safeme.io/']' died with <Signals.SIGKILL: 9>. (RequestId: 11e553f6-f698-460b-91d7-8c89fa66f544)

When I tried to add a simpler version of the function that just returns the response, like this:

// assumes the CloudFront module import, e.g. import { Function, FunctionCode } from '@aws-cdk/aws-cloudfront';
const cfFunction = new Function(this, "Function", {
    code: FunctionCode.fromInline("function handler(event) { return event.response }"),
});

The deployment passes, and sometimes, after adding the headers back in the next deployment, it even deploys the function code that I want. But deploying the full function directly doesn't work.

Moreover, FunctionCode.fromFile does not work for me at all. Sometimes, when I check the CloudFront function in the console during deployment, it has even been deployed, but eventually the deployment fails and the update is rolled back.
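
For context, the function I'm trying to deploy looks roughly like this (a simplified sketch; the construct id and header values are illustrative, not my exact code):

const securityHeadersFn = new Function(this, "SecurityHeadersFunction", {
    code: FunctionCode.fromInline(`
        function handler(event) {
            var response = event.response;
            var headers = response.headers;
            // Illustrative header values; the exact set doesn't matter for the bug
            headers['strict-transport-security'] = { value: 'max-age=63072000; includeSubDomains; preload' };
            headers['x-content-type-options'] = { value: 'nosniff' };
            headers['x-frame-options'] = { value: 'DENY' };
            return response;
        }
    `),
});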

Environment

Apparently, the BucketDeployment Lambda times out (I checked the logs), and there are no new files being added; the change is just a few lines to deploy and integrate the CloudFront function.


This is :bug: Bug Report

zubairzahoor commented 3 years ago

Increasing the memoryLimit of the BucketDeployment Lambda solved this. The error should be actionable in some way; it is hard to establish a relation between the bucket deployment Lambda's memory limit and adding a CloudFront function.

// Deploy site contents to S3 bucket
new BucketDeployment(this, "S3SiteContent", {
    sources: [Source.asset("./out/xyz")],
    destinationBucket: siteBucket,
    distribution,
    distributionPaths: ["/*"],
    memoryLimit: 512, // the fix: raised from the 128 MiB default
});
njlynch commented 3 years ago

Thanks for the bug report.

I'm a bit perplexed as to the stated cause. Per the documentation for memoryLimit, "If you are deploying large files, you will need to increase this number accordingly." This refers to the size of the sources, however, not the distribution. The only interaction the BucketDeployment has with the distribution (as far as I can tell) is to create an invalidation request after deployment. The configuration of the Distribution itself should not impact the memory requirements of the BucketDeployment.
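
For reference, that post-deployment step amounts to roughly the following (a simplified TypeScript sketch using the AWS SDK v3; the actual handler is a Python Lambda, and distributionId / paths stand in for the values passed to the custom resource):

import { CloudFrontClient, CreateInvalidationCommand } from "@aws-sdk/client-cloudfront";

// Sketch of the invalidation the custom resource requests after the s3 sync
async function invalidate(distributionId: string, paths: string[]): Promise<void> {
    const cloudfront = new CloudFrontClient({});
    await cloudfront.send(new CreateInvalidationCommand({
        DistributionId: distributionId,
        InvalidationBatch: {
            CallerReference: Date.now().toString(), // must be unique per request
            Paths: { Quantity: paths.length, Items: paths }, // e.g. ["/*"] from distributionPaths
        },
    }));
}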

Was the source of the Function part of the source assets, by any chance (or did it require changes to files in that directory)? If so, I can see how adding more sources caused the memory limit to be exceeded.

By the time you receive the error message mentioned (Received response status [FAILED] from custom resource. Message returned...), the CDK has very little control over the error message format and style; this is all based on internal CloudFormation logic for custom resources. Any suggestions for other ways we can highlight the reason, or make the pointer toward memoryLimit more obvious, are certainly welcome.

github-actions[bot] commented 3 years ago

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

kareldonk commented 9 months ago

I also had this same issue, which caused deployment of a stack to fail.

Received response status [FAILED] from custom resource. Message returned: 
Command '['/opt/awscli/aws', 's3', 'sync', '--delete', '/tmp/tmpxnm8nx4y/contents', 's3://bucketname/']' died with <Signals.SIGKILL: 9>. (RequestId: 0bce5a38-15e0-4c46-98c5-106fafcdc1fd)

I increased the memoryLimit of the BucketDeployment to 512 MB, like @zubairzahoor, and after that the deployment succeeded. The total size of the assets in the sources was around 98 MB.

kareldonk commented 9 months ago

@njlynch how about reopening this one?