Closed: zubairzahoor closed this issue 3 years ago
Increasing the memoryLimit of the BucketDeployment Lambda solved this. The error should be actionable in some way; it is hard to establish a relation between the bucket deployment Lambda's memory limit and adding a CloudFront Function.
import { BucketDeployment, Source } from "aws-cdk-lib/aws-s3-deployment";

// Deploy site contents to S3 bucket
new BucketDeployment(this, "S3SiteContent", {
  sources: [Source.asset("./out/xyz")],
  destinationBucket: siteBucket,
  distribution,
  distributionPaths: ["/*"],
  memoryLimit: 512, // raised from the default; this resolved the failure
});
Thanks for the bug report.
I'm a bit perplexed as to the stated cause. Per the documentation for memoryLimit: "If you are deploying large files, you will need to increase this number accordingly." This refers to the size of the sources, however, not the distribution. The only interaction the BucketDeployment has with the distribution -- as far as I can tell -- is to create an invalidation request after deployment. The configuration of the Distribution itself should not impact the memory requirements of the BucketDeployment.
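One way to sanity-check that (a sketch, assuming the same stack as the snippet above; the construct ID is made up) is to deploy the same sources once without the distribution props, which skips the invalidation step entirely:

// Hypothetical diagnostic: same sources, no CloudFront invalidation.
// If this also dies with SIGKILL, the memory pressure comes from the
// `aws s3 sync` of the sources, not from the Distribution configuration.
new BucketDeployment(this, "S3SiteContentNoInvalidation", {
  sources: [Source.asset("./out/xyz")],
  destinationBucket: siteBucket,
});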
Was the source of the Function part of the source assets, by any chance (or did it require changes to files in that directory)? If so, I can see how adding more sources triggered the memory limit to be exceeded.
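If that is the case, one mitigation (a sketch; the functions/** glob is an assumption about the project layout) is to exclude the function source from the synced assets:

// Sketch: keep the CloudFront Function source out of the deployed assets.
// Source.asset accepts the standard asset options, including exclude globs.
new BucketDeployment(this, "S3SiteContent", {
  sources: [Source.asset("./out/xyz", { exclude: ["functions/**"] })],
  destinationBucket: siteBucket,
  distribution,
  distributionPaths: ["/*"],
});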
By the time you receive the error message mentioned (Received response status [FAILED] from custom resource. Message returned...), the CDK has very little control over the error message format and style; this is all based on internal CloudFormation logic for custom resources. Any suggestions for other ways we can highlight the reason -- or make the pointer toward memoryLimit more obvious -- are certainly welcome.
This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.
I had the same issue, which caused deployment of a stack to fail.
Received response status [FAILED] from custom resource. Message returned:
Command '['/opt/awscli/aws', 's3', 'sync', '--delete', '/tmp/tmpxnm8nx4y/contents', 's3://bucketname/']' died with <Signals.SIGKILL: 9>. (RequestId: 0bce5a38-15e0-4c46-98c5-106fafcdc1fd)
I increased the memory limit of the BucketDeployment to 512 MB like @zubairzahoor, and after that the deployment succeeded. The total size of the assets in the sources was around 98 MB.
@njlynch how about reopening this one?
Bucket deployment fails when trying to add a CloudFront Function that adds security headers to the response, with this error:
Received response status [FAILED] from custom resource. Message returned: Command '['/opt/awscli/aws', 's3', 'sync', '--delete', '/tmp/tmp942wfkpo/contents', 's3://dev.safeme.io/']' died with <Signals.SIGKILL: 9>. (RequestId: 11e553f6-f698-460b-91d7-8c89fa66f544)
When I tried to add a simpler version of the function that just returns the response, like this:
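A sketch of such a passthrough (the construct ID and the aws-cdk-lib v2 import path are assumptions, not the exact code from this stack; it would be attached to the distribution via functionAssociations with FunctionEventType.VIEWER_RESPONSE):

import * as cloudfront from "aws-cdk-lib/aws-cloudfront";

// Sketch: an inline CloudFront Function that returns the response untouched.
// A VIEWER_RESPONSE handler receives event.response and must return it.
const passthrough = new cloudfront.Function(this, "Passthrough", {
  code: cloudfront.FunctionCode.fromInline(
    "function handler(event) { return event.response; }"
  ),
});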
The deployment passes, and sometimes, after adding the headers back in the next deployment, it even deploys the function code that I want to deploy; but it does not work in a single direct deployment.
Moreover, fromFile does not work at all for me. Sometimes, when checking the CloudFront Function in the console during deployment, it has even been deployed, but eventually the deployment fails and the update is rolled back.
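For reference, a fromFile variant would be along these lines (a sketch using the same cloudfront import as above; the file path and construct ID are assumptions):

// Sketch: load the function body from a file instead of inlining it.
const securityHeaders = new cloudfront.Function(this, "SecurityHeaders", {
  code: cloudfront.FunctionCode.fromFile({
    filePath: "functions/security-headers.js",
  }),
});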
Environment: Other
Apparently, the BucketDeployment Lambda times out (I checked the logs), and there is no new file being added -- just a few lines of change to deploy and integrate the CloudFront Function.
This is a :bug: Bug Report.