aws / aws-cdk

The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
https://aws.amazon.com/cdk
Apache License 2.0

(aws-service-catalog): SIGKILL: 9 CustomCDKBucketDeployment Lambda runs out of memory #29862

Closed · schnipseljagd closed 6 months ago

schnipseljagd commented 7 months ago

Describe the bug

Received response status [FAILED] from custom resource. Message returned: Command '['/opt/awscli/aws', 's3', 'cp', 's3://cdk-accel-assets-XXXX-eu-central-1/d91fe0e02571a1639b83572917ebc528352265bd305ff570f75cae8419dae522.zip', '/tmp/tmpd41vn6fr/contents']' died with <Signals.SIGKILL: 9>. (RequestId: 90349d17-82a5-4ed1-b285-aad7469cb046)

Expected Behavior

I should be able to configure the memory limit for the bucket deployment Lambda used by the ProductStackSynthesizer.

Current Behavior

It is not configurable since it is hard-coded deep in the synthesizer: https://github.com/aws/aws-cdk/blob/8e2cbae3b479efe76d601c343be0ae536e3e1805/packages/aws-cdk-lib/aws-servicecatalog/lib/private/product-stack-synthesizer.ts#L85

Reproduction Steps

Deploy a sufficiently large CDK stack through a Service Catalog product stack; a minimal sketch follows.
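
For illustration, a minimal repro sketch; the construct IDs, product metadata, and the oversized asset directory are placeholders, not taken from the reporter's setup:

```ts
// Repro sketch: a ProductStack whose bundled asset is large enough (hundreds
// of MB zipped) that the asset copy into the product asset bucket exhausts
// the deployment handler's memory. All names are illustrative.
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as servicecatalog from 'aws-cdk-lib/aws-servicecatalog';
import { Construct } from 'constructs';

class BigProductStack extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string, props: servicecatalog.ProductStackProps) {
    super(scope, id, props);
    // The oversized asset that triggers the SIGKILL during deployment.
    new lambda.Function(this, 'BigFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('./big-asset-dir'),
    });
  }
}

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ReproStack');
const assetBucket = new s3.Bucket(stack, 'ProductAssets');

new servicecatalog.CloudFormationProduct(stack, 'Product', {
  productName: 'big-product',
  owner: 'me',
  productVersions: [{
    productVersionName: 'v1',
    cloudFormationTemplate: servicecatalog.CloudFormationTemplate.fromProductStack(
      new BigProductStack(stack, 'BigProductStack', { assetBucket }),
    ),
  }],
});
```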

Possible Solution

No response

Additional Information/Context

No response

CDK CLI Version

2.134.0 (build 265d769)

Framework Version

No response

Node.js Version

v18.20.0

OS

Linux

Language

TypeScript

Language Version

No response

Other information

No response

schnipseljagd commented 7 months ago

Btw, increasing the memory limit of the Lambda manually fixes the problem for the time being.
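
For anyone needing that workaround from CDK code rather than the console, one option is an aspect-based escape hatch. Note that matching the construct path on `CustomCDKBucketDeployment` is an assumption about the singleton handler's id and may vary between aws-cdk-lib versions:

```ts
// Workaround sketch: raise the asset-copying handler's memory via an aspect.
import { App, Aspects, IAspect, Stack } from 'aws-cdk-lib';
import { CfnFunction } from 'aws-cdk-lib/aws-lambda';
import { IConstruct } from 'constructs';

class RaiseBucketDeploymentMemory implements IAspect {
  public visit(node: IConstruct): void {
    // Path match is an assumption; inspect your synthesized tree to confirm.
    if (node instanceof CfnFunction && node.node.path.includes('CustomCDKBucketDeployment')) {
      node.memorySize = 1024; // MiB, up from the hard-coded default
    }
  }
}

const app = new App();
const stack = new Stack(app, 'MyStack'); // the stack that defines the ProductStack
Aspects.of(stack).add(new RaiseBucketDeploymentMemory());
```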

nmussy commented 7 months ago

I don't think increasing the memory allocated to the Lambda is the correct solution here. It's a waste of resources and would not scale with larger file sizes. The file downloaded from S3 should not be held fully in memory until the download completes; it should be progressively saved to disk and assembled.

I don't think this is possible with the AWS CLI, but it might be doable with boto3 (see Multipart transfers). If not, it could be done with the SDK (see Upload or download large files to and from Amazon S3 using an AWS SDK).

EDIT: boto3 should be good to go, and a lot simpler than re-implementing the same behavior with the SDK; see the docs:

multipart_threshold – The transfer size threshold for which multipart uploads, downloads, and copies will automatically be triggered.
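
multipart_threshold is a parameter of boto3's TransferConfig, which would apply inside the Python handler. For comparison, here is a rough sketch of the SDK route in this issue's language (TypeScript with the AWS SDK for JavaScript v3), streaming the object to disk chunk by chunk instead of buffering it in memory; bucket, key, and destination path are placeholders:

```ts
// Streaming download sketch: pipe the S3 object body straight to disk so
// memory usage stays flat regardless of object size.
import { createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
import { Readable } from 'node:stream';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

async function downloadToDisk(bucket: string, key: string, dest: string): Promise<void> {
  const s3 = new S3Client({});
  const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
  await pipeline(Body as Readable, createWriteStream(dest));
}
```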

pahud commented 7 months ago

@nmussy Thank you, and I agree with you.

That said, it also makes sense to expose the memoryLimit. You're welcome to submit a PR that exposes memoryLimit for this; a rough sketch of the shape follows below.

Please help us prioritize with 👍 .
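
To make the ask concrete, a hypothetical sketch of how the exposed option could look from the user's side. The memoryLimit prop on ProductStackProps is an assumed shape for the fix, not the aws-cdk-lib API at the time of this issue:

```ts
// Hypothetical API sketch: `memoryLimit` does not exist on ProductStackProps
// yet; the cast is only here so the sketch type-checks against today's lib.
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as servicecatalog from 'aws-cdk-lib/aws-servicecatalog';
import { Construct } from 'constructs';

class MyProductStack extends servicecatalog.ProductStack {
  constructor(scope: Construct, id: string, assetBucket: s3.IBucket) {
    super(scope, id, {
      assetBucket,
      memoryLimit: 1024, // MiB for the asset-copying Lambda (hypothetical prop)
    } as servicecatalog.ProductStackProps);
  }
}
```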

nmussy commented 7 months ago

I've opened another issue for my solution, so we can keep track of it after this one is fixed by the memoryLimit prop: #29898

github-actions[bot] commented 6 months ago

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
