Closed — spkjp closed this issue 2 years ago.
I couldn't reproduce it. The bundle runs without problems. I would appreciate it if you could share the detailed settings.
@hedrall Sorry, my bad. I forgot one important detail - it only happens when using nested stacks.
Here is a small reproduction. The app entry point:

```ts
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from '@aws-cdk/core';
import { CdkStack } from '../lib/cdk-stack';

const app = new cdk.App();
new CdkStack(app, 'CdkStack');
```
```ts
import * as cdk from '@aws-cdk/core';
import { Bundling } from '@aws-cdk/aws-lambda-nodejs/lib/bundling';
import * as synthetics from '@aws-cdk/aws-synthetics';
import * as lambda from '@aws-cdk/aws-lambda';
import { join } from 'path';

export class CdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    new NestedStack(this, 'NestedStack');
  }
}

class NestedStack extends cdk.NestedStack {
  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);
    new synthetics.Canary(this, 'Canary', {
      runtime: synthetics.Runtime.SYNTHETICS_NODEJS_PUPPETEER_3_0,
      schedule: synthetics.Schedule.rate(cdk.Duration.minutes(30)),
      test: synthetics.Test.custom({
        handler: 'index.handler',
        code: Bundling.bundle({
          depsLockFilePath: join(__dirname, 'js'),
          entry: join(__dirname, 'js', 'index.js'),
          runtime: lambda.Runtime.NODEJS_12_X,
          forceDockerBundling: true,
          minify: true,
        }),
      }),
    });
  }
}
```
Running `cdk deploy --exclusively 'CdkStack'` will fail.
I stumbled into this when using a simple `PythonFunction` construct (no nested stacks).
This is a massive issue and it's a nasty one! @miekassu and I have spent 10+ hours debugging it.
@miekassu has created a repro here: https://github.com/miekassu/lambda-build-fail
To test it:

```sh
git clone https://github.com/miekassu/lambda-build-fail.git
cd lambda-build-fail
npm i
cdk deploy -e 'Stage/Stack'
```
During the deployment you won't see any errors, but the publishing step may take a minute because the bundle is large:

```
[0%] start: Publishing faf451f1ef16a0f6b0ba19f78797d6a4ad88835f5bef11210c7999dd898de1df:current_account-current_region
```
Then you can run:

```sh
du -hs cdk.out
```

and observe that the directory is unreasonably large.
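For anyone without `du` available (the original report below mentions Windows), here is a small stand-alone TypeScript sketch that totals a directory's size recursively; nothing in it is CDK-specific and the function name is my own:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Recursively sum the sizes of all regular files under a directory,
// e.g. dirSize('cdk.out') as a cross-platform stand-in for `du -s`.
function dirSize(dir: string): number {
  let total = 0;
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      total += dirSize(full);
    } else if (entry.isFile()) {
      total += fs.statSync(full).size;
    }
  }
  return total;
}
```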
You can also inspect the S3 bucket and see the asset that was uploaded; it will be ~70 MB, I think.
What happens is not that asset bundling simply gets skipped; that wouldn't be too bad. In reality, the asset for each Lambda somehow becomes the entire project directory (`./`), including all git history (`.git`) and `node_modules`, and all of that gets bundled into a single zip file (> 500 MB) and uploaded to S3. This, of course, fails completely with:

```
Resource handler returned message: "Unzipped size must be smaller than 262144000 bytes (Service: Lambda, Status Code: 400, Request ID:
6b2ad171-9371-4a88-b2bd-d468ed223082, Extended Request ID: null)" (RequestToken: 7b59e8ca-5b76-ac79-d46a-079e2d97d2d8, HandlerErrorCode
: InvalidRequest)
```
Another observation: running `cdk synth` produces the correct asset paths and hashes, but then running `cdk deploy -e 'Stage/Stack'` overwrites them with incorrect values.

It looks like on the line that @supaiku0 pointed out (the minimatch call), the value of `Stack.of(this).stackName` is `Stage-Stack`, while the `pattern` value is `Stage/Stack`.

But if `cdk deploy -e Stage-Stack` is used instead, bundling seems to happen, but deploying doesn't. It just cleanly exits 🤷🏼‍♂️
```
❯ npx cdk deploy -e Stage-Stack
Bundling asset Stage/Stack/DummyTSLambda/Code/Stage...
cdk.out/bundling-temp-559a6b1ee287e12c42e2357ee8b6702dd1dc62a1cb205f390bf1377ff93afefb/index.js 646b
⚡ Done in 2ms
❯ echo $?
0
```
I added more stacks and lambdas to the example repository. Some of my observations while debugging:

- This produces an unwanted Lambda asset path pointing to the root folder of the project:

  ```sh
  cdk diff -e Stage/Stack1
  ```

- Targeting the manifest in the `cdk.out/assembly` folder produces the right asset path when using the `-e` flag. To update the templates I used synth commands, which also bundle the lambdas in the other stacks during synth; this is unwanted and takes a lot of time in large projects:

  ```sh
  cdk synth && cdk -a cdk.out/assembly-Stage diff -e Stage/Stack1
  ```

- Using the `-e` flag with synth commands only bundles lambdas from the wanted stack, but it also points the Lambda asset path to the project root:

  ```sh
  cdk synth -e Stage/Stack1 && cdk -a cdk.out/assembly-Stage diff -e Stage/Stack1
  ```
This known issue prevents using the new `--hotswap` feature with multi-stack projects.
Can confirm this bug still exists with the `-e` flag. I have a similar setup with multiple stacks. The entry for the `NodejsFunction` is the default one, i.e.:

> Derived from the name of the defining file and the construct's id. If the `NodejsFunction` is defined in `stack.ts` with `my-handler` as id (`new NodejsFunction(this, 'my-handler')`), the construct will look at `stack.my-handler.ts` and `stack.my-handler.js`.

My suspicion is that `cdk deploy -e stackName` internally has a wrong context and therefore adds the root of the project as the asset.
CDK version: 1.123.0
What's the alternative/workaround?
> Produces not wanted Lambda asset path pointing to the root folder of the project:

@miekassu what do you mean with the asset path pointing to the root folder? Where do you see this?
@jogold in the example repo, running `cdk synth -e Stage/Stack1` produces `cdk.out/assembly-Stage/StageStack130339B27.template` with:

```json
"Metadata": {
  "aws:cdk:path": "Stage/Stack1/DummyTSLambda/Resource",
  "aws:asset:path": "/Users/kasperhamalainen/Code/lambda-build-fail-test",
  "aws:asset:property": "Code"
}
```

and running `cdk synth Stage/Stack1` produces:

```json
"Metadata": {
  "aws:cdk:path": "Stage/Stack1/DummyTSLambda/Resource",
  "aws:asset:path": "../asset.d9f96a6b873fee3f7c5519b781710f977c697119dd706b692fe853068a6cd746",
  "aws:asset:property": "Code"
}
```
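To spot the bad metadata quickly across a whole assembly, here is a hypothetical helper (the function name and the absolute-path heuristic are my own, not CDK API) that scans synthesized templates and flags any `aws:asset:path` that points at an absolute directory instead of a staged `asset.*` folder:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Scan a cloud assembly directory for templates whose resources carry an
// absolute `aws:asset:path` (the symptom above) rather than a relative one.
function findSuspiciousAssetPaths(assemblyDir: string): string[] {
  const suspicious: string[] = [];
  for (const file of fs.readdirSync(assemblyDir)) {
    if (!file.endsWith('.template.json') && !file.endsWith('.template')) continue;
    const template = JSON.parse(fs.readFileSync(path.join(assemblyDir, file), 'utf8'));
    for (const [id, res] of Object.entries(template.Resources ?? {})) {
      const assetPath = (res as any).Metadata?.['aws:asset:path'];
      if (assetPath && path.isAbsolute(assetPath)) {
        suspicious.push(`${file}:${id} -> ${assetPath}`);
      }
    }
  }
  return suspicious;
}
```

Running it against `cdk.out/assembly-Stage` after `cdk synth -e Stage/Stack1` should flag the `DummyTSLambda` resource shown above.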
A workaround I found was doing the following within my stack. I'm not exactly thrilled to be doing this, but it has helped me avoid more drastic workarounds.

```ts
import { Construct, NestedStack } from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
// BUNDLING_STACKS is the 'aws:cdk:bundling-stacks' context key
import { BUNDLING_STACKS } from '@aws-cdk/cx-api';

export class NodeLambdaConsumer extends NestedStack {
  public readonly lambda: lambda.Function;

  constructor(
    scope: Construct,
    id: string,
    {
      ...
    }: NodeLambdaConsumerProps,
  ) {
    super(scope, id, {});

    // mutate the BUNDLING_STACKS config in ctx so this nested stack's
    // path is considered for bundling
    const config = this.node.getContext(BUNDLING_STACKS);
    config.push(this.node.path);
    // ...
  }
}
```
Something I was also considering was replacing `NodejsFunction` with my own construct that takes care of bundling, but for now I'll stick with the above and see how we go 🤷
Asset bundling is currently broken when using the `--exclusively` option during deploy.

In our case we have an `aws-lambda-nodejs` function with `forceDockerBundling` set to true (but it shouldn't matter). However, due to the following check, bundling never happens when e.g. running `cdk deploy --exclusively 'MyStack'`; the bundling step is skipped completely. Concretely, this minimatch evaluates to `false` for any given stack name, except when using `--all`, which evaluates the wildcard `*` that matches anything.

https://github.com/aws/aws-cdk/blob/9417b0211eb2939f5a853751333f7941d9dd99f8/packages/%40aws-cdk/core/lib/asset-staging.ts#L164

That is because `Stack.of(this).stackName` is not resolved at this point. The screenshot shows the output of running with `--all` (`bundlingStacks` contains `[*]`) against the unresolved stack name.

Tested on Linux and Windows with CDK 1.88.
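To make the failure mode concrete, here is a minimal stand-in for that check. The real code uses the `minimatch` library against the bundling-stacks context; the simplified matcher below only handles the bare `*` wildcard and exact matches, which is enough to show the mismatch between the hierarchical path and the resolved stack name:

```typescript
// Simplified stand-in for the bundlingStacks check in asset-staging.ts.
// Assumption: only the bare '*' wildcard and exact string equality matter
// for this illustration (the real check uses full minimatch globbing).
function matchesPattern(stackName: string, pattern: string): boolean {
  return pattern === '*' || stackName === pattern;
}

function shouldBundle(stackName: string, bundlingStacks: string[]): boolean {
  return bundlingStacks.some((p) => matchesPattern(stackName, p));
}

// `cdk deploy -e 'Stage/Stack'` records the hierarchical path as the pattern,
// but the resolved stack name uses '-' as the separator, so they never match:
console.log(shouldBundle('Stage-Stack', ['Stage/Stack'])); // false -> bundling skipped
// `--all` records the wildcard, which matches any stack name:
console.log(shouldBundle('Stage-Stack', ['*'])); // true
```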