apoorvmote closed this issue 2 years ago.
Looks like for some (unknown) reason an empty zip file landed on S3 for this asset.
It should be fixed if you manually remove the asset file from the bootstrap bucket and then retry.
The real issue is that the docker build with golang is not working. For my use case, golang lambdas are still experimental; I am just keeping an eye on them for future use. I am heavily into NodeJS lambdas and they are working fine, so I just deleted the single golang function and everything got deployed.
Again, the issue is not resolved, but golang lambda is not a priority for me, so I am closing this thread.
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
In my case it was a human error. I had a CDK project in TypeScript with a lambda written in JavaScript. In .gitignore I had an entry that simply excluded JavaScript files. Once I checked out the repository on another computer, the JavaScript file with the lambda was obviously not there. Took me at least two hours to realise that.
This is my typescript code:
const senderLambda = new lambda.Function(params.scope, params.functionName, {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'sender.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'email-sender-lambda')),
  functionName: params.functionName,
  environment: {
    EMAIL_FROM: params.emailFrom,
    EMAIL_TO: params.emailTo,
    EMAIL_BCC: params.emailBcc
  }
});
The lambda was meant to be in email-sender-lambda/sender.js.
Guess it would be good to have a different error/warning message there or simply fail the deployment as soon as the file cannot be found during compilation.
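For anyone who wants that fail-fast behavior today, a minimal guard before handing the path to Code.fromAsset works. This is only a sketch, assuming CDK v2 import paths and the same directory layout as above:

import * as fs from 'fs';
import * as path from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';

const assetDir = path.join(__dirname, 'email-sender-lambda');

// Fail the synth immediately if the handler file is missing (e.g. swallowed by .gitignore),
// instead of packaging an empty or incomplete asset.
if (!fs.existsSync(path.join(assetDir, 'sender.js'))) {
  throw new Error(`Expected sender.js in ${assetDir} - was it excluded by .gitignore?`);
}

const code = lambda.Code.fromAsset(assetDir);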
I'm seeing this issue when trying to upgrade to CDK v2.
assets.json file:
{
  "version": "16.0.0",
  "files": {
    "4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659": {
      "source": {
        "path": "asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659",
        "packaging": "zip"
      },
      "destinations": {
        "<redacted>-us-west-2": {
          "bucketName": "cdk-hnb659fds-assets-<redacted>-us-west-2",
          "objectKey": "4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659.zip",
          "region": "us-west-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::<redacted>:role/cdk-hnb659fds-file-publishing-role-<redacted>-us-west-2"
        }
      }
    },
    "0af5e7a7e0c998e4fa0c980dc1158a921cc5b19392ddc8dc5d92a0a5a62155fc": {
      "source": {
        "path": "ses-validation-stack.template.json",
        "packaging": "file"
      },
      "destinations": {
        "<redacted>-us-west-2": {
          "bucketName": "cdk-hnb659fds-assets-<redacted>-us-west-2",
          "objectKey": "0af5e7a7e0c998e4fa0c980dc1158a921cc5b19392ddc8dc5d92a0a5a62155fc.json",
          "region": "us-west-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::<redacted>:role/cdk-hnb659fds-file-publishing-role-<redacted>-us-west-2"
        }
      }
    }
  },
  "dockerImages": {}
}
My cdk.out directory:
cdk.out/tree.json
cdk.out/ses-validation-stack.template.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/opensearch-2021-01-01.service.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/opensearch-2021-01-01.paginators.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.d.ts
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.js.map
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.js
cdk.out/manifest.json
cdk.out/cdk.out
cdk.out/ses-validation-stack.assets.json
So the asset files are certainly in the cdk.out directory and where they are supposed to be according to the assets.json. I'll try to dig some more into how this differs between CDK v1 and v2...
Assets and directory structure are identical (excluding hashes) with CDK v1. Seems to have been an issue introduced in v2? The error output would imply to me that there's something wrong with how the zip path is being given somewhere, but the verbose output didn't give much additional insight on that.
This does appear to be the result of some pathing assumptions. In our setup, we bundle cdk.out and deploy it separately from where it was synthesized. I get this error when doing that, but not if I deploy from the same place I ran synth.
Reopening issue for visibility and tracking
So I originally thought this issue was exclusive to CDK v2, but it turns out that this is not the case. We hadn't seen it prior to the migration because assets were being cached. This issue is blocking us from doing any deployments to new environments.
Based on my testing, the issue was introduced between v1.133.0 and v1.139.0
Is there a way to disable all asset caching? Is the easiest way to handle this to delete the CDK bootstrap stack as well and recreate it? If I deploy with a good version, destroy my stack, then deploy a bad version, it will still succeed. I can only reproduce the issue with a newly bootstrapped account which is a pain in the ass to debug...
This appears to be a result of the --no-staging flag. I can successfully deploy zip assets when staging is enabled, but not when it is disabled. This is inconsistent with the behavior of other asset types. For example, docker image tarballs upload just fine when staging is disabled. We have staging disabled so that we don't copy our docker image tarballs unnecessarily to the cdk.out directory, since they can be quite large. The zip packaging type seems to want to work with --no-staging, but it doesn't.
So I guess the question is: is this a bug or is this intended behavior with poor error messaging?
For reference, when --no-staging is passed, asset sources inside of assets.json refer to a non-existent tmp directory like this one: /tmp/jsii-kernel-GqHEA2
This will probably be my last update here. The asset code is abstracted down so many levels that it's head-spinning to try to understand what it is actually doing, and there's little to no documentation on its internals.
Here's what I've gathered: the --no-staging flag makes it so that assets are not copied to cdk.out. The aws-lambda-nodejs package will perform a build on Code assets and put the build output in a folder inside of /tmp. aws-lambda-nodejs expects that these files will be copied into cdk.out and will delete the directory when it is finished, but when --no-staging is passed, the assets.json will still point to this non-existent folder in /tmp and fail to zip it.
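For context, a minimal construct that goes through this bundling path might look like the sketch below (CDK v2 import paths and a hypothetical entry file; the v1 @aws-cdk/aws-lambda-nodejs package the comment refers to follows the same flow). Synthesizing it with cdk synth --no-staging is what leaves assets.json pointing at the temporary bundling directory described above.

import * as path from 'path';
import { Stack, StackProps } from 'aws-cdk-lib';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';

export class ReproStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // esbuild bundles the entry file into a temporary staging directory first;
    // without staging, the asset manifest can end up referencing that temp directory.
    new NodejsFunction(this, 'Handler', {
      entry: path.join(__dirname, 'handler.ts'), // hypothetical entry file
      runtime: Runtime.NODEJS_16_X,
    });
  }
}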
Without any comments from the maintainers, it's hard to say if this behavior is expected or not, whether it should be fixed, or whether to add clearer messaging on the effects. I'd be happy to make any one of these changes, but I need to know that this is something the CDK team is interested in seeing fixed before I dedicate more time to this.
I think I know why this is happening in my specific case. I found this occurring (as in, the error showing but then the deploy succeeding) when I was using the following code:
new s3deploy.BucketDeployment(this, `DeployWithInvalidation1`, {
  sources: [s3deploy.Source.asset('../out', { exclude: ['!*\\.*'] })],
  destinationBucket: rootSiteBucket,
  distribution,
  distributionPaths: ['/*'],
  prune: false,
});
My explanation for why it's occurring is the \\. escape characters, needed to do globbing on files that don't have a . character, because . is already a special character. It may have something to do with the cdk script running without checking for escape characters vs escape characters working correctly during deployment? I am not using zip files, just a plain old directory reference to the out dir.
Got the fail: 🚨 WARNING: EMPTY ZIP FILE 🚨 message, so heading over here to provide details. This has started out of the blue in the last few days. We build the first time, get this warning, build again, and everything works fine. See the details below (worth noting we've experienced this with 3 different developers, all of whom have very different setups, OS etc.).
OS: Linux (WSL2) 5.10.102.1-microsoft-standard-WSL2 x86_64 - Ubuntu 20.04.4 LTS Node: v16.13.0 CDK: 2.18.0 (build 75c90fa) Package Manager: NPM 8.5.5
We're using the @aws-cdk/aws-lambda-go-alpha construct, with the following bundling options:
this.lambda = new goLambda.GoFunction(this, id, {
  entry: props.entry,
  environment: props.environmentVariables,
  architecture: lambda.Architecture.ARM_64,
  vpc: props.vpc,
  timeout: Duration.seconds(300),
  logRetention: props.environment === 'prod' ? logs.RetentionDays.FIVE_YEARS : logs.RetentionDays.ONE_DAY,
  insightsVersion: lambda.LambdaInsightsVersion.VERSION_1_0_119_0,
  tracing: lambda.Tracing.ACTIVE,
  layers: props.layers,
  bundling: {
    cgoEnabled: true,
    goBuildFlags: ['-ldflags "-s -w"', '-trimpath'],
    environment: {
      "GOOS": "linux",
      "GOARCH": "arm64",
      ...(process.platform == "linux") && { "CC": "aarch64-linux-gnu-gcc" },
      ...(process.platform == "darwin") && { "CC": "aarch64-unknown-linux-gnu-gcc" }
    }
  }
})
Expecting the built bootstrap binary in the zip. I've already deleted the contents of cdk.out, so I'm not sure what the actual contents were.
Finally, I don't think this is reproducible; as I said, it's hit and miss when it happens. Let me know if I can provide any other info or help with troubleshooting.
This just happened with us as well using the aws_lambda_python_alpha.PythonFunction construct. It didn't happen again after deleting the cdk.out directory and re-synthesizing.
OS: macOS Monterey v12.2.1 Node: v16.14.2 CDK: 2.10.0 (build e5b301f) Package Manager: pip
I had this issue occurring from aws_lambda_python_alpha.PythonFunction. Somehow I had got into a state where the cdk.out/asset.{hash}/ folder had the correct files, but the corresponding ZIP file uploaded to the CDK S3 artifacts bucket was empty. It's possible this empty zip was uploaded due to me cancelling cdk deploy at the wrong time.
I was able to resolve my error by deleting the empty ZIP file from S3 and deleting cdk.out.
OS: Ubuntu 20.04.4 LTS Node: v14.18.3 CDK: 2.8.0 (build 8a5eb49) Package Manager: pip
Creating lambda layer:
layer = LayerVersion(
    scope=self,
    id='ExampleLayer',
    code=Code.from_docker_build(path=f'{root}/lambda_layer')
)
Which contains this dockerfile:
FROM node:16.13.2
RUN ls
RUN mkdir -p /asset/bin/
RUN cp -L /usr/local/bin/node /asset/bin/node
RUN npm install --prefix /asset/bin aws-cdk@1.x
# The next line (the symlink) breaks the deployment (not the build)
RUN ln -s /asset/bin/node_modules/aws-cdk/bin/cdk /asset/bin/cdk
RUN /asset/bin/cdk --version
RUN /asset/bin/node --version
The line where it creates a symbolic link (ln -s) breaks AWS CDK, and it always produces an empty zip.
Also, on a fresh deployment (fresh build) I always get this error (when using symlinks):
AwsCdkServerlessStack: deploying...
[0%] start: Publishing 816a3bd516eda114880e099f1dc8b2cc022b7f54f95537aab836507dac214120:current
[0%] start: Publishing 4964d66ada9b47b1aa20846dd0bb38d6614bcdc356fb538b8d1e74a9c8a3d862:current
[50%] success: Published 4964d66ada9b47b1aa20846dd0bb38d6614bcdc356fb538b8d1e74a9c8a3d862:current
(node:1126) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, stat '/Users/laimonassutkus/Desktop/AwsCdkServerless/SourceCode/cdk.out/asset.816a3bd516eda114880e099f1dc8b2cc022b7f54f95537aab836507dac214120/bin/cdk'
(Use node --trace-warnings ... to show where the warning was created)
(node:1126) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:1126) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
The message in the codebuild step of our cdk pipeline told me to come here. Happening to me now as well. Docker image: lambda.Runtime.NODEJS_14_X.bundlingImage. The asset should contain Lambda code but it's empty.
Using CDK v1.149.0
Is this file the culprit potentially?
assembly-CORESharedServicesCodePipeline-test-us-east-1-Deploy/.cache/03f68eef42c44051ca20172644612baf89e32096a165aedade31e26caefa45ca.zip
seems like it may be... I deleted that file and all its versions from S3 and added a command to the Lambda Docker build container to delete the cdk.out folder before running cdk synth again, but to no avail.
I'm sure this isn't very helpful, but I'm starting to see this more and more often as the number of lambdas increases:
OS: OSX 12.3.1 (21E258) NodeJS Version: NODEJS_14_X, CLI Version: 2.22.0 Package manager: NPM 6.14.16 What is the asset supposed to contain: Appsync lambda handler Reproducible: Nope, it's happening randomly.
My issue was resolved... somehow on the initial deploy, Docker didn't have access to the node_modules in Lambda and empty assets were uploaded. I deleted the cdk.out folder locally, tore down the stacks and the pipeline, re-uploaded, and was good to go.
I saw the error simply when I had a LayerVersion pointing to a folder with subfolders but without files.
I had the same symptom today on a large Node 16 project using CDK directly (no Serverless or other framework). Deleting /tmp/cdk.out resolved the issue.
I had this today, on a pretty small project:
OS version: Debian GNU/Linux 11
Nodejs version: 16.15.1
CLI version: 2.26.0 (build a409d63)
package manager: npm
what the asset is supposed to contain: Compiled JS from source Typescript files
reproducible: Went away after I deleted my cdk.out directory
Anecdotally, I've been doing a lot of deploys, and had quite a lot of asset folders by the time this happened.
@LeeMartin77 & @thovden was it persistent until you deleted the cdk.out dir? It's always been a one-off when I've had it (deploying again without changing anything will pass).
Wonder if deleting that dir will reduce how often it happens.
For me it was persistent before deleting the cdk.out folder.
I ended up clearing my folder before trying again, but can confirm clearing out the folder did make it work without error.
I also just got this issue - it happened after I ran a cdk destroy, and when I did a deploy again I got this message.
Mac OS 12.1.4, MacBook Pro 14" 2021 M1 Pro NPM version 8.12.1 Node version 18.4 CDK version 2.31.1
I am building a CDK project all in Typescript - Lambdas are Typescript compiled using esbuild.
I removed the cdk.out folder and tried again and that worked fine.
Just got this error message. I just ran the cdk deploy again, and it didn't have an empty asset file. I didn't even need to clear out the cdk.out folder.
Mac OS 12.4 Monterey MacBook Pro (16-inch, 2019) NPM: 8.12.1 Node: 18.4.0 CDK 2.31.1 Typescript: 4.7.4
CDK and Lambda Functions are coded in Typescript, and using NodeJsFunction, and using esbuild.
Got this error message when upgrading aws-cdk-lib version from 2.31.0 to 2.32.0.
I also get this error message when I run cdk deploy and then cancel it using CTRL + C during the bundling stage.
When I try to redeploy, it appears to bundle everything again but during the CloudFormation deployment, this error message is shown and fails to deploy.
If I redeploy again, it bundles everything up and then deploys successfully.
It appears as though the second bundling attempt isn't successful?
I get this error (🚨 WARNING: EMPTY ZIP FILE 🚨) when trying to bootstrap a Golang package and upload lambda assets to s3. Is the golang issue mentioned in the beginning of this thread still not resolved? What are some workarounds?
MacOS Monterey 12.3.1 NodeJS v12.22.12 CDK: 2.33.0 Typescript: 4.2.4
I even tried removing the build/cdk.out directory.
Same here
MacOS Monterey 12.3.1 NodeJS v16.13.1 CDK: 2.38.0 Typescript: 4.2.4
My Python code does not get bundled even though I have a requirements.txt and valid code.
Interesting. I just got this running our ECS integration tests: ec2/integ.environment-file-integ.environment-file
I'm not sure if it's due to some sort of restructure, but this issue has become far more prevalent in the last week or so, around the same time as we updated cdk to v2.39.0.
It's gone from happening ~2 times a week to 3 times a day, (deploying 10-30 times a day).
There is no rhyme or reason as to why this is occurring, and a re-deploy always resolves the issue; no need to delete any cache files.
This occurred for me using an internal build system (@quaeler) in which my CDK package built a lambda supplied by another package; both packages were in my local workspace; however, the lambda package had been built clean and not built since.
Despite the Makefile in cdk.out having the correct formula (i.e. cd into the lambda package, build, bats, ...), it did not perform this build. Building release in the lambda package and then attempting the ... cdk deploy ... again succeeded.
Also encountered the error (🚨 WARNING: EMPTY ZIP FILE 🚨) now while performing cdk deploy. It had never happened before. Now it occurred once, after I had aborted cdk deploy in the middle of the process the last time before. After I deployed the exact same code state to another AWS account and then again to the affected account, it did not occur again.
OS: MacOS Catalina 10.15.7 Runtime: NodeJS 16.15.1 Packager: yarn 1.22.19 + esbuild 0.14.42 AWS CLI: aws-cli/2.7.7 Python/3.9.13 Darwin/19.6.0 CDK: 2.26.0 (build a409d63)
Just got it again on my machine. Removing cdk.out and retrying with no other code changes fixed it for me (as it did before; I believe I've posted that in this thread already).
Mac OS 12.5.1 NodeJS 18.7.0 CDK 2.41.0 esbuild 0.14.54
I had issues today. My docker service (on Windows) froze and became non-responsive, and the lambda packaging produced an empty zip file.
I had to clean cdk.out and reboot my PC to get everything working again.
Ok, I've narrowed down what's causing this for me.
Whenever I cancel a deployment for a stack (ctrl-c), the next time I deploy and it gets to that step, ~15% of the time I'll get this error. Redoing the deployment always resolves the issue. I've noticed this to be the case every time I've seen it happen over the last two weeks.
Happened for the first time, then immediately again. cdk.out was very full (like 4 GB).
Windows 11, latest update. Node v16.14.2 CDK 2.41.0 (build 6ad48a3) NPM 8.18.0
After the first instance I cleared out the cdk folder per the instructions (worth noting it hung at 99% for two minutes despite the folder being empty), resynthed and deployed, and it failed again. I let it run fully that time, and it did in fact fail. The issue persisted after a restart. Fully deleting the cdk.out folder, not merely its contents (in case an invisible folder hung around), and recreating it did not solve it either. Just running the deploy again without any of the in-between steps did nothing as well. The issue occurred about six times in a row before I just gave up for the night, since I'm like 4 hours over the work day anyway.
@abury Was the same in my case. So far the issue has only occurred after I have canceled a deployment before.
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
Ubuntu 20.04 on WSL Node v18.6.0 CDK 2.44.0 (build bf32cb1) NPM 8.13.2
The asset was supposed to contain a lambda function. I thought I had caused it because of a recently emptied sub-directory. Creating files in all empty dirs has not solved it; neither did deleting the cdk.out directory.
git diff revealed the change causing this to happen: I was creating a dependencies layer (by installing with pip into a directory and adding it as a lambda layer) for a function with an empty dependencies file.
Removing the empty requirements file and conditioning the layer on finding it in the lambda directory fixed the issue for me.
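The guard described above looks roughly like the sketch below (TypeScript shown for consistency with the rest of the thread; the original project used Python and pip, and the paths here are hypothetical):

import * as fs from 'fs';
import * as path from 'path';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

// Only create the dependencies layer when a non-empty requirements file exists;
// an empty requirements.txt led to an empty layer asset (and the empty-zip error).
export function maybeDependenciesLayer(scope: Construct): lambda.LayerVersion | undefined {
  const requirementsPath = path.join(__dirname, 'my_function', 'requirements.txt'); // hypothetical
  if (!fs.existsSync(requirementsPath) || fs.readFileSync(requirementsPath, 'utf8').trim() === '') {
    return undefined;
  }
  return new lambda.LayerVersion(scope, 'DependenciesLayer', {
    code: lambda.Code.fromAsset(path.join(__dirname, 'layer_build')), // hypothetical pre-built dir
  });
}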
Has this issue been really closed? I am getting the following error in the Assets step of my code pipeline. The same code works fine when I deploy locally via cdk deploy.
error: [0%] fail: 🚨 WARNING: EMPTY ZIP FILE 🚨
error: [0%] fail:
error: [0%] fail: Zipping this asset produced an empty zip file. We do not know the root cause for this yet, and we need your help tracking it down.
dockerEnabledForSynth: true is set for the pipeline. The build project runs on aws/codebuild/standard:5.0.
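For reference, the kind of pipeline definition being described is roughly the following. This is only a sketch, not the actual project; the repository and connection values are hypothetical:

import { Stack, StackProps } from 'aws-cdk-lib';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';
import { Construct } from 'constructs';

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new CodePipeline(this, 'Pipeline', {
      // Lets Docker-based bundling (e.g. PythonFunction / GoFunction) run during synth.
      dockerEnabledForSynth: true,
      synthCodeBuildDefaults: {
        buildEnvironment: { buildImage: codebuild.LinuxBuildImage.STANDARD_5_0 },
      },
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.connection('org/repo', 'main', {
          connectionArn: 'arn:aws:codestar-connections:us-east-1:111111111111:connection/example', // hypothetical
        }),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });
  }
}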
Yeah I'm definitely still seeing this issue in development. Totally get it's a really tough one to solve, but it might need to be re-opened sadly.
@abury Just got mine fixed. I had to update the CDK CLI version to 2.46.0 (or newer).
In my case it was human error, too. My handler function had a non-default name (i.e., not handler) and I had forgotten to specify the handler property in the NodejsFunctionProps accordingly.
A more precise error message in this case would be useful. For example: "No export 'handler' found in entry file".
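For reference, the fix was simply naming the export. A minimal sketch, assuming the entry file exports myHandler and CDK v2 import paths:

import * as path from 'path';
import { Stack } from 'aws-cdk-lib';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';

class MyStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    // The entry file exports `myHandler` instead of the default `handler`,
    // so it has to be named explicitly via the handler prop.
    new NodejsFunction(this, 'SenderFunction', {
      entry: path.join(__dirname, 'sender.ts'), // hypothetical entry file
      handler: 'myHandler',
    });
  }
}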
What is the problem?
I updated the lambda function dependencies and am deploying the lambda function, but it fails with the following error.
I also have another API under the same project; I updated its lambda function dependencies and it was deployed successfully.
Both APIs and their lambda functions are almost identical to each other. However, only one gets deployed and the other one doesn't.
I deleted the cdk.out folder and tried to deploy again, and it fails with the same error each time.
Reproduction Steps
I have a simple lambda function that I am trying to deploy as follows.
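(The original snippet is not preserved in this copy of the issue; the sketch below is a hypothetical stand-in showing the shape of such a definition, assuming a plain lambda.Function whose code comes from a local asset directory.)

import * as path from 'path';
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class ApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    // Hypothetical function; the real project bundles its handler and dependencies
    // from a local directory, which ends up as an empty zip on upload.
    new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, 'lambda')), // hypothetical path
      timeout: Duration.seconds(30),
    });
  }
}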
What did you expect to happen?
I expected it to deploy all of my lambda functions.
What actually happened?
It failed with error
Uploaded file must be a non-empty zip
CDK CLI Version
2.8.0
Framework Version
No response
Node.js Version
v16.13.2
OS
Ubuntu 20.04 on WSL 2
Language
Typescript
Language Version
~3.9.7
Other information
No response