jakemraz opened this issue 2 years ago
I've checked it more. I'm using CDK to deploy my Lambda container, and I deployed to ap-northeast-2, which does not seem to support the ARM-based Lambda runtime. My PC is a Mac M1, so Docker may be building the image for the ARM architecture; I suspect this is causing the problem. How do I build my Dockerfile for the x86 runtime on my Mac M1?
I've tested it more. When I deployed my Lambda container CDK project from an x86 machine to ap-northeast-2, it worked. But when I deployed it from a Mac M1 to us-west-2, which does support the ARM runtime for Lambda, it failed with the same error message: 'exec format error'.
We have the same issue with nodejs
Interesting, my Lambda Node container works well; only the Lambda Python container has a problem.
EDIT: Never mind, read too fast, sorry for the noise!
I can see the same error with the "arm64" architecture. It works with the default "x86_64" type.
If I understand you correctly, you could either set the Lambda architecture to Arm64 in the function props or make sure that Docker is building an x86_64 image. If you're building on a Mac M1, then I guess it will pull and build an arm64 image. You could do
DOCKER_DEFAULT_PLATFORM=linux/amd64 cdk deploy ...
Depending on how you specify the image, I think there are also options to set the platform directly in the CDK construct.
I have the same issue with an M1 macOS deployment. The same deployment works fine via Windows or Linux. Since I build the container image separately in CDK as shown below, I did two things to make it work: 1) add build args in the DockerImageAsset, and 2) force the use of an x86_64 base image.
aws_ecr_assets.DockerImageAsset(
self, self._map_id("infra-runtime"), directory="./handlers", build_args={"--platform": "linux/amd64"}
)
FROM public.ecr.aws/lambda/python:3.8.2021.12.18.01-x86_64
COPY . .
Afterwards, I run cdk deploy and it works on my M1 Mac.
I experienced a similar problem; here is my configuration:
numpy==1.22.0
matplotlib==3.1.2
scipy==1.7.3
PyWavelets==1.1.1
pandas==0.25.3
sklearn==0.0
Cython==0.29.21
gunicorn==20.0.4
boto3==1.12.17
stopit==1.1.2
zstandard==0.14.0
s3fs==0.6.0
simplejson==3.17.2
termcolor==1.1.0
**ERROR**
ERROR: Command errored out with exit status 1: /var/lang/bin/python3.9 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"'; file='"'"'/tmp/pip-install-c_elvgk2/zstandard_2d84094997b84f3a84c5f39acfe7e937/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-93olafiv/install-record.txt --single-version-externally-managed --home /tmp/pip-target-4c3u_lad --compile --install-headers /tmp/pip-target-4c3u_lad/include/python/zstandard Check the logs for full command output. WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available. You should consider upgrading via the '/var/lang/bin/python3.9 -m pip install --upgrade pip' command.
I am trying to deploy with the help of the Serverless Framework, running it from a GitLab job. The deploy is successful, but when I run the Lambda I get the same error: error: fork/exec /lambda-entrypoint.sh: exec format error
These are the variables that change to use ARM on Serverless:
# serverless.yml
...
provider:
  name: aws
  architecture: arm64
  ecr:
    images:
      appimage:
        path: ./
...
And the Dockerfile:
FROM public.ecr.aws/lambda/nodejs:14
COPY . ${LAMBDA_TASK_ROOT}/
RUN npm install --target ${LAMBDA_TASK_ROOT}
I had the same issue; try to build the docker image with the --platform=linux/amd64 flag:
docker build . --platform=linux/amd64
juan-pascale's fix worked for me, easy peasy; just add --platform=linux/amd64.
ARM and AMD are different platforms.
I can confirm. Building a container on an ARM Mac Mini resulted in this error. It worked fine on my laptop, which runs Ubuntu on a non-ARM architecture. I only ever ran into this issue when switching to the computer with an ARM processor.
This still recurs regardless of the runtime. I have seen it with Node, Python, and the JVM.
It should be possible to specify in the serverless config which architecture docker build targets.
EDIT: You have to force the platform:
provider:
  name: aws
  architecture: x86_64
  ecr:
    # In this section you can define images that will be built locally and uploaded to ECR
    images:
      appimage:
        path: ./
        platform: linux/amd64
I ran into this, but I solved it by adding a Python shebang (#!/usr/bin/env python3) to my app and using BOTH ENTRYPOINT [ "/var/task/app.py" ] and CMD [ "app.handler" ] in my Dockerfile. I think I could have skipped the shebang if I had instead used ENTRYPOINT [ "/usr/local/bin/python", "/var/task/app.py" ]. It seems the dockerized Lambda needs a little help knowing how to execute the code. This does not match the AWS docs; however, I tried following the docs, and this is what got it working.
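Putting those pieces together, the Dockerfile would look roughly like the sketch below (a hedged sketch, not the author's exact file: the app.py path and the base-image tag are assumptions, and the script must be executable for the shebang to take effect):

```dockerfile
FROM public.ecr.aws/lambda/python:3.8-arm64
COPY app.py ${LAMBDA_TASK_ROOT}/
# The shebang (#!/usr/bin/env python3) lets the script run directly,
# so it must be marked executable
RUN chmod +x ${LAMBDA_TASK_ROOT}/app.py
ENTRYPOINT [ "/var/task/app.py" ]
CMD [ "app.handler" ]
```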
For reference I saw this used here: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-alt
Which is odd because I am using FROM public.ecr.aws/lambda/python:3.8-arm64
If someone is working with AWS CDK and deploying their code using DockerImageFunction
, this is what I had to do to make it work
const fakeFunction = new aws_lambda.DockerImageFunction(this, 'FakerFunction', {
  code: aws_lambda.DockerImageCode.fromImageAsset(
    path.join(__dirname, '..', '..', 'functions', 'fakedata'),
    {
      platform: Platform.LINUX_AMD64
    }
  ),
});
Hello, I ran into a similar issue recently and fixed it by checking the Lambda architecture: in my case, the docker image was built on ARM while the Lambda was set to x86_64. You can verify this with the CLI command aws lambda get-function --function-name .... You can set the architecture in CDK like this:
aws_lambda.DockerImageFunction(self, "MyMSKFunction",
    code=aws_lambda.DockerImageCode.from_image_asset(path.join(path.dirname("."), "app")),
    vpc=vpc,
    architecture=aws_lambda.Architecture.ARM_64
)
{
  "Configuration": {
    "FunctionName": "LambdaStack-1234",
    "State": "Active",
    "LastUpdateStatus": "Successful",
    "PackageType": "Image",
    "Architectures": [
      "x86_64"
    ],
    ....
  }
}
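To double-check at runtime, a handler can report the architecture it is actually executing on (a minimal sketch; platform is from the Python standard library, and the handler name is illustrative):

```python
import platform

def lambda_handler(event, context):
    # Typically reports "x86_64" on an amd64 Lambda and "aarch64" on an arm64 one
    arch = platform.machine()
    print(f"Running on: {arch}")
    return {"architecture": arch}
```

If the value this logs disagrees with the Architectures field from aws lambda get-function, the image was built for the wrong platform.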
On my M1 MacBook Pro, all I needed was this in my Dockerfile:
FROM public.ecr.aws/lambda/python:3.9.2023.03.15.15-x86_64
I had no need for DockerImageAsset.
The solution from @little-eyes worked for me, using public.ecr.aws/lambda/python:3.8.2021.12.18.01-x86_64.
I recently upgraded to Python 3.10 and used public.ecr.aws/lambda/python:3.10. When I deployed, I was getting the following error in the Lambda runtime: Error: fork/exec /var/lang/bin/python3: exec format error Runtime.InvalidEntrypoint.
The solution for me was to correct for the M1 vs. x86 architecture: public.ecr.aws/lambda/python:3.10-x86_64.
The list of available ECR images is here.
You need to force the platform, as mentioned above:
from aws_cdk import (
    Stack,
    aws_lambda,
    aws_ecr_assets as ecr
)
from constructs import Construct

class LambdaMultiplatDemStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        my_function = aws_lambda.DockerImageFunction(
            self,
            "MyDifferentPlatformFn",
            code=aws_lambda.DockerImageCode.from_image_asset(
                ".",
                platform=ecr.Platform.LINUX_ARM64  # Magic Switch!
            ),
            architecture=aws_lambda.Architecture.ARM_64
        )
Any solutions other than forcing a different platform? I'm seeing this with v. 0.7.1
(quoting the Serverless Framework arm64 configuration from the earlier comment)
This works! Here are some pointers:
https://medium.com/insiderengineering/deploying-aws-lambda-functions-for-machine-learning-workloads-def50b221139
https://repost.aws/questions/QUDoW9UeaJRcOooxwTwcHcsg/use-dockerfile-for-lambda-running-arm64-architecture
Basically, you need to ensure that the image built on your local machine has the same architecture as the Lambda; they have to match. This happened to me before when I was using GitHub Actions for CI/CD, where you should use cross-platform builds.
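For GitHub Actions specifically, a cross-platform build can be sketched like this (a minimal sketch, assuming the docker/setup-qemu-action, docker/setup-buildx-action, and docker/build-push-action actions; the versions, registry URL, and tag placeholders are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # QEMU + Buildx let an x86 runner build images for a foreign architecture
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64   # must match the Lambda's configured architecture
          push: true
          tags: <account-id>.dkr.ecr.<region>.amazonaws.com/appimage:latest
```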
I used public.ecr.aws/lambda/python:3.8 for my Python runtime on Lambda.
But today I found that my Lambda runtime no longer works, with the error message below.
IMAGE Launch error: fork/exec /lambda-entrypoint.sh: exec format error Entrypoint: [/lambda-entrypoint.sh] Cmd: [handler.lambda_handler] WorkingDir: [/var/task]
My code was working before this commit: https://github.com/aws/aws-lambda-base-images/commit/97a295c42671cc2ed55ce1062faa59a17a2361d0
After this commit was applied, my code no longer works.
Please check this out.
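Given the fixes in this thread, one defensive sketch is to pin an architecture-specific base-image tag so the architecture cannot drift when a mutable tag like python:3.8 is updated (the exact tag and file names here are assumptions; the x86_64-suffixed tags appear in the comments above):

```dockerfile
# Pin an x86_64-specific tag so the base image architecture cannot change under you
FROM public.ecr.aws/lambda/python:3.8-x86_64
COPY handler.py ${LAMBDA_TASK_ROOT}/
CMD [ "handler.lambda_handler" ]
```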