A ZIO-based AWS Custom Runtime compatible with GraalVM Native Image.
libraryDependencies += "dev.zio" %% "zio-json" % "0.6.2"
libraryDependencies += "dev.zio" %% "zio-lambda" % "1.0.5"
// Optional dependencies
libraryDependencies += "dev.zio" %% "zio-lambda-event" % "1.0.5"
libraryDependencies += "dev.zio" %% "zio-lambda-response" % "1.0.5"
Create your Lambda function by passing it to the ZLambdaRunner.serve(...) method:
import zio.Console._
import zio._
import zio.lambda._
object SimpleHandler extends ZIOAppDefault {
  def app(event: KinesisEvent, context: Context) = for {
    _ <- printLine(event.message)
  } yield "Handler ran successfully"

  override val run =
    ZLambdaRunner.serve(app)
}
zio-lambda depends on zio-json for decoding the events sent to your function and encoding the responses you send back to the Lambda service. You can either create your own data types or use the ones included in zio-lambda-event and zio-lambda-response.
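For example, a custom event type only needs a zio-json decoder in scope (and an encoder, if you also return it as the response). The CustomEvent name and its field below are hypothetical; this is a minimal sketch, not a type shipped with zio-lambda:

import zio.json._

// Hypothetical payload: adjust the fields to match the JSON your trigger actually sends.
final case class CustomEvent(message: String)

object CustomEvent {
  // zio-json derives the codecs at compile time.
  implicit val decoder: JsonDecoder[CustomEvent] = DeriveJsonDecoder.gen[CustomEvent]
  implicit val encoder: JsonEncoder[CustomEvent] = DeriveJsonEncoder.gen[CustomEvent]
}

A handler written against CustomEvent can then be served exactly like the example above.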
The last step is to define the way your function will be invoked. There are three ways, detailed below:
Upload zio-lambda as a Lambda Layer (each release contains a zip file ready to be used as a Lambda layer) together with your function. Instructions coming soon!
Create an AWS Lambda function and choose the "Provide your own bootstrap on Amazon Linux 2" runtime.
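The GraalVMNativeImage/packageBin task used in the next step typically comes from sbt-native-packager's GraalVMNativeImagePlugin. As a rough sketch of the build settings involved (the native-image options shown here are assumptions; adapt them to your project):

// build.sbt (sketch)
enablePlugins(GraalVMNativeImagePlugin)

graalVMNativeImageOptions ++= Seq(
  "--no-fallback",           // fail the build instead of producing a JVM-dependent fallback image
  "--install-exit-handlers"  // allow the process to shut down cleanly when Lambda stops the runtime
)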
Run sbt GraalVMNativeImage/packageBin; once it completes, the binary can be found under the target/graalvm-native-image folder.
Create the following bootstrap file (which calls out to the binary) and place it in the same directory as the binary:
#!/usr/bin/env bash
set -euo pipefail
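# Lambda runs this script as the entry point; it simply launches the native binary sitting next to it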
./zio-lambda-example
Now we can zip both these files up:
> pwd
/home/cal/IdeaProjects/zio-lambda/lambda-example/target/graalvm-native-image
> zip upload.zip bootstrap zio-lambda-example
Take upload.zip, upload it to AWS Lambda, and test your function to make sure everything works.
Following the steps from Direct deployment of native image binary to produce your native-image binary, we can package the binary into a Docker image and deploy that image to AWS Lambda.
FROM gcr.io/distroless/base-debian12
COPY lambda-example/target/graalvm-native-image/zio-lambda-example /app/zio-lambda-example
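# The native binary implements the Lambda custom runtime loop itself, so it is the container's entry point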
CMD ["/app/zio-lambda-example"]
NOTE: This Dockerfile builds the lambda-example module in the zio-lambda project and lives in the zio-lambda repository itself. You will need to adjust it to match your project's needs.
Now we can build and tag the Docker image:
docker build -t native-image-binary .
Take this image and push it to AWS ECR:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <your_AWS_ECR_REPO>
docker tag native-image-binary <your-particular-ecr-image-repository>:<your-tag>
docker push <your-particular-ecr-image-repository>:<your-tag>
Finally, create a Lambda function and choose the container image option.
Please note that running your native binary inside a Docker container adds overhead compared with the other approach of deploying the binary directly to AWS Lambda.
Learn more on the ZIO Lambda homepage!
For general guidelines, see the ZIO contributor's guide.
See the Code of Conduct.