paketo-buildpacks / native-image

A Cloud Native Buildpack that creates native images from Java applications
Apache License 2.0

aws lambda runtime linking issue - /lib64/libc.so.6: version `GLIBC_2.3X' not found #217

Open cforce opened 1 year ago

cforce commented 1 year ago

Building a Spring Cloud Function native executable.

Steps to reproduce: run "mvn -Pnative package" for my Spring Cloud Function app in my WSL Ubuntu 22.04 shell, or "mvn -Pnative,nativeRS,lambda package spring-boot:build-image".

deploy the zip file to AWS Lambda
execute the Lambda function
The function then fails with:
cloud-function-dynamodb-lambda: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by ./cloud-function-dynamodb-lambda)
./cloud-function-dynamodb-lambda: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by ./cloud-function-dynamodb-lambda)
./cloud-function-dynamodb-lambda: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by ./cloud-function-dynamodb-lambda)
./cloud-function-dynamodb-lambda: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by ./cloud-function-dynamodb-lambda)
START RequestId: 054fd135-79e2-4ec7-b080-11820822a134 Version: $LATEST
RequestId: 054fd135-79e2-4ec7-b080-11820822a134 Error: Runtime exited with error: exit status 1
Runtime.ExitError
END RequestId: 054fd135-79e2-4ec7-b080-11820822a134
REPORT RequestId: 054fd135-79e2-4ec7-b080-11820822a134  Duration: 75.74 ms  Billed Duration: 76 ms  Memory Size: 128 MB Max Memory Used: 4 MB   
XRAY TraceId: 1-639994f6-6fc8489d57c0cd106bdbaee8   SegmentId: 11040f061038b2f9 Sampled: true   

It seems the buildpack used does not provide the same GLIBC version as the AWS-Lambda-provided runtime. Is there any buildpack that provides an AWS-Lambda-compatible environment for building a GraalVM native image, so that the executable, published as part of the zipped package, finds the dynamic shared library objects it requests in the AWS-provided runtime?

Some ideas I am starting to investigate:

Assumption: the GLIBC dynamic-linking issue I have on AWS Lambda at runtime might only be there because I built with my local Ubuntu 22.04 or with the "tiny builder" cascade for the above buildpacks.

• I tried the Lambda runtimes provided and provided.al2 (https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html), which, as I thought, have newer versions available - but the loading issue still occurs.
• I also tried switching the compiler flag -H:+StaticExecutableWithDynamicLibC on and off - same issue.
• Todo: try to build the native image on the same image - amzn-ami-hvm-2018.03.0.20220802.0-x86_64-gp2 or similar (Amazon Linux 2) - assuming it provides the same libc, so that the linking needed on Lambda works later.
• Another option is to package as a container image (instead of a zip), where the container (composed into a Lambda layer?) brings its own shared libs and the native executable on board - the pieces might not work together, and there may be a start-up latency downside.
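Before trying any of that, it may help to confirm the mismatch directly: list the GLIBC symbol versions the binary requires and compare them with what the target runtime ships. A minimal sketch (the binary name comes from the log above; objdump from binutils is assumed available, and 2.26 for Amazon Linux 2 is an assumption worth verifying):

```shell
# Diagnostic commands (run where the binary / target runtime live):
#   objdump -T ./cloud-function-dynamodb-lambda | grep -o 'GLIBC_[0-9.]*' | sort -uV
#   ldd --version | head -n1    # on the target runtime, shows its glibc version
#
# Minimal check: does the runtime's glibc cover the binary's highest
# required GLIBC version?
required=2.34    # highest version objdump reported for the binary
available=2.26   # glibc the target runtime ships (assumption for Amazon Linux 2)
newest=$(printf '%s\n%s\n' "$required" "$available" | sort -V | tail -n1)
if [ "$newest" = "$available" ]; then
  echo "ok: runtime glibc $available satisfies GLIBC_$required"
else
  echo "mismatch: binary needs GLIBC_$required, runtime has glibc $available"
fi
```

With the values above this prints the mismatch branch, which matches the `GLIBC_2.34' not found` error in the log.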

dmikusa commented 1 year ago

What is the builder and stack that you're using? If you could include the full build output from the buildpacks part of the build, that would be helpful context as well. Thanks

dmikusa commented 1 year ago

For what it's worth, at the moment the buildpacks can create a mostly static binary, but not a fully static binary. The binary will depend on GLIBC.

I don't know what version of GLIBC is available in your AWS environment, but you do need GLIBC. You can't run in, for example, a scratch container. If you can specify the container, I would strongly suggest using the Paketo-provided run image. The tiny run image is quite small.

Your mileage may vary with other custom run images. My experience has been that if the runtime's GLIBC is the same version as, or newer than, the one in the environment where the binary was built, you're typically OK. The reverse typically causes exactly the error you're seeing.

I don't know if this is the case, but to be clear, we cannot support the use case of building native-image binaries inside of buildpacks, extracting them from the container, and running them elsewhere. That may or may not work, but it's outside the scope of what is intended with buildpacks, so swim at your own risk there.

If you just want the binary and not the container image, I would suggest building the native image binary with the GraalVM Native Build Tools rather than with buildpacks.

cforce commented 1 year ago

I suspect that the Linux used as the base for Lambda provides only GLIBC < 2.32. The AWS Lambda page mentioned above claims to run the image amazon/amzn-ami-hvm-2018.03.0.20220802.0-x86_64-gp2, which I have not yet been able to download anywhere, but an initial release year of 2018 sounds pretty old. I found these docs for it:

https://aws.amazon.com/amazon-linux-ami/2018.03-release-notes/?nc1=h_ls

https://aws.amazon.com/amazon-linux-ami/?nc1=h_ls

The last update seems to be from a few months ago: Amazon Linux 2018.03.0.20220419.0.

Updated Packages:

• glibc-2.17-324.189.amzn1.x86_64

I will try to build with Amazon Linux and the same or a lower GLIBC version. However, a buildpack in sync with Lambda would make sense, especially for the Spring Cloud Function AWS adapter. Users can get so much wrong on the native path, and Lambda is a big serverless player where GraalVM and Java also make a lot of sense.
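The build-on-Amazon-Linux idea could be sketched as a multi-stage Dockerfile. This is an untested outline, not a recipe: the package list and the Native Image Kit download step are assumptions, and Amazon Linux 2 shipping glibc 2.26 should be verified first.

```dockerfile
# Sketch: build the native executable on Amazon Linux 2 so it links against
# the same (older) glibc the Lambda runtime provides.
# Package names and the toolkit install are assumptions, not a tested recipe.
FROM amazonlinux:2 AS build
RUN yum install -y gcc glibc-devel zlib-devel tar gzip
# Install a GraalVM / Liberica Native Image Kit matching your Java version, e.g.:
#   RUN curl -L <nik-download-url> | tar xz -C /opt \
#       && ln -s /opt/<nik>/bin/* /usr/local/bin/
COPY . /app
WORKDIR /app
RUN ./mvnw -Pnative package
# Afterwards, copy the executable out of target/ into the Lambda zip package.
```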

dmikusa commented 1 year ago

> I will try to build with Amazon Linux and the same or a lower GLIBC version. However, a buildpack in sync with Lambda would make sense, especially for the Spring Cloud Function AWS adapter. Users can get so much wrong on the native path, and Lambda is a big serverless player where GraalVM and Java also make a lot of sense.

I'm not sure I completely understand what the request is here.

If you use buildpacks to generate a native image, you're generating a container image that contains the native image binary. If you provide that container image to Amazon, they should be running that image, not pieces of your image on top of another image. If they are doing that, it's 100% wrong and against the spirit of container images.

If they are running the container image properly, then it shouldn't matter what GLIBC version they provide because the one used will be the one that's in the container & Paketo buildpacks will ensure you get the right version. Paketo does this by providing run images that are in sync with the build-time images, so the library versions match.

If Amazon requires you to use a different base image for your container, that's fine, but it appears like that would require a custom buildpacks stack (build + run image) because the image they are requiring you to use doesn't have a new enough version of GLIBC.

You can make your own stack with the instructions here. You would then need to make your own builder from the stack, instructions here. Lastly, change your application to use your custom builder.

The Java buildpacks should work on most custom stacks. They'll install the JVM & native image tools & run your builds. The only issue would be if those tools require a non-compatible version of GLIBC and I don't know off the top of my head what version they require.

cforce commented 1 year ago

To run the native-compiled executable on Lambda I use the custom runtime "provided" (https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html). The uploaded zip contains only the natively compiled executable and a bootstrap script that starts it; no Docker image. But my idea was indeed to use a Lambda-compatible buildpack just as a builder environment to produce the executable with a self-contained tool stack, and finally to copy the created executable from inside the image into the Lambda zip package. I understood that the buildpack here was not meant for such a scenario, but buildpacks in general are.
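Pulling the executable out of a buildpack-built image for the Lambda zip can be sketched as below. The image name and the /workspace path are assumptions (adjust to your app); the bootstrap-at-zip-root contract comes from the custom-runtime docs linked above.

```shell
# Lambda's custom runtime expects an executable `bootstrap` at the zip root;
# this one simply execs the native binary packaged next to it.
mkdir -p pkg
cat > pkg/bootstrap <<'EOF'
#!/bin/sh
set -e
exec ./cloud-function-dynamodb-lambda
EOF
chmod +x pkg/bootstrap

# Copying the native executable out of the buildpack-built image
# (sketch only -- image name and in-image path are placeholders):
#   cid=$(docker create my-app:latest)
#   docker cp "$cid":/workspace/cloud-function-dynamodb-lambda pkg/
#   docker rm "$cid"
# Then zip from inside pkg/ so bootstrap sits at the zip root:
#   (cd pkg && zip ../function.zip bootstrap cloud-function-dynamodb-lambda)
```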

dmikusa commented 1 year ago

OK, I understand now.

I think what you were trying would work but the base image there seems to be pretty old. The Paketo images are based on Ubuntu Bionic (going out of support soon) and Ubuntu Jammy.

You could probably create a build/run image & builder from the Amazon environment (if they have a container image of it, it shouldn't be too hard), or from something like an older CentOS environment that has an older GLIBC, but you'd have to be the judge of whether that's worth the effort.

Off the top of my head, the only other buildpack-based solution would be doing a fully static build, but I don't know if that is something we can support in buildpacks, at least not in the short term. It requires a specific set of build tools, so it would take some additional work to get those installed, probably a new buildpack, plus modifications to the native-image buildpack to build with those tools. It's not something that would necessarily be quick for us to implement. If it is of interest, I would suggest adding an item to the Paketo 2023 Roadmap discussion. If it gets some traction, that would help us to prioritize it.

cforce commented 1 year ago

@dmikusa

> What is the builder and stack that you're using? If you could include the full build output from the buildpacks part of the build..

https://github.com/spring-cloud/spring-cloud-function/issues/972

> I would suggest adding an item to the Paketo 2023 Roadmap discussion. If it gets some traction that would help us to prioritize it.

https://github.com/orgs/paketo-buildpacks/discussions/58#discussioncomment-4419519

Creation of native images using buildpacks is a container-layer concept, as I understand: $ mvn -Pnative package spring-boot:build-image ...

[INFO]     [creator]     6 of 14 buildpacks participating
[INFO]     [creator]     paketo-buildpacks/ca-certificates   3.5.1
[INFO]     [creator]     paketo-buildpacks/bellsoft-liberica 9.10.1
[INFO]     [creator]     paketo-buildpacks/syft              1.23.0
[INFO]     [creator]     paketo-buildpacks/executable-jar    6.5.0
[INFO]     [creator]     paketo-buildpacks/spring-boot       5.20.0
[INFO]     [creator]     paketo-buildpacks/native-image      5.6.0

Which one is the one providing the GLIBC? I assume it's "bellsoft-liberica". Couldn't I simply exchange the "execution environment base" and provide and maintain a channel for different GLIBC versions? See https://github.com/bell-sw/Liberica/blob/master/docker/repos/liberica-openjdk-alpine/17/Dockerfile

FROM debian:10-slim as glibc-base

If I were allowed to choose another/different "glibc-base" image... maybe just be allowed to override the static "debian:10-slim" image name via a Docker env variable, defaulting to a recent "debian:XX-slim" but overridable by the user. Maybe this is even already possible? E.g. setting up my own buildpack composite where I can have my custom bellsoft-liberica GLIBC base?

            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <classifier>${repackage.classifier}</classifier>
                    <image>
                        <builder>paketobuildpacks/builder:tiny</builder>
                        <buildpacks>
                            <buildpack>gcr.io/paketo-buildpacks/java-native-image:7.23.0</buildpack>
                            <buildpack>gcr.io/paketo-buildpacks/bellsoft-liberica:9.9.0-ea</buildpack>
                            <buildpack>gcr.io/paketo-buildpacks/java-native-image</buildpack>
                        </buildpacks>
                        <env>
                            <BP_NATIVE_IMAGE>true</BP_NATIVE_IMAGE>
                            <BP_JVM_VERSION>${java.version}</BP_JVM_VERSION>
                        </env>
                    </image>
                </configuration>
            </plugin>

Maybe it's something (to be) hosted in https://github.com/paketo-buildpacks/spring-boot/?

dmikusa commented 1 year ago

The stack is a name for the build and run images. These are the base images for the builder and your application image. The builder is the image that forms the container environment when you run the actual build. It includes the build image plus all of the buildpacks.

So when you ask what provides GLIBC: it is the stack. At build time, it's the build image, and at run time, it's the run image.

By default, Spring Boot will use the Paketo tiny builder, which in turn uses the Paketo tiny stack. The Paketo tiny stack is based off of Ubuntu Bionic (although we have a Jammy stack available as well).

You can absolutely change out this stack for another stack. That is what I suggested in my previous comments:

> You can make your own stack with the instructions here. You would then need to make your own builder from the stack, instructions here. Lastly, change your application to use your custom builder.
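A custom builder along those lines might look roughly like this. All image names are placeholders, and the schema should be checked against the current pack/builder.toml documentation:

```toml
# builder.toml sketch -- every image reference below is a placeholder.
[stack]
  id = "com.example.stacks.amazonlinux2"
  build-image = "docker.io/example/build:al2"
  run-image = "docker.io/example/run:al2"

[[buildpacks]]
  uri = "gcr.io/paketo-buildpacks/java-native-image:7.23.0"

[[order]]
  [[order.group]]
    id = "paketo-buildpacks/java-native-image"
    version = "7.23.0"
```

Assuming the pack CLI, the builder would then be created with something like: pack builder create example/builder:al2 --config builder.toml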

Making a stack is not technically difficult, but like any container image, it will require automation so that you can keep your stack up-to-date.
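Switching the application to a custom builder is then a small change in the Spring Boot plugin configuration already shown earlier in the thread (the builder name below is a placeholder):

```xml
<configuration>
    <image>
        <!-- placeholder name for your custom builder image -->
        <builder>docker.io/example/builder:al2</builder>
    </image>
</configuration>
```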

Hope that helps!