= Spring Boot in a Container
:toc:
:icons: font
:source-highlighter: prettify
Many people use containers to wrap their Spring Boot applications, and building containers is not a simple thing to do. This is a guide for developers of Spring Boot applications, and containers are not always a good abstraction for developers. They force you to learn about and think about low-level concerns. However, you may on occasion be called on to create or use a container, so it pays to understand the building blocks. In this guide, we aim to show you some of the choices you can make if you are faced with the prospect of needing to create your own container.
We assume that you know how to create and build a basic Spring Boot application. If not, go to one of the https://spring.io/guides[Getting Started Guides] -- for example, the one on building a https://spring.io/guides/gs/rest-service/[REST Service]. Copy the code from there and practice with some of the ideas contained in this guide.
NOTE: There is also a Getting Started Guide on https://spring.io/guides/gs/spring-boot-docker[Docker], which would also be a good starting point, but it does not cover the range of choices that we cover here or cover them in as much detail.
== A Basic Dockerfile
A Spring Boot application is easy to convert into an executable JAR file. All the https://spring.io/guides[Getting Started Guides] do this, and every application that you download from https://start.spring.io[Spring Initializr] has a build step to create an executable JAR. With Maven, you run `./mvnw install`. With Gradle, you run `./gradlew build`. A basic `Dockerfile` to run that JAR, placed at the top level of your project, would then look like this:
Dockerfile
====
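A minimal version, following the conventions used throughout this guide (the JAR path shown is the Maven default), would be:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
----
====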
You could pass in the `JAR_FILE` as part of the `docker` command (it differs for Maven and Gradle). For Maven, the following command works:
====
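Assuming the standard Maven output directory:

[source]
----
docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .
----
====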
For Gradle, the following command works:
====
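Gradle puts the JAR in `build/libs` by default:

[source]
----
docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .
----
====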
Once you have chosen a build system, you do not need the `ARG`. You can hard-code the JAR location. For Maven, that would be as follows:
Dockerfile
====
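With the Maven default output directory, that is:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
----
====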
Then we can build an image with the following command:
====
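Using the image name that the rest of this guide assumes:

[source]
----
docker build -t myorg/myapp .
----
====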
Then we can run it by running the following command:
====
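Mapping the Spring Boot default port 8080 to a local port:

[source]
----
docker run -p 8080:8080 myorg/myapp
----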
[source]
----
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.4)
----
====
If you want to poke around inside the image, you can open a shell in it by running the following command (note that the base image does not have `bash`):
====
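Overriding the entry point drops you into the shell instead of running the application:

[source]
----
docker run -ti --entrypoint /bin/sh myorg/myapp
----
====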
The output is similar to the following sample output:
====
NOTE: The alpine base container we used in the example does not have `bash`, so this is an `ash` shell. It has some but not all of the features of `bash`.
If you have a running container and you want to peek into it, you can do so by running `docker exec`:
====
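For example, assuming a container started with `--name myapp`:

[source]
----
docker exec -ti myapp /bin/sh
----
====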
where `myapp` is the `--name` passed to the `docker run` command. If you did not use `--name`, docker assigns a mnemonic name, which you can get from the output of `docker ps`. You could also use the SHA identifier of the container instead of the name. The SHA identifier is also visible in the `docker ps` output.
=== The Entry Point
The https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example[exec form] of the Dockerfile `ENTRYPOINT` is used so that there is no shell wrapping the Java process. The advantage is that the java process responds to `KILL` signals sent to the container. In practice, that means (for instance) that, if you `docker run` your image locally, you can stop it with `CTRL-C`. If the command line gets a bit long, you can extract it out into a shell script and `COPY` it into the image before you run it. The following example shows how to do so:
Dockerfile
====
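A sketch of such a Dockerfile, copying the script into the image alongside the JAR:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
COPY run.sh .
COPY target/*.jar app.jar
ENTRYPOINT ["run.sh"]
----
====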
Remember to use `exec java ...` to launch the java process (so that it can handle the `KILL` signals):
run.sh
====
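A minimal script only needs the `exec`:

[source,sh]
----
#!/bin/sh
exec java -jar /app.jar
----
====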
Another interesting aspect of the entry point is whether or not you can inject environment variables into the Java process at runtime. For example, suppose you want to have the option to add Java command line options at runtime. You might try to do this:
Dockerfile
====
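One attempt that looks plausible is to reference an environment variable from the exec form:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
COPY target/*.jar app.jar
ENTRYPOINT ["java","${JAVA_OPTS}","-jar","/app.jar"]
----
====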
Then you might try the following commands:

====
[source]
----
docker build -t myorg/myapp .
docker run -p 9000:9000 -e JAVA_OPTS=-Dserver.port=9000 myorg/myapp
----
====
This fails because the `${}` substitution requires a shell. The exec form does not use a shell to launch the process, so the options are not applied. You can get around that by moving the entry point to a script (like the `run.sh` example shown earlier) or by explicitly creating a shell in the entry point. The following example shows how to create a shell in the entry point:
Dockerfile
====
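An explicit shell in the entry point looks like this:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
COPY target/*.jar app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]
----
====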
You can then launch this app by running the following command:
====
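For example, passing `-Ddebug` through `JAVA_OPTS`:

[source]
----
docker run -p 8080:8080 -e "JAVA_OPTS=-Ddebug -Xss256k" myorg/myapp
----
====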
That command produces output similar to the following:
====
[source]
----
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.4)
...
2019-10-29 09:12:12.169 DEBUG 1 --- [           main] ConditionEvaluationReportLoggingListener :
----
====
(The preceding output shows parts of the full `DEBUG` output that is generated with `-Ddebug` by Spring Boot.)
Using an `ENTRYPOINT` with an explicit shell (as the preceding example does) means that you can pass environment variables into the Java command. So far, though, you cannot also provide command line arguments to the Spring Boot application. The following command does not run the application on port 9000:
====
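For example:

[source]
----
docker run -p 9000:9000 myorg/myapp --server.port=9000
----
====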
That command produces the following output, which shows the port as 8080 rather than 9000:
====
It did not work because the docker command (the `--server.port=9000` part) is passed to the entry point (`sh`), not to the Java process that it launches. To fix that, you need to add the command line from the `CMD` to the `ENTRYPOINT`:
Dockerfile
====
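With the command line appended, the entry point becomes:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
COPY target/*.jar app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar ${0} ${@}"]
----
====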
Then you can run the same command and set the port to 9000:
====
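[source]
----
docker run -p 9000:9000 myorg/myapp --server.port=9000
----
====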
As the following output sample shows, the port does get set to 9000:
====
Note the use of `${0}` for the "command" (in this case, the first program argument) and `${@}` for the "command arguments" (the rest of the program arguments). If you use a script for the entry point, then you do not need the `${0}` (that would be `/app/run.sh` in the earlier example). The following listing shows the proper command in a script file:
run.sh
====
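[source,sh]
----
#!/bin/sh
exec java ${JAVA_OPTS} -jar /app.jar ${@}
----
====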
The docker configuration is very simple so far, and the generated image is not very efficient. The docker image has a single filesystem layer with the fat JAR in it, and every change we make to the application code changes that layer, which might be 10MB or more (even as much as 50MB for some applications). We can improve on that by splitting the JAR into multiple layers.
=== Smaller Images
Notice that the base image in the earlier example is `eclipse-temurin:17-jdk-alpine`. The `alpine` images are smaller than the standard `eclipse-temurin` library images from https://hub.docker.com/_/eclipse-temurin/[Dockerhub]. You can also save about 20MB in the base image by using the `jre` label instead of `jdk`. Not all applications work with a JRE (as opposed to a JDK), but most do. Some organizations enforce a rule that every application has to work with a JRE because of the risk of misuse of some of the JDK features (such as compilation).
Another trick that could get you a smaller image is to use https://openjdk.java.net/projects/jigsaw/quick-start#linker[JLink], which is bundled with OpenJDK 11 and above. JLink lets you build a custom JRE distribution from a subset of modules in the full JDK, so you do not need a JRE or JDK in the base image. In principle, this would get you a smaller total image size than using the official docker images. In practice a custom JRE in your own base image cannot be shared among other applications, since they would need different customizations. So you might have smaller images for all your applications, but they still take longer to start because they do not benefit from caching the JRE layer.
That last point highlights a really important concern for image builders: the goal is not necessarily always going to be to build the smallest image possible. Smaller images are generally a good idea because they take less time to upload and download, but only if none of the layers in them are already cached. Image registries are quite sophisticated these days and you can easily lose the benefit of those features by trying to be clever with the image construction. If you use common base layers, the total size of an image is less of a concern, and it is likely to become even less of a concern as the registries and platforms evolve. Having said that, it is still important, and useful, to try to optimize the layers in our application image. However, the goals should always be to put the fastest changing stuff in the highest layers and to share as many of the large, lower layers as possible with other applications.
[[a-better-dockerfile]]
== A Better Dockerfile
A Spring Boot fat JAR naturally has "layers" because of the way that the JAR itself is packaged. If we unpack it first, it is already divided into external and internal dependencies. To do this in one step in the docker build, we need to unpack the JAR first. The following commands (sticking with Maven, but the Gradle version is pretty similar) unpack a Spring Boot fat JAR:
====
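[source]
----
mkdir target/dependency
(cd target/dependency; jar -xf ../*.jar)
----
====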
Then we can use the following `Dockerfile`:
Dockerfile
====
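A sketch of that Dockerfile, copying the unpacked pieces into separate layers (`hello.Application` is the main class from the Getting Started Guide; substitute your own):

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
----
====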
There are now three layers, with all the application resources in the later two layers. If the application dependencies do not change, the first layer (from `BOOT-INF/lib`) need not change, so the build is faster, and so is the startup of the container at runtime, as long as the base layers are already cached.
NOTE: We used a hard-coded main application class: `hello.Application`. This is probably different for your application. You could parameterize it with another `ARG` if you wanted. You could also copy the Spring Boot fat `JarLauncher` into the image and use it to run the application. It would work, and you would not need to specify the main class, but it would be a bit slower on startup.
=== Spring Boot Layer Index
Starting with Spring Boot 2.3.0, a JAR file built with the Spring Boot Maven or Gradle plugin includes https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#features.container-images.layering[layer information] in the JAR file. This layer information separates parts of the application based on how likely they are to change between application builds. This can be used to make Docker image layers even more efficient.
The layer information can be used to extract the JAR contents into a directory for each layer:
====
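Using `layertools` from the JAR itself (the JAR path shown is the Maven default):

[source]
----
mkdir target/extracted
java -Djarmode=layertools -jar target/*.jar extract --destination target/extracted
----
====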
Then we can use the following `Dockerfile`:
Dockerfile
====
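A sketch, copying each extracted layer in order of decreasing stability:

[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ARG EXTRACTED=target/extracted
COPY ${EXTRACTED}/dependencies/ ./
COPY ${EXTRACTED}/spring-boot-loader/ ./
COPY ${EXTRACTED}/snapshot-dependencies/ ./
COPY ${EXTRACTED}/application/ ./
ENTRYPOINT ["java","org.springframework.boot.loader.JarLauncher"]
----
====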
NOTE: The Spring Boot fat `JarLauncher` is extracted from the JAR into the image, so it can be used to start the application without hard-coding the main application class.
See the https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#features.container-images.building.dockerfiles[Spring Boot documentation] for more information on using the layering feature.
== Tweaks
If you want to start your application as quickly as possible (most people do), you might consider some tweaks:

* Use the `spring-context-indexer` (https://docs.spring.io/spring/docs/current/spring-framework-reference/core.html#beans-scanning-index[link to docs]). It is not going to add much for small applications, but every little helps.
* Use an explicit `spring.config.location` (set by command line argument, System property, or other approach).

Your application might not need a full CPU at runtime, but it does need multiple CPUs to start up as quickly as possible (at least two; four is better). If you do not mind a slower startup, you could throttle the CPUs down below four. If you are forced to start with less than four CPUs, it might help to set `-Dspring.backgroundpreinitializer.ignore=true`, since it prevents Spring Boot from creating a new thread that it probably cannot use (this works with Spring Boot 2.1.0 and above).
== Multi-Stage Build
The `Dockerfile` shown in <<a-better-dockerfile>> assumed that the fat JAR was already built on the command line. You can also do that step in docker by using a multi-stage build and copying the result from one image to another:

.Dockerfile
====
[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine as build
WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]
----
====
The first image is labelled `build`, and it is used to run Maven, build the fat JAR, and unpack it. The unpacking could also be done by Maven or Gradle (this is the approach taken in the Getting Started Guide). There is not much difference, except that the build configuration would have to be edited and a plugin added.

Notice that the source code has been split into four layers. The later layers contain the build configuration and the source code for the application, and the earlier layers contain the build system itself (the Maven wrapper). This is a small optimization, and it also means that we do not have to copy the `target` directory to a docker image, even a temporary one used for the build.
Every build where the source code changes is slow because the Maven cache has to be re-created in the first `RUN` section. But you have a completely standalone build that anyone can run to get your application running, as long as they have docker. That can be quite useful in some environments -- for example, where you need to share your code with people who do not know Java.
=== Experimental Features
Docker 18.06 comes with some https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md[experimental features], including a way to cache build dependencies. To switch them on, you need a flag in the daemon (`dockerd`) and an environment variable when you run the client. Then you can add a "magic" first line to your `Dockerfile`:
Dockerfile
====
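[source]
----
# syntax=docker/dockerfile:experimental
----
====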
The `RUN` directive then accepts a new flag: `--mount`. The following listing shows a full example:
.Dockerfile
====
[source,dockerfile]
----
# syntax=docker/dockerfile:experimental
FROM eclipse-temurin:17-jdk-alpine as build
WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
----
====
Then you can run it:
====
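With buildkit enabled on the client:

[source]
----
DOCKER_BUILDKIT=1 docker build -t myorg/myapp .
----
====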
The following listing shows sample output:
====
With the experimental features, you get different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, provided the cache is warm.
The Gradle version of this `Dockerfile` configuration is very similar:
.Dockerfile
====
[source,dockerfile]
----
# syntax=docker/dockerfile:experimental
FROM eclipse-temurin:17-jdk-alpine AS build
WORKDIR /workspace/app

COPY . /workspace/app
RUN --mount=type=cache,target=/root/.gradle ./gradlew clean build
RUN mkdir -p build/dependency && (cd build/dependency; jar -xf ../libs/*-SNAPSHOT.jar)
----
====
NOTE: While these features are in the experimental phase, the options for switching buildkit on and off depend on the version of `docker` that you use. Check the documentation for the version you have (the example shown earlier is correct for docker 18.06).
== Security Aspects
Just as in classic VM deployments, processes should not be run with root permissions. Instead, the image should contain a non-root user that runs the application.
In a `Dockerfile`, you can achieve this by adding another layer that adds a (system) user and group and then setting it as the current user (instead of the default, root):
.Dockerfile
====
[source,dockerfile]
----
FROM eclipse-temurin:17-jdk-alpine
RUN addgroup -S demo && adduser -S demo -G demo
USER demo
----
====
In case someone manages to break out of your application and run system commands inside the container, this precaution limits their capabilities (following the principle of least privilege).
NOTE: Some of the further `Dockerfile` commands only work as root, so you may have to move the `USER` command further down (for example, if you plan to install more packages in the container, which works only as root).
NOTE: For other approaches, not using a `Dockerfile` might be more amenable. For instance, in the buildpack approach described later, most implementations use a non-root user by default.
Another consideration is that the full JDK is probably not needed by most applications at runtime, so we can safely switch to the JRE base image once we have a multi-stage build. So, in the multi-stage build shown earlier, we can use the following for the final, runnable image:

.Dockerfile
====
[source,dockerfile]
----
FROM eclipse-temurin:17-jre-alpine
----
====
As mentioned earlier, this also saves some space in the image, which would be occupied by tools that are not needed at runtime.
== Build Plugins
If you do not want to call `docker` directly in your build, there is a rich set of plugins for Maven and Gradle that can do that work for you. Here are just a few.
=== Spring Boot Maven and Gradle Plugins
You can use the Spring Boot build plugins for https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#build-image[Maven] and https://docs.spring.io/spring-boot/docs/current/gradle-plugin/reference/htmlsingle/#build-image[Gradle] to create container images.
The plugins create an OCI image (the same format as one created by `docker build`) by using https://buildpacks.io/[Cloud Native Buildpacks].

You do not need a `Dockerfile`, but you do need a Docker daemon, either locally (which is what you use when you build with docker) or remotely through the `DOCKER_HOST` environment variable.
The default builder is optimized for Spring Boot applications, and the image is layered efficiently as in the examples above.
The following example works with Maven without changing the `pom.xml` file:
====
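[source]
----
./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=myorg/myapp
----
====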
The following example works with Gradle without changing the `build.gradle` file:
====
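[source]
----
./gradlew bootBuildImage --imageName=myorg/myapp
----
====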
The first build might take a long time because it has to download some container images and the JDK, but subsequent builds should be fast.
Then you can run the image, as the following listing shows (with output):
====
You can see the application start up as normal. You might also notice that the JVM memory requirements were computed and set as command line options inside the container. This is the same memory calculation that has been in use in Cloud Foundry build packs for many years. It represents significant research into the best choices for a range of JVM applications, including but not limited to Spring Boot applications, and the results are usually much better than the default setting from the JVM. You can customize the command line options and override the memory calculator by setting environment variables as shown in the https://paketo.io/docs/howto/java/[Paketo buildpacks documentation].
=== Spotify Maven Plugin
The https://github.com/spotify/dockerfile-maven[Spotify Maven Plugin] is a popular choice. It requires you to write a `Dockerfile` and then runs `docker` for you, just as if you were doing it on the command line. There are some configuration options for the docker image tag and other stuff, but it keeps the docker knowledge in your application concentrated in a `Dockerfile`, which many people like.
For really basic usage, it will work out of the box with no extra configuration:
====
That builds an anonymous docker image. We can tag it with `docker` on the command line now or use Maven configuration to set it as the `repository`. The following example works without changing the `pom.xml` file:
====
Alternatively, you can change the `pom.xml` file:
pom.xml
====
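A sketch of the plugin configuration (the version number here is illustrative; check the plugin's releases for a current one):

[source,xml]
----
<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.4.13</version>
            <configuration>
                <repository>myorg/${project.artifactId}</repository>
            </configuration>
        </plugin>
    </plugins>
</build>
----
====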
=== Palantir Gradle Plugin
The https://github.com/palantir/gradle-docker[Palantir Gradle Plugin] works with a `Dockerfile` and can also generate a `Dockerfile` for you. Then it runs `docker` as if you were running it on the command line.
First you need to import the plugin into your `build.gradle`:
build.gradle
====
Then, finally, you can apply the plugin and call its task:

.build.gradle
====
[source,groovy]
----
apply plugin: 'com.palantir.docker'

group = 'myorg'

bootJar {
    baseName = 'myapp'
    version = '0.1.0'
}
----
====
In this example, we have chosen to unpack the Spring Boot fat JAR in a specific location in the `build` directory, which is the root for the docker build. Then the multi-layer (not multi-stage) `Dockerfile` shown earlier works.
=== Jib Maven and Gradle Plugins
Google has an open source tool called https://github.com/GoogleContainerTools/jib[Jib] that is relatively new but quite interesting for a number of reasons. Probably the most interesting thing is that you do not need docker to run it. Jib builds the image by using the same standard output format as you get from `docker build` but does not use `docker` unless you ask it to, so it works in environments where docker is not installed (common in build servers). You also do not need a `Dockerfile` (it would be ignored anyway) or anything in your `pom.xml` to get an image built with Maven (Gradle would require you to at least install the plugin in `build.gradle`).
Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer `Dockerfile` created above. As in the fat JAR, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
The following example works with Maven without changing the `pom.xml`:
====
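The fully-qualified goal can be invoked without touching the `pom.xml` (the plugin version shown here is illustrative; pinning one makes builds reproducible):

[source]
----
./mvnw com.google.cloud.tools:jib-maven-plugin:3.4.0:build -Dimage=myorg/myapp
----
====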
To run that command, you need to have permission to push to Dockerhub under the `myorg` repository prefix. If you have authenticated with `docker` on the command line, that works from your local `~/.docker` configuration. You can also set up a Maven "server" authentication in your `~/.m2/settings.xml` (the `id` of the repository is significant):
.settings.xml
====
[source,xml]
----
<server>
    <id>registry.hub.docker.com</id>
    <username>myorg</username>
    <password>...</password>
</server>
----
====
There are other options -- for example, you can build locally against a docker daemon (like running `docker` on the command line) by using the `dockerBuild` goal instead of `build`. Other container registries are also supported. For each one, you need to set up local authentication through Docker or Maven settings.

The gradle plugin has similar features, once you have it in your `build.gradle`:
build.gradle
====
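A sketch of the plugins block (the version shown is illustrative):

[source,groovy]
----
plugins {
    id 'com.google.cloud.tools.jib' version '3.4.0'
}
----
====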
Then you can build an image by running the following command:
====
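[source]
----
./gradlew jib --image=myorg/myapp
----
====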
As with the Maven build, if you have authenticated with `docker` on the command line, the image push authenticates from your local `~/.docker` configuration.
== Continuous Integration
Automation is (or should be) part of every application lifecycle these days. The tools that people use to do the automation tend to be quite good at invoking the build system from the source code. So if that gets you a docker image, and the environment in the build agents is sufficiently aligned with the developers' own environments, that might be good enough. Authenticating to the docker registry is likely to be the biggest challenge, but there are features in all the automation tools to help with that.
However, sometimes it is better to leave container creation completely to an automation layer, in which case the user's code might not need to be polluted. Container creation is tricky, and developers sometimes do not really need to care about it. If the user code is cleaner, there is more chance that a different tool can "do the right thing" (applying security fixes, optimizing caches, and so on). There are multiple options for automation, and they all come with some features related to containers these days. We are going to look at a couple.
=== Concourse
https://concourse-ci.org[Concourse] is a pipeline-based automation platform that you can use for CI and CD. It is used inside VMware, and the main authors of the project work there. Everything in Concourse is stateless and runs in a container, except the CLI. Since running containers is the main order of business for the automation pipelines, creating containers is well supported. The https://github.com/concourse/docker-image-resource[Docker Image Resource] is responsible for keeping the output state of your build up to date, if it is a container image.
The following example pipeline builds a docker image for the sample shown earlier, assuming it is in github at `myorg/myapp`, has a `Dockerfile` at the root, and has a build task declaration in `src/main/ci/build.yml`:
====
[source,yaml]
----
resources:
  ...
jobs:
  ...
----
====
The structure of a pipeline is very declarative: You define "resources" (input, output, or both) and "jobs" (which use and apply actions to resources). If any of the input resources changes, a new build is triggered. If any of the output resources changes during a job, it is updated.
The pipeline could be defined in a different place than the application source code. Also, for a generic build setup, the task declarations can be centralized or externalized as well. This allows some separation of concerns between development and automation, which suits some software development organizations.
=== Jenkins
https://jenkins.io[Jenkins] is another popular automation server. It has a huge range of features, but one that is the closest to the other automation samples here is the https://jenkins.io/doc/book/pipeline/docker/[pipeline] feature. The following `Jenkinsfile` builds a Spring Boot project with Maven and then uses a `Dockerfile` to build an image and push it to a repository:
Jenkinsfile
====
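A minimal scripted pipeline of that shape might look like the following sketch (the build and push steps are illustrative; the `docker` global variable comes from the Docker Pipeline plugin):

[source,groovy]
----
node {
    checkout scm
    sh './mvnw -B -DskipTests clean package'
    docker.build('myorg/myapp').push()
}
----
====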
For a (realistic) docker repository that needs authentication in the build server, you can add credentials to the `docker` object by using `docker.withCredentials(...)`.
== Buildpacks
NOTE: The Spring Boot Maven and Gradle plugins use buildpacks in exactly the same way that the `pack` CLI does in the following examples. Given the same inputs, the resulting images are identical.
https://www.cloudfoundry.org/[Cloud Foundry] has used containers internally for many years now, and part of the technology used to transform user code into containers is Build Packs, an idea originally borrowed from https://www.heroku.com/[Heroku]. The current generation of buildpacks (v2) generates generic binary output that is assembled into a container by the platform. The https://buildpacks.io/[new generation of buildpacks] (v3) is a collaboration between Heroku and other companies (including VMware), and it builds container images directly and explicitly. This is interesting for developers and operators. Developers do not need to care much about the details of how to build a container, but they can easily create one if they need to. Buildpacks also have lots of features for caching build results and dependencies. Often, a buildpack runs much more quickly than a native Docker build. Operators can scan the containers to audit their contents and transform them to patch them for security updates. Also, you can run the buildpacks locally (for example, on a developer machine or in a CI service) or in a platform like Cloud Foundry.
The output from a buildpack lifecycle is a container image, but you do not need a `Dockerfile`. The filesystem layers in the output image are controlled by the buildpack. Typically, many optimizations are made without the developer having to know or care about them. There is also an https://en.wikipedia.org/wiki/Application_binary_interface[Application Binary Interface] between the lower level layers (such as the base image containing the operating system) and the upper layers (containing middleware and language specific dependencies). This makes it possible for a platform, such as Cloud Foundry, to patch lower layers if there are security updates without affecting the integrity and functionality of the application.
To give you an idea of the features of a buildpack, the following example (shown with its output) uses the https://buildpacks.io/docs/tools/pack/[Pack CLI] from the command line (it would work with the sample application we have been using in this guide -- no need for a `Dockerfile` or any special build configuration):
====
[source]
----
pack build myorg/myapp --builder=paketobuildpacks/builder:base --path=.
base: Pulling from paketobuildpacks/builder
Digest: sha256:4fae5e2abab118ca9a37bf94ab42aa17fef7c306296b0364f5a0e176702ab5cb
Status: Image is up to date for paketobuildpacks/builder:base
base-cnb: Pulling from paketobuildpacks/run
Digest: sha256:a285e73bc3697bc58c228b22938bc81e9b11700e087fd9d44da5f42f14861812
Status: Image is up to date for paketobuildpacks/run:base-cnb
===> DETECTING
7 of 18 buildpacks participating
paketo-buildpacks/ca-certificates   2.3.2
paketo-buildpacks/bellsoft-liberica 8.2.0
paketo-buildpacks/maven             5.3.2
paketo-buildpacks/executable-jar    5.1.2
paketo-buildpacks/apache-tomcat     5.6.1
paketo-buildpacks/dist-zip          4.1.2
paketo-buildpacks/spring-boot       4.4.2
===> ANALYZING
Previous image with name "myorg/myapp" not found
===> RESTORING
===> BUILDING

Paketo CA Certificates Buildpack 2.3.2
  https://github.com/paketo-buildpacks/ca-certificates
  Launch Helper: Contributing to layer
    Creating /layers/paketo-buildpacks_ca-certificates/helper/exec.d/ca-certificates-helper

Paketo BellSoft Liberica Buildpack 8.2.0
  https://github.com/paketo-buildpacks/bellsoft-liberica
  Build Configuration:
    $BP_JVM_VERSION              11              the Java version
  Launch Configuration:
    $BPL_JVM_HEAD_ROOM           0               the headroom in memory calculation
    $BPL_JVM_LOADED_CLASS_COUNT  35% of classes  the number of loaded classes in memory calculation
    $BPL_JVM_THREAD_COUNT        250             the number of threads in memory calculation
    $JAVA_TOOL_OPTIONS                           the JVM launch flags
  BellSoft Liberica JDK 11.0.12: Contributing to layer
    Downloading from https://github.com/bell-sw/Liberica/releases/download/11.0.12+7/bellsoft-jdk11.0.12+7-linux-amd64.tar.gz
    Verifying checksum
    Expanding to /layers/paketo-buildpacks_bellsoft-liberica/jdk
    Adding 129 container CA certificates to JVM truststore
    Writing env.build/JAVA_HOME.override
    Writing env.build/JDK_HOME.override
  BellSoft Liberica JRE 11.0.12: Contributing to layer
    Downloading from https://github.com/bell-sw/Liberica/releases/download/11.0.12+7/bellsoft-jre11.0.12+7-linux-amd64.tar.gz
    Verifying checksum
    Expanding to /layers/paketo-buildpacks_bellsoft-liberica/jre
    Adding 129 container CA certificates to JVM truststore
    Writing env.launch/BPI_APPLICATION_PATH.default
    Writing env.launch/BPI_JVM_CACERTS.default
    Writing env.launch/BPI_JVM_CLASS_COUNT.default
    Writing env.launch/BPI_JVM_SECURITY_PROVIDERS.default
    Writing env.launch/JAVA_HOME.default
    Writing env.launch/MALLOC_ARENA_MAX.default
  Launch Helper: Contributing to layer
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/active-processor-count
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/java-opts
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/link-local-dns
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/memory-calculator
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/openssl-certificate-loader
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/security-providers-configurer
    Creating /layers/paketo-buildpacks_bellsoft-liberica/helper/exec.d/security-providers-classpath-9
  JVMKill Agent 1.16.0: Contributing to layer
    Downloading from https://github.com/cloudfoundry/jvmkill/releases/download/v1.16.0.RELEASE/jvmkill-1.16.0-RELEASE.so
    Verifying checksum
    Copying to /layers/paketo-buildpacks_bellsoft-liberica/jvmkill
    Writing env.launch/JAVA_TOOL_OPTIONS.append
    Writing env.launch/JAVA_TOOL_OPTIONS.delim
  Java Security Properties: Contributing to layer
    Writing env.launch/JAVA_SECURITY_PROPERTIES.default
    Writing env.launch/JAVA_TOOL_OPTIONS.append
    Writing env.launch/JAVA_TOOL_OPTIONS.delim

Paketo Maven Buildpack 5.3.2
  https://github.com/paketo-buildpacks/maven
  Build Configuration:
    $BP_MAVEN_BUILD_ARGUMENTS  -Dmaven.test.skip=true package  the arguments to pass to Maven
    $BP_MAVEN_BUILT_ARTIFACT   target/*.[jw]ar                 the built application artifact explicitly. Supersedes $BP_MAVEN_BUILT_MODULE
    $BP_MAVEN_BUILT_MODULE                                     the module to find application artifact in
  Creating cache directory /home/cnb/.m2
  Compiled Application: Contributing to layer
    Executing mvnw --batch-mode -Dmaven.test.skip=true package

[ ... Maven build output ... ]

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  53.474 s
[INFO] Finished at: 2021-07-23T20:10:28Z
[INFO] ------------------------------------------------------------------------
    Removing source code

Paketo Executable JAR Buildpack 5.1.2
  https://github.com/paketo-buildpacks/executable-jar
  Class Path: Contributing to layer
    Writing env/CLASSPATH.delim
    Writing env/CLASSPATH.prepend
  Process types:
    executable-jar: java org.springframework.boot.loader.JarLauncher (direct)
    task:           java org.springframework.boot.loader.JarLauncher (direct)
    web:            java org.springframework.boot.loader.JarLauncher (direct)
----
====
The `--builder` is a Docker image that runs the buildpack lifecycle. Typically, it would be a shared resource for all developers or all developers on a single platform. You can set the default builder on the command line (which creates a file in `~/.pack`) and then omit that flag from subsequent builds.
NOTE: The `paketobuildpacks/builder:base` builder also knows how to build an image from an executable JAR file, so you can build with Maven first and then point the `--path` at the JAR file for the same result.
== Knative
Another new project in the container and platform space is https://cloud.google.com/knative/[Knative]. If you are not familiar with it, you can think of it as a building block for building a serverless platform. It is built on https://kubernetes.io[Kubernetes], so, ultimately, it consumes container images and turns them into applications or "services" on the platform. One of the main features it has, though, is the ability to consume source code and build the container for you, making it more developer- and operator-friendly. https://github.com/knative/build[Knative Build] is the component that does this and is itself a flexible platform for transforming user code into containers -- you can do it in pretty much any way you like. Some templates are provided with common patterns (such as Maven and Gradle builds) and multi-stage docker builds using https://github.com/GoogleContainerTools/kaniko[Kaniko]. There is also a template that uses https://github.com/knative/build-templates/tree/master/buildpacks[Buildpacks], which is interesting for us, since buildpacks have always had good support for Spring Boot.
== Closing
This guide has presented a lot of options for building container images for Spring Boot applications. All of them are completely valid choices, and it is now up to you to decide which one you need. Your first question should be "Do I really need to build a container image?" If the answer is "yes," then your choices are likely to be driven by efficiency, cacheability, and separation of concerns. Do you want to insulate developers from needing to know too much about how container images are created? Do you want to make developers responsible for updating images when operating system and middleware vulnerabilities need to be patched? Or maybe developers need complete control over the whole process and have all the tools and knowledge they need.