jetty / jetty.docker

Update to docker-entrypoint.sh breaks startup #153

Open ddaalhuisen opened 1 year ago

ddaalhuisen commented 1 year ago

After the update to docker-entrypoint.sh, our application can no longer start; this has caused us to revert to the 9.4.50 base image. See https://github.com/eclipse/jetty.docker/commit/03c0b9d1d494f6f8b39f1e2046e680f2d3dd2604 for details.

We use the following JAVA_OPTIONS when starting:

-Dlogback.configurationFile=/var/lib/jetty/conf/logback.xml
-Xmx2048m -Xms64m -Xmn32m
-Dfile.encoding=ISO-8859-1
-Duser.country=NL
-Duser.language=nl
-Ddatabase.type=postgres
-Dwicket.configuration=deployment
-Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener
-Dorg.eclipse.jetty.server.Request.maxFormContentSize=2000000

After the update, our startup now fails with the following error:

Usage: java [options] <mainclass> [args...]
...
...
...
jetty dry run failed:

Is this intended behaviour?

lachlan-roberts commented 1 year ago

@ddaalhuisen no, it was not intended to break this usage.

Can you give me an example project / Dockerfile which does not start after the change? If I run with this Dockerfile I do not get the same error:

FROM jetty:9

ENV JAVA_OPTIONS="-Dlogback.configurationFile=/var/lib/jetty/conf/logback.xml -Xmx2048m -Xms64m -Xmn32m -Dfile.encoding=ISO-8859-1 -Duser.country=NL -Duser.language=nl -Ddatabase.type=postgres -Dwicket.configuration=deployment -Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener -Dorg.eclipse.jetty.server.Request.maxFormContentSize=2000000"

Then I can start the server and I see the correct configuration:

JVM Arguments:
--------------
 -Xmx2048m
 -Xms64m
 -Xmn32m

System Properties:
------------------
 database.type = postgres (<command-line>)
 file.encoding = ISO-8859-1 (<command-line>)
 java.io.tmpdir = /tmp/jetty (<command-line>)
 logback.configurationFile = /var/lib/jetty/conf/logback.xml (<command-line>)
 logback.statusListenerClass = ch.qos.logback.core.status.OnConsoleStatusListener (<command-line>)
 org.eclipse.jetty.server.Request.maxFormContentSize = 2000000 (<command-line>)
 user.country = NL (<command-line>)
 user.language = nl (<command-line>)
 wicket.configuration = deployment (<command-line>)

I get the same result if I do:

docker run --rm -it jetty:9 -Dlogback.configurationFile=/var/lib/jetty/conf/logback.xml -Xmx2048m -Xms64m -Xmn32m -Dfile.encoding=ISO-8859-1 -Duser.country=NL -Duser.language=nl -Ddatabase.type=postgres -Dwicket.configuration=deployment -Dlogback.statusListenerClass=ch.qos.logback.core.status.OnConsoleStatusListener -Dorg.eclipse.jetty.server.Request.maxFormContentSize=2000000 --list-config

lachlan-roberts commented 1 year ago

@ddaalhuisen I have added the branch https://github.com/eclipse/jetty.docker/tree/issue-153-jettyexec which reverts the change in question. Can you test your application with an image built from this branch to confirm it fixes it for you?

To build it you can use make eclipse-temurin/9.4/jdk17, then use the image tag in the output:

Successfully tagged jetty:9.4-jdk17-eclipse-temurin

ddaalhuisen commented 1 year ago

I am working on a reproduction I can give you to demonstrate our issue. The container in question ran on jetty:9.4.51-jdk11, which we have now reverted to jetty:9.4.50-jdk11.

I will try to give you a buildable Dockerfile which reproduces our issue; I will get back to you.

ddaalhuisen commented 1 year ago

I have stripped all the unnecessary stuff out of our run.sh and docker build. Building the image in the attached zip and starting it using docker-compose produces the aforementioned Java error. Our application does not yet run on Java 17, so I haven't been able to verify whether eclipse-temurin/9.4/jdk17 resolves the issue.

testcase.zip

yosifkit commented 1 year ago

Ah, you have newlines in the JAVA_OPTIONS string. The following code will preserve any internal quoting but also ignore newlines. Putting it in an array does require switching the script to bash, but I'd say that is important anyway if a user wants to use environment variables with dots in their names (https://github.com/adoptium/containers/issues/415).

eval "javaOpts=( $JAVA_OPTIONS )"
# alternatively, just overwrite `JAVA_OPTIONS` with the array instead
eval "JAVA_OPTIONS=( $JAVA_OPTIONS )"
...
# And then use it just the args array
# this could also remove the need for the eval if you do the same to "JETTY_PROPERTIES"
exec "$JAVA" "${javaOpts[@]}" "$@" "${javaOpts[@]}" "${JETTY_PROPERTIES[@]}"
lachlan-roberts commented 1 year ago

I have tested the revert of this change (#155) and it fixes this issue.

For now I have opened https://github.com/docker-library/official-images/pull/15251 to revert this change since multiple people have reported this issue.

I will add some additional tests for the various cases with spaces and newlines in parameters to make sure we do not break them again. After that we can consider ways of removing the jetty.exec file, if deemed necessary.
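
One representative case for such a test (illustrative; not taken from the repository's test suite) is an option value containing a space, which must arrive at the JVM as a single argument:

docker run --rm jetty:9 "-Dfoo=bar baz" --list-config
# the System Properties section of the output should then report: foo = bar baz
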

lachlan-roberts commented 1 year ago

@ddaalhuisen as a workaround if you add

JAVA_OPTIONS=`echo $JAVA_OPTIONS`

to your run.sh file it seems to fix your issue.
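
Why this works: the unquoted $JAVA_OPTIONS is word-split on newlines as well as spaces, and echo re-joins the words with single spaces, flattening the newlines away. A small sketch:

JAVA_OPTIONS='-Xmx2048m
-Xms64m'
JAVA_OPTIONS=`echo $JAVA_OPTIONS`   # word-split, re-joined with single spaces
printf '%s\n' "$JAVA_OPTIONS"       # prints: -Xmx2048m -Xms64m

Note that this also discards any quoting inside the string, so it only helps when no option value contains spaces.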

lachlan-roberts commented 1 year ago

@yosifkit as far as I understand, the Alpine images don't support bash, so that's why we are using sh for the entrypoint.

damondaalhuisen commented 1 year ago

> @ddaalhuisen as a workaround if you add
>
> JAVA_OPTIONS=`echo $JAVA_OPTIONS`
>
> to your run.sh file it seems to fix your issue.

Thank you, this has fixed it for now!

jnehlmeier commented 1 year ago

@lachlan-roberts The previous solution, using jetty.exec and sourcing it via . jetty.exec, did allow us to use env variables inside jetty modules.

For example, my-jvm.mod containing

[exec]
-XX:HeapDumpPath=${jetty.base}/logs/${JETTY_HOSTNAME}/${OOM_HEAP_DUMP_NAME}

and a my-docker-entrypoint.sh with

#!/bin/bash

JETTY_HOSTNAME=$(hostname)
OOM_HEAP_DUMP_NAME=oom-dump-$(date +%Y-%m-%d-%H-%M-%S).hprof

export JETTY_HOSTNAME
export OOM_HEAP_DUMP_NAME

# executes entrypoint provided by upstream jetty image and passes all parameters
exec /docker-entrypoint.sh "$@"

This did work fine, but now in jetty.start we end up with

'-XX:HeapDumpPath=/var/lib/jetty/logs/${JETTY_HOSTNAME}/${OOM_HEAP_DUMP_NAME}'

So the variables are not being expanded anymore.

I then tried to use jetty properties and fill them from env variables, but that didn't work either, or at least I couldn't figure it out. I changed the module to something like

[exec]
-XX:HeapDumpPath=${jetty.base}/logs/${jetty.hostname}/${jetty.oom.heapdump.name}

[ini]
jetty.hostname=unnamed
jetty.oom.heapdump.name=unnamed

and in my Dockerfile I then used

ENV JETTY_PROPERTIES="jetty.hostname=${JETTY_HOSTNAME} jetty.oom.heapdump.name=${OOM_HEAP_DUMP_NAME}"

However, this resulted in the jetty properties being expanded to empty strings, and in jetty.start I end up with

'-XX:HeapDumpPath=/var/lib/jetty/logs//'

joakime commented 1 year ago

The mapping of the environment variables to java properties is the way to go.

Java has a long and bad history with system environment variables (e.g. System.getenv(String) has been deprecated, given a noop impl, restored, undeprecated, broken, deprecated again, voted to restore, undeprecated, had its javadoc rewritten, warnings added, been removed, added back in a different way, had the severity of its language reduced, etc.), so for the sake of your sanity you should view system environment variables as entirely within the scope of shell scripts only. (The idea of using environment variables in Jetty start INIs is out; don't bother attempting that.)

Using JETTY_PROPERTIES means those will only apply during the --dry-run call, and not actually show up in the output of --dry-run.

Use normal System properties in JAVA_OPTIONS to do the mapping.

E.g.:

JAVA_OPTIONS="-Djetty.hostname=${JETTY_HOSTNAME}"

Referencing a system property in Jetty's start language is still ${jetty.hostname}, and the XML can choose to use either <Property> or <SystemProperty> to reference that named property.
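
Concretely, a minimal sketch of that mapping (reusing the hypothetical names from the comments above; this is an illustration, not the image's documented behaviour):

#!/bin/bash
# my-docker-entrypoint.sh (sketch): turn env values into -D system properties.
# Note: kept on a single line, since this very issue is about newlines in JAVA_OPTIONS.
JAVA_OPTIONS="$JAVA_OPTIONS -Djetty.hostname=$(hostname) -Djetty.oom.heapdump.name=oom-dump-$(date +%Y-%m-%d-%H-%M-%S).hprof"
export JAVA_OPTIONS
exec /docker-entrypoint.sh "$@"

with my-jvm.mod keeping only the [exec] section from the second variant above:

[exec]
-XX:HeapDumpPath=${jetty.base}/logs/${jetty.hostname}/${jetty.oom.heapdump.name}
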

jnehlmeier commented 1 year ago

> so for the sake of your sanity you should view system environment variables as entirely within the scope of shell scripts only.

I guess that, because of jetty.exec and the way it was executed, jetty itself never saw the env variables. They were expanded by the shell, and the expanded final jetty command ended up in jetty.start.

But I can see the argument of that being an implementation detail of the docker image.
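
Roughly, the pre-change flow was something like this (a sketch reconstructed from the descriptions in this thread; file names and exact commands may differ from the real entrypoint):

#!/bin/sh
# old docker-entrypoint.sh flow (sketch): the dry-run output still contains
# literal ${JETTY_HOSTNAME} / ${OOM_HEAP_DUMP_NAME}, since start.jar does not
# know those property names...
java -jar "$JETTY_HOME/start.jar" --dry-run > /tmp/jetty.exec
# ...and sourcing the file lets the shell expand them before the JVM starts
. /tmp/jetty.exec
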

> Using JETTY_PROPERTIES means those will only apply during the --dry-run call, and not actually show up in the output of --dry-run.

Hmm, I'm not sure what you mean. If I have

ENV JETTY_PROPERTIES="jetty.deploy.scanInterval=0 jetty.hostname=${JETTY_HOSTNAME}"

in my dockerfile then it will produce a jetty.start containing

... org.eclipse.jetty.xml.XmlConfiguration jetty.deploy.scanInterval=0 jetty.hostname ...

So I guess start.jar tries to look up a jetty property or a system property named JETTY_HOSTNAME and doesn't find it. Thus the replacement is empty, and only jetty.hostname is copied to the final jetty command. Is that what you meant?

Before the above change I could even have a second Dockerfile for a concrete app that builds on my jetty base image and modifies JETTY_PROPERTIES again, e.g.

FROM my-jetty:latest
ENV JETTY_PROPERTIES="$JETTY_PROPERTIES jetty.threadPool.namePrefix=myapp"

This did produce a jetty.start containing jetty.deploy.scanInterval=0 jetty.threadPool.namePrefix=myapp. But I haven't tried how it behaves now, because my jetty base image already failed to start.

tianon commented 10 months ago

> @yosifkit as far as I understand, the Alpine images don't support bash, so that's why we are using sh for the entrypoint.

It's just an apk add --no-cache bash away :eyes:
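
E.g. a minimal sketch (the tag below is illustrative; substitute whichever Alpine variant you actually use):

# hypothetical Alpine-based image tag, shown for illustration only
FROM jetty:9.4-jre11-alpine
# bash is not in the Alpine base image, but is one package install away
RUN apk add --no-cache bash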