Closed kevin-bates closed 1 year ago
Hi @luong-komorebi - thank you for taking a look at this and the great insight regarding `tini` and `HEALTHCHECK`!
I first noticed the latter when I implemented the PR to introduce a base image (`docker-stacks-foundation`), which splits the base notebook image into a further base image from which the notebook base image now derives. Although I didn't update the kernel images in EG, this `docker-stacks-foundation` image serves as the base image for the kernel-based images that we either produce or document in remote_provisioners.
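A rough sketch of the new derivation chain (image tags and layer contents here are illustrative, not taken from the actual Dockerfiles):

```dockerfile
# docker-stacks-foundation now sits beneath base-notebook, so kernel
# images can build on the foundation layer without the server stack.
FROM jupyter/docker-stacks-foundation:latest
# ... kernel-specific packages and launch scripts would be added here ...
```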
Since our kernel images in EG derive from "server-based" images, you bring up a good point about disabling the `HEALTHCHECK` in them, and I've gone ahead and created issue #1218. I suspect the `HEALTHCHECK` is still valid for the EG image itself, but we should probably revisit that topic when resolving that issue.
This topic also reminds me to introduce a new label named `remote-provisioners`, which indicates that the EG issue or PR would also apply to remote_provisioners (and likely have an analogous `enterprise-gateway` label in remote_provisioners).
I think we can go ahead with this PR, assuming it checks out with others, but let's not cut the next (patch?) release until we've added `HEALTHCHECK NONE` to our kernel images. Thank you for raising this.
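For context, disabling an inherited health check is a one-line Dockerfile change. A minimal sketch, assuming a kernel image derives from a server-based image (the image name is illustrative):

```dockerfile
# Kernel containers don't run a Jupyter server, so a HEALTHCHECK
# inherited from a server-based parent image would never pass.
# HEALTHCHECK NONE disables any check inherited from the base image.
FROM jupyter/base-notebook:latest
HEALTHCHECK NONE
```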
Here's output showing these images now start after these changes:
```shell
$ docker run -it --rm elyra/kernel-spark-py:dev whoami
++ id -u
+ myuid=1000
++ id -g
+ mygid=100
+ set +e
++ getent passwd 1000
+ uidentry=jovyan:x:1000:100::/home/jovyan:/bin/bash
+ set -e
+ '[' -z jovyan:x:1000:100::/home/jovyan:/bin/bash ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ echo 'Non-spark-on-k8s command provided, proceeding in pass-through mode...'
Non-spark-on-k8s command provided, proceeding in pass-through mode...
+ CMD=("$@")
+ exec /usr/bin/tini -g -- whoami
jovyan

$ docker run -it --rm elyra/kernel-spark-r:dev whoami
++ id -u
+ myuid=1000
++ id -g
+ mygid=100
+ set +e
++ getent passwd 1000
+ uidentry=jovyan:x:1000:100::/home/jovyan:/bin/bash
+ set -e
+ '[' -z jovyan:x:1000:100::/home/jovyan:/bin/bash ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ echo 'Non-spark-on-k8s command provided, proceeding in pass-through mode...'
Non-spark-on-k8s command provided, proceeding in pass-through mode...
+ CMD=("$@")
+ exec /usr/bin/tini -g -- whoami
jovyan
```
Apparently, the "last minute" change made in #1207 for 3.1.0 came back to bite me! Those base images do not include `/opt/conda/bin/tini`, which we preferred over the default `/usr/bin/tini`. As a result, the adjustments we make in our Spark images that derive from the images built from those docker-stacks images were DOA. (What could possibly go wrong, right? :smile:)

This pull request removes the use of `/opt/conda/bin/tini` and merely performs the previous adjustment of `-s` to `-g` relative to `/usr/bin/tini`.

Fixes: #1216
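To illustrate the resulting setup (the image name and layers are illustrative, not the exact files touched by this PR), a derived kernel image would now point its entrypoint at the stock `tini`:

```dockerfile
# Hypothetical sketch: fall back to the distribution's /usr/bin/tini,
# since /opt/conda/bin/tini no longer exists in the base images.
# -g sends signals to the child's entire process group, whereas the
# previously used -s only registered tini as a subreaper.
FROM jupyter/docker-stacks-foundation:latest
ENTRYPOINT ["/usr/bin/tini", "-g", "--"]
CMD ["whoami"]
```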