vdemeester opened this issue 5 years ago
Looking into the source at https://github.com/openshift/source-to-image/blob/3bdb78d004cc1f012b0eb4cb8aca41273bb6e6cf/pkg/build/strategies/dockerfile/dockerfile.go#L415-L423, it's really only about the options --scripts-url and --image-scripts-url (with a fallback to /usr/libexec/s2i/).
In fact, it would be super awesome if this detection could be done automatically as all information is already there. No need to bug the user to know where within the builder image the s2i directory is located.
The challenge is how to discover the information. The intent of the --as-dockerfile invocation is that we have no reliance on any specific tool/library that would be used to inspect the image in question, thus we can't look at the labels to determine this information. (If we assume the presence of a docker daemon and use that to inspect the image, we might as well just run s2i in the traditional mode where it builds the whole image for you).
> The intent of the --as-dockerfile invocation is that we have no reliance on any specific tool/library that would be used to inspect the image in question, thus we can't look at the labels to determine this information.
I do get the "no reliance on any specific tool" part, but not the library part. We could easily re-use containers/image as a library (same as skopeo does, I think) to query the image registry for the image labels. I don't think a dependency on containers/image would be that bad: it would allow s2i to do more image introspection without depending on the presence of any docker daemon (there is even a dependency on docker/distribution already, so it could even be done with that one instead).
> If we assume the presence of a docker daemon and use that to inspect the image, we might as well just run s2i in the traditional mode where it builds the whole image for you.
We definitely can't (or do not want to) assume the presence of a docker daemon, especially on OpenShift 4.x; the same is true for some GKE cluster versions where docker is not even there anymore (they use cri-containerd instead).
@bparees related to https://github.com/openshift/source-to-image/issues/966 (especially the latest comments), I am guessing OpenShift 4 is either affected by that issue or (most likely) does the remote inspect on its own, right? If so, I feel it would be beneficial to have this part in the repository so s2i build --as-dockerfile would act and work more closely to what the OpenShift 4.x source-to-image strategy does.
> I am guessing OpenShift 4 is either affected by that issue or (most likely) does the remote inspect on its own, right?
openshift4 does its own image inspect prior to invoking the s2i bits.
> If it's true, I feel it would be beneficial to have this part in the repository so s2i build --as-dockerfile would act and work more closely to what OpenShift 4.x source to image strategy does.
it could be done, yes.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten /remove-lifecycle stale
/lifecycle frozen
May I point out that all S2I-related ClusterTasks that come with OpenShift Pipelines use s2i build ... --as-dockerfile and are thus affected by this bug. In other words, all these ClusterTasks silently depend on the S2I machinery being located below /usr/libexec/s2i (with the exception of s2i-java, which expects the S2I machinery below /usr/local/s2i). They effectively ignore the image label io.openshift.s2i.scripts-url. I always wondered why there is no generic S2I ClusterTask, and tried to implement one myself. I got pretty far, until I had to deal with the Java base images.
As long as this bug persists, you (I mean the Red Hat engineers) will not be able to change the location of the S2I machinery within any S2I base image (as was done with the Java S2I base image some time in the past).
We've hit this (again) in three customer cases over the last couple of weeks; another relevant label this time is io.openshift.s2i.assemble-user. We've had subtle behavioural differences because ignoring that label and using 1001 only partly works for our images.
For io.openshift.s2i.scripts-url we can mitigate this in the Java images by moving the scripts to align with s2i's default (https://issues.redhat.com/browse/OPENJDK-2805). But I don't think we can change our default UID without risking disruption elsewhere.
Aside from fixing this in s2i at the code level (appreciating that this is not straightforward), it would be a good idea to note in the docs that the labels don't do anything in --as-dockerfile mode, e.g. in https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
It seems that s2i build --as-dockerfile doesn't read the builder image labels when generating the Dockerfile, especially the io.openshift.s2i.* ones. As an example, see this gist. The image used there is registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift. When trying to build the generated Dockerfile from this builder, it fails as it tries to execute a RUN on /usr/libexec/s2i/assemble, which doesn't exist in that image. Related to https://github.com/openshift/source-to-image/issues/966#issuecomment-495169765.
cc @siamaksade @rhuss