Closed vtereso closed 5 years ago
Hi @vtereso, thanks for taking the time to open the issue. While we are having a look at it, would you mind adding the requested data to the issue template (Output of ..., etc.)? It would help us understand and track down the issue.
My suspicion is that Kaniko sets $HOME.
> Hi @vtereso, thanks for taking the time to open the issue. While we are having a look at it, would you mind adding the requested data to the issue template (Output of ..., etc.)?

These outputs are to be filled in using the responses from the quay.io/openshift-pipeline/buildah container?

> My suspicion is that Kaniko sets $HOME.

I don't understand what segment of the command fails to have $HOME, because the alpine, kaniko, buildah, etc. images all have $HOME specified, so my assumption was that some layers that buildah creates don't have this env set and that is why it fails.
> These outputs are to be filled in using the responses from the quay.io/openshift-pipeline/buildah container?

Ideally both: the one used to build the image and the version inside the image.

> I don't understand what segment of the command fails to have $HOME because the alpine, kaniko, buildah, etc. all have $HOME specified so my assumption was that some layers that buildah creates don't have these env so it fails.

I concur. Even setting it via --env does not change anything.
I could trim it down a bit more: quay.io/openshift-pipeline/buildah and all other images (as previously mentioned) have $HOME set. When running buildah inside the container, $HOME is no longer set. That can easily be reproduced by building a simple Dockerfile such as:

FROM golang
RUN echo "HOME=$HOME"

EDIT: this Dockerfile must be built inside the container.
@TomSweeneyRedHat @rhatdan @nalind, any suspicion?
Odd side note:

# # inside the container
# buildah from alpine
# buildah run alpine-working-container echo $HOME
/root

EDIT: That /root certainly came from too-early substitution: the outer shell expanded $HOME before buildah ran (quoting it, e.g. sh -c 'echo $HOME', would defer expansion to the container). The working container's $HOME is empty.
@vtereso I'm still a little confused. Can you add the exact Buildah commands that you used to run into this error for your original log output, please? I don't expect buildah run to work for you, if for no other reason than that the ENTRYPOINT is ignored by the buildah run command. However, podman run should work.
FWIW, after cloning your repo I did:

# mkdir /var/lib/mycontainer
# cd ~/workspace
# buildah bud -t tom -f ~/Dockerfile.badhome .
# podman run -v /var/lib/mycontainer:/var/lib/containers:Z --device /dev/fuse:rw tom

That failed like yours did. As @vrothberg noted, the buildah bud command seems unable to find the HOME envvar when it is run inside a container rather than on a host. I'm not sure why that is.
For a workaround, I edited your workspace/git-source/Dockerfile, adding the ENV line below:

FROM golang as builder
ENV HOME=/root

With that in place, things worked.
This is the problem that I am/was running into. I see that adding an ENV within the Dockerfile seems to fix things; although that isn't optimal, it gets me over this hurdle.
EDIT: It seems to also fail for me:
Sending build context to Docker daemon 180.7kB
Step 1/11 : FROM quay.io/openshift-pipeline/buildah
---> 90833879ccc1
Step 2/11 : ARG GIT_SOURCE="https://github.com/a-roberts/knative-helloworld"
---> Using cache
---> 124d7ac4b4ae
Step 3/11 : ENV HOME="/root"
---> Running in d730ef101246
Removing intermediate container d730ef101246
---> 77a358eba2e4
Step 4/11 : ENV TLS_VERIFY="true"
---> Running in 7fc3f8bec5fc
Removing intermediate container 7fc3f8bec5fc
---> 2ce064583923
Step 5/11 : ENV CONTEXT_PATH="/workspace/git-source"
---> Running in 8cd155235722
Removing intermediate container 8cd155235722
---> 27a742500cf0
Step 6/11 : ENV DOCKERFILE_PATH="/workspace/git-source/Dockerfile"
---> Running in 3fba4913445e
Removing intermediate container 3fba4913445e
---> 55821edd21af
Step 7/11 : ENV TAG="testing"
---> Running in 83394b746b2c
Removing intermediate container 83394b746b2c
---> d0cb5aab27c3
Step 8/11 : WORKDIR ${CONTEXT_PATH}
---> Running in 1a914cf30f61
Removing intermediate container 1a914cf30f61
---> a0ec781e4e68
Step 9/11 : COPY git-source/ .
---> fc3497353168
Step 10/11 : VOLUME "/var/lib/containers"
---> Running in 8f41e4d91768
Removing intermediate container 8f41e4d91768
---> 9980307f34f0
Step 11/11 : ENTRYPOINT ["/bin/sh", "-c", "buildah build-using-dockerfile --tls-verify=${TLS_VERIFY} --layers -f ${DOCKERFILE_PATH} -t ${TAG} -- ${CONTEXT_PATH}"]
---> Running in aba26ecb4ccf
Removing intermediate container aba26ecb4ccf
---> 57ba001db6f6
Successfully built 57ba001db6f6
Successfully tagged buildah-l:latest
vincents-mbp:kaniko_debug Vincent.DeSousa.Tereso@ibm.com$ docker run --privileged buildah-l
STEP 1: FROM golang AS builder
Getting image source signatures
Copying blob sha256:c5e155d5a1d130a7f8a3e24cee0d9e1349bff13f90ec6a941478e558fde53c14
Copying blob sha256:221d80d00ae9675aad24913aacbadfac1ce8b7084f9765a6c0813486082c5c69
Copying blob sha256:4250b3117dca5e14edc32ebf1366cd54e4cda91f17610b76c504a86917ff8b95
Copying blob sha256:3b7ca19181b24b87e24423c01b490633bc1e47d2fcdc1987bf2e37949d6789b5
Copying blob sha256:aa24759e848fee3ef333af3dd3ae951eb042e8cd20b5fc0e28a2f3c52cc7e25f
Copying blob sha256:927e9eaeed1922f626e8a34f9a21b6029f36d4112cbb04dbdbd9065e107a59cb
Copying blob sha256:66293f4dacbd8884954f2c9332298ace627830801c3b484ba89ca424c619f374
Copying config sha256:7ced090ee82ee77beabd76ad1ba3b167acd8609b0b10c4ef46cee3ddf6e6fa5f
Writing manifest to image destination
Storing signatures
STEP 2: WORKDIR /go/src/github.com/knative/docs/helloworld
STEP 3: FROM 6c430deef52c8cbfefa2f0d866083b4ea9c4e8af3970c5bff498ac7b2f47cf65 AS builder
--> 6c430deef52c8cbfefa2f0d866083b4ea9c4e8af3970c5bff498ac7b2f47cf65
STEP 4: COPY . .
STEP 5: FROM 286dc25c758c6418a7e48b0d0c2dfdf8089aa0d6d49812b7d98930f8108a420d AS builder
--> 286dc25c758c6418a7e48b0d0c2dfdf8089aa0d6d49812b7d98930f8108a420d
STEP 6: RUN CGO_ENABLED=0 GOOS=linux go build -v -o helloworld
build cache is required, but could not be located: GOCACHE is not defined and neither $XDG_CACHE_HOME nor $HOME are defined
I took down my environment because I was using the same RHEL VM to run Minishift/OKD (it can only handle so much). Give me a bit and I will update the thread to properly reply.
Thanks @vtereso .
Please do include any buildah/podman commands that you use, along with the output, as you've been doing; it's hard to guess what's what otherwise. For your latest failure, I'm not seeing an ENV HOME in the output. Although ideally you shouldn't need to specify that, it looks like it's required at the moment.
No thoughts at the moment, I'm just playing and looking at different scenarios in hopes of narrowing it down. The login process should be setting that envvar, so perhaps it's not getting invoked properly? But if so, I'd expect the same issue for the initial container...
runc sets $HOME if the configuration it receives doesn't include a value, but we currently don't when we're using chroot isolation. Fixing this probably involves extending pkg/chrootuser to look up home directory locations and, if the spec doesn't include a value for HOME, having chroot set it to the value it finds, or / if no value is found.
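The lookup described here could look roughly like the following sketch: scan the passwd file inside the chroot for the container user's entry and fall back to / when nothing matches. The helper name and inputs are hypothetical illustrations, not buildah's actual pkg/chrootuser code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// lookupHomeDir scans passwd(5)-format data (i.e. etc/passwd inside the
// chroot) for the entry matching uid and returns that user's home
// directory field, falling back to "/" when no entry matches.
// Hypothetical illustration, not the real pkg/chrootuser implementation.
func lookupHomeDir(passwd string, uid string) string {
	scanner := bufio.NewScanner(strings.NewReader(passwd))
	for scanner.Scan() {
		// passwd fields: name:password:UID:GID:GECOS:directory:shell
		fields := strings.Split(scanner.Text(), ":")
		if len(fields) >= 7 && fields[2] == uid {
			return fields[5]
		}
	}
	return "/"
}

func main() {
	passwd := "root:x:0:0:root:/root:/bin/bash\n" +
		"builder:x:1000:1000::/home/builder:/bin/sh\n"
	fmt.Println(lookupHomeDir(passwd, "0"))    // /root
	fmt.Println(lookupHomeDir(passwd, "4242")) // / (fallback)
}
```

The actual fix would then inject the resulting value into the runtime spec's environment as HOME=... before the chrooted process starts, mirroring what runc already does for the other isolation types.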
Per usual, @nalind is spot on. I just tried:

# buildah bud --isolation=oci -t tom -f ~/Dockerfile.badhome .
# podman run -v /var/lib/mycontainer:/var/lib/containers:Z --device /dev/fuse:rw tom

and it seemed to work for me. @vtereso, can you try adding --isolation=oci to your build command and see how things go for you?
@nalind I'll take a look at changing chroot tomorrow, holler if I shouldn't.
Right, @nalind nailed it.
sh-4.4# buildah from golang
golang-working-container-3
sh-4.4# buildah run golang-working-container-3 sh
# echo $GOPATH
/go
It's really just $HOME, all other variables in the spec are properly set.
The OP is using chroot isolation due to being inside a container already.
@TomSweeneyRedHat In my last response I provided the $HOME variable as seen within Step 3, but that did not resolve the issue related to the RUN ... go build layer. Setting the isolation flag (--isolation=oci) did fix things 😄. I was almost hoping there was an error, because I fiddled with almost all the buildah bud flags but seem to have missed that one 😫. I may have tried it, but perhaps not with that setting 🤔. @vbatts Can you explain why isolation is defaulted to chroot rather than oci? https://github.com/containers/buildah/blob/master/docs/buildah-bud.md#options <- This seems to specify that oci would be the default, which would make sense since it is an image and would run on Kube in most instances?
The image you're running is running buildah inside the container: https://github.com/containers/buildah/blob/master/buildahimage/stable/Dockerfile#L27
@vbatts I guess my question is more about the differences between the isolation levels and what they entail, since I am not familiar with them. IIUC, the buildah image is by definition always a container (and buildah commands create containers one level further), and if isolation defaults to chroot, at least for this use case, it will error?
I'll let @vbatts or @nalind talk about the differences in levels, as I'm not very well versed. However, I'm working on putting together a fix so that $HOME will be defined when using chroot isolation, and that should hopefully cure the problem.
Description

Using the following Dockerfile, I am unable to successfully run the buildah bud command:

The resulting log is:

I have gotten Kaniko to build this repository. Curious if this issue is about some flags I have not set properly on the buildah command. I have tried setting the $BUILDAH_ISOLATION env and the corresponding --isolation flag (saw on another issue that it could do something). I have run this on my 3.11 OKD cluster and locally on macOS (although still through the quay images) and experience the same error.

Steps to reproduce the issue:
--privileged

Describe the results you received: Error during go build regarding the ENV $GOCACHE/$HOME not being set (which they are?)

Describe the results you expected: Clean build

Output of rpm -q buildah or apt list buildah:

Output of buildah version:

Output of podman version if reporting a podman build issue:

Output of cat /etc/release:

Output of uname -a:

Output of cat /etc/containers/storage.conf: