magick93 closed this issue 7 years ago.
I wonder if this is related to using the old-style kubernetes-pipeline-plugin approach vs the new kubernetes-plugin style. I.e. you could try the new approach, assuming that jhipster/jhipster is your builder image, something like:
@Library('github.com/fabric8io/fabric8-pipeline-library@master')
def dummy
dockerTemplate{
    mavenNode(mavenImage: 'jhipster/jhipster') {
        container(name: 'maven') {
            checkout scm
            stage 'Canary Release'
            mavenCanaryRelease{
                version = canaryVersion
            }
            stage 'Integration Test'
            mavenIntegrationTest{
                environment = 'Testing'
                failIfNoTests = localFailIfNoTests
                itestPattern = localItestPattern
            }
            stage 'Rolling Upgrade Production'
            def rc = readFile 'target/classes/kubernetes.json'
            kubernetesApply(file: rc, environment: envProd)
        }
    }
}
I get the same error with the above Jenkinsfile.
Can you try one of the fabric8 "create new project" apps please? I.e. in the fabric8 console, create a new project, select say the Microservices project, and select the Canary Release and Stage pipeline to see if that goes through successfully.
We are unable to login to the console due to https://github.com/fabric8io/fabric8-console/issues/257
I am wondering if this is related to the way arbitrary user ids are handled in OpenShift.
@rawlingsj - now that we can access the f8 console, we created a Spring Boot Web MVC QuickStart example app. This did build fine the first time. Subsequent builds fail with:
Timed out waiting for pods/services!
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 315.624 sec <<< FAILURE! - in org.example.KubernetesIntegrationKT
org.example.KubernetesIntegrationKT Time elapsed: 315.623 sec <<< ERROR!
java.lang.RuntimeException: java.lang.IllegalStateException: Failed to apply kubernetes configuration.
Nonetheless, this is hopefully helpful.
I should add, this test app was created in the same OpenShift project, and used the same Jenkins as our app that is having problems.
@rawlingsj - when I tried the Jenkinsfile in the Spring Boot Web MVC QuickStart example app it also fails with:
[f8test] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-87f59e31eac64eeabef929673f2019e6-29cc6924696c]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/f8test@tmp/durable-38812024/pid'; jsc=durable-8d9b47991d65e79b5ef07257f76d9da4; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/f8test@tmp/durable-38812024/script.sh' > '/home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-result.txt'
$ cd /home/jenkins/workspace/f8test
sh -c echo $$ > '/home/jenkins/workspace/f8test@tmp/durable-38812024/pid'; jsc=durable-8d9b47991d65e79b5ef07257f76d9da4; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/f8test@tmp/durable-38812024/script.sh' > '/home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-result.txt'
exit
$ /bin/sh: 2: cannot create /home/jenkins/workspace/f8test@tmp/durable-38812024/pid: Permission denied
/bin/sh: 2: cannot create /home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-log.txt: Permission denied
/bin/sh: 2: cannot create /home/jenkins/workspace/f8test@tmp/durable-38812024/jenkins-result.txt: Permission denied
$ command terminated with non-zero exit code: Error executing in Docker Container: 2
[Pipeline] }
@iocanel - can you help to clarify - we have two Jenkinsfiles in this issue, and neither works. Which is the new style and which is the legacy? Or are they just different syntaxes for the same library? Are there alternatives that do what the Jenkinsfiles describe?
Currently, regardless of how you express things, under the hood everything goes through the kubernetes-plugin.
The syntax that comes with the plugin is the following:
podTemplate(...) {
}
On top of that, the Fabric8 Pipeline Library adds some sugar to make it easier to compose podTemplate with multiple different containers. For example:
dockerTemplate {
    mavenTemplate {
    }
}
Even the old/legacy Kubernetes Pipeline Plugin internally uses the same concepts (e.g. podTemplate).
kubernetes.pod('mypod').image('myimage').inside {
}
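Put together, a minimal pipeline in the plugin-native podTemplate syntax could look like the sketch below. This is illustrative only: the label, image, and shell step are assumptions, not taken from this issue.

```groovy
// Sketch only: 'mypod' and the maven image are assumed names for illustration.
podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('mypod') {
        container('maven') {
            // Any sh step here executes inside the 'maven' container of the pod.
            sh 'mvn -v'
        }
    }
}
```

The sugar shown above (dockerTemplate/mavenTemplate) ultimately expands to a podTemplate of this shape.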
In any case, the problem you are having doesn't have to do with the syntax of the plugin, but something else. (Not sure what, but my guess is that it's due to OpenShift's arbitrary user ids.)
Thanks
not sure what but my guess is that its due to Openshift's arbitrary user ids
How can we investigate this hypothesis further?
My hypothesis is wrong. It's not due to arbitrary user ids.
Is there anything we can do to debug the issue?
This may be related - https://issues.jenkins-ci.org/browse/JENKINS-37069
Ah interesting - that had jogged my memory and we have a doc on the SELinux workaround suggested in that link.
@magick93: can you try removing
, workingDir: '/home/jenkins')
and tell me if it changes anything?
...tell me if it changes anything?
Unfortunately no change - same error.
@rawlingsj - the link you gave has a reference to another page:
An example security context constraint that configures myserviceacccount in the default namespace can be found here
But the "here" link is a 404. Do you know where it should point to?
On https://github.com/jenkinsci/kubernetes-pipeline-plugin/blob/master/kubernetes-steps/readme.md#technical-notes, it says:
Under the hood the plugin is using hostPath mounts. This requires two things:
- A service account associated with a security context constraint that allows hostPath mounts.
- A host capable of hostPath mounts.
In the Jenkins SCC we have:
Settings:
Allow Privileged: true
Default Add Capabilities: <none>
Required Drop Capabilities: <none>
Allowed Capabilities: <none>
Allowed Volume Types: *
Allow Host Network: true
Allow Host Ports: true
Allow Host PID: false
Allow Host IPC: false
I did not see any setting specifically mentioning 'hostPath' in the SCC output.
This looks related - https://issues.jenkins-ci.org/browse/JENKINS-37069
The Docker Pipeline plugin and the Kubernetes plugin use the same principle: share the workspace via volumes so that you can exec shell commands in the container. What is quite different in your case is that sharing is done via pod volumes, so there shouldn't be any fancy permission issue. The fabric8/jenkins-docker image works nicely in OpenShift (no issues with arbitrary user ids).
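As an aside, if hostPath permissions were the suspect, the kubernetes-plugin can also share the workspace through an emptyDir pod volume instead of a hostPath mount. A hedged sketch follows; the `workspaceVolume`/`emptyDirWorkspaceVolume` option names depend on the plugin version, and the label and image are assumed:

```groovy
// Sketch: request an emptyDir volume for the shared workspace,
// avoiding hostPath and its SELinux/SCC requirements.
// 'mypod' and the maven image are assumed names for illustration.
podTemplate(label: 'mypod',
    workspaceVolume: emptyDirWorkspaceVolume(false),
    containers: [
        containerTemplate(name: 'maven', image: 'maven:3-jdk-8', ttyEnabled: true, command: 'cat')
]) {
    node('mypod') {
        container('maven') {
            sh 'id && pwd'
        }
    }
}
```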
In some tests I ran locally, I managed to get your Jenkinsfile (a trimmed-down version of it) working by completely removing the workingDir directive.
removing the workingDir directive
My Jenkinsfile doesn't specify a workingDir, and I don't have a Jenkins job setting for a workingDir. How do you remove this directive?
One of the variations you sent me did specify a workingDir (I am referring to the updated one).
Can anyone suggest any workarounds? We have been trying to deploy our apps for a long time and would so appreciate finding anything that works.
You shouldn't need it AFAIK but have you tried running the Jenkins master as a privileged pod? I think I suggested that along the way but I can't see it on this thread so perhaps it's not been attempted.
So oc edit dc jenkins
and add the privileged security context like in this example; note the last two lines:
image: fabric8/jenkins-docker:2.2.311
imagePullPolicy: IfNotPresent
livenessProbe:
  httpGet:
    path: /blue/
    port: 8080
  initialDelaySeconds: 120
  timeoutSeconds: 10
name: jenkins
securityContext:
  privileged: true
I have tried that, but ran into these issues: https://github.com/openshift/origin-web-console/issues/1221 https://github.com/openshift/origin/issues/12819
It looks to me from those two issues that you're adding the security context to the wrong part of the YAML. Have you tried adding it to the container as in my example above? If it still fails, can you copy the DeploymentConfig here so I can see where you're adding the privileged securityContext?
Ah yes, you're right. I was adding it to the wrong section.
I tried again, this time adding it to the container section. It saved correctly. But upon running the Jenkins job again, the same error occurs.
Is there anything I can do to provide more helpful information, eg, is there some logging or bash code that I can add to help understand this issue?
@magick93: I am starting to believe that the permission issues are because of your image.
Can you please create a simple pipeline project in Jenkins, use the following script: https://gist.github.com/iocanel/500b5dfc4ab2b65306f52f765fff74d3, and tell me if it works (runs without those permission errors)?
If it works, can you then replace it with your actual image so that we can crosscheck?
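The gist itself is not inlined in this thread, but judging from the console output that follows, it is essentially a diagnostic pipeline along these lines (the pod label and image names here are assumptions for illustration):

```groovy
// Diagnostic sketch: run trivial shell steps in the container under test
// to surface workspace permission errors. Swap the image for
// jhipster/jhipster (or your own) to crosscheck.
podTemplate(label: 'diagnostic', containers: [
    containerTemplate(name: 'maven', image: 'maven', ttyEnabled: true, command: 'cat')
]) {
    node('diagnostic') {
        container('maven') {
            sh 'pwd'
            sh 'whoami'
            sh 'echo hello world'
        }
    }
}
```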
Here, it works fine with the maven image, but doesn't work with jhipster/jhipster:
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-888f3739fb764fdbb3b1cfe724692645-68d84cb387b4]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/jenkins-result.txt'
# cd /home/jenkins/workspace/copy-of-sw
sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-faf13645/jenkins-result.txt'
exit
# # + pwd
/home/jenkins/workspace/copy-of-sw
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-888f3739fb764fdbb3b1cfe724692645-68d84cb387b4]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/jenkins-result.txt'
# cd /home/jenkins/workspace/copy-of-sw
sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/jenkins-result.txt'
exit
# # + whomi
/home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/script.sh: 2: /home/jenkins/workspace/copy-of-sw@tmp/durable-8e0ef96d/script.sh: whomi: not found
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
So, typo aside (whomi should be whoami), you don't have the permissions issue, right? Can you fix the typo and try again?
Yes, correct, no more permission issues:
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-19810d3295bf440788adde9199a30211-6af47a58ecd6]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/jenkins-result.txt'
# cd /home/jenkins/workspace/copy-of-sw
sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-0914b9b6/jenkins-result.txt'
exit
# # + pwd
/home/jenkins/workspace/copy-of-sw
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-19810d3295bf440788adde9199a30211-6af47a58ecd6]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/jenkins-result.txt'
# cd /home/jenkins/workspace/copy-of-sw
sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-409cc4f2/jenkins-result.txt'
exit
# # + whoami
root
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [maven] of pod [kubernetes-19810d3295bf440788adde9199a30211-6af47a58ecd6]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/jenkins-result.txt'
# cd /home/jenkins/workspace/copy-of-sw
sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-722305a5/jenkins-result.txt'
exit
# # + echo hello world
hello world
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS
So, the problem is with the jhipster/jhipster image.
I'm not sure exactly why it causes a problem (I will have to investigate further), but I think it's safe to assume that it's due to the image.
I would just use the maven image. After all, the jhipster image is only used for generating the initial project and is not required for the build (from a quick glance at the docs).
If for any reason your build does require a custom Docker image, then you'd better create one of your own. (If I had to go down this road, I would try to modify the original one and change things that could affect permissions, etc. For example: not using a custom user.)
Ok, but how do we package our .war file so it is deployed in the jhipster image?
And I'm not sure, but I believe the jhipster image is needed for building, as it uses both Maven and Node.
I do know it used to work; we used to be able to do this.
We also do have a custom jhipster image. Currently we are blocked from using this due to https://github.com/openshift/origin/issues/12863
However, is it possible to build from a Dockerfile, push to the OpenShift registry, and then deploy using this image, from Jenkins Groovy?
This is what used to work, approx a year ago.
FROM node

RUN apt-get update && apt-get install -y --no-install-recommends \
      bzip2 \
      unzip \
      xz-utils \
    && rm -rf /var/lib/apt/lists/*

RUN echo 'deb http://httpredir.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list

# Default to UTF-8 file.encoding
ENV LANG C.UTF-8

# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
      echo '#!/bin/sh'; \
      echo 'set -e'; \
      echo; \
      echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
    } > /usr/local/bin/docker-java-home \
    && chmod +x /usr/local/bin/docker-java-home

ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u72
ENV JAVA_DEBIAN_VERSION 8u72-b15-1~bpo8+1

# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324

RUN set -x \
    && apt-get update \
    && apt-get install -y \
      openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
      ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get install -y git \
    && apt-get clean \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]

# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure

ENV MAVEN_VERSION 3.3.3

RUN mkdir -p /usr/share/maven \
    && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
      | tar -xzC /usr/share/maven --strip-components=1 \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn \
    && npm set progress=false \
    && npm install --global --progress=false gulp bower \
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && mkdir /workdir

ENV MAVEN_HOME /usr/share/maven

EXPOSE 8080
WORKDIR /workdir
CMD npm install && bower install && $MAVEN_HOME/bin/mvn -Pprod
COPY . /workdir
#!/usr/bin/groovy
def failIfNoTests = ""
try {
    failIfNoTests = ITEST_FAIL_IF_NO_TEST
} catch (Throwable e) {
    failIfNoTests = "false"
}

def itestPattern = ""
try {
    itestPattern = ITEST_PATTERN
} catch (Throwable e) {
    itestPattern = "*KT"
}

def versionPrefix = ""
try {
    versionPrefix = VERSION_PREFIX
} catch (Throwable e) {
    versionPrefix = "1.0"
}

def canaryVersion = "${versionPrefix}.${env.BUILD_NUMBER}"
def utils = new io.fabric8.Utils()

node {
    def envProd = 'shiftwork-production'
    checkout scm
    kubernetes.pod('buildpod').withImage('<ip address>:80/shiftwork/jhipster-build')
        .withPrivileged(true)
        .withHostPathMount('/var/run/docker.sock','/var/run/docker.sock')
        .withEnvVar('DOCKER_CONFIG','/home/jenkins/.docker/')
        .withSecret('jenkins-docker-cfg','/home/jenkins/.docker')
        .withSecret('jenkins-maven-settings','/root/.m2')
        .withServiceAccount('jenkins')
        .inside {
            stage 'Canary Release'
            mavenCanaryRelease{
                version = canaryVersion
            }
            stage 'Integration Test'
            mavenIntegrationTest{
                environment = 'Testing'
                failIfNoTests = localFailIfNoTests
                itestPattern = localItestPattern
            }
            stage 'Rolling Upgrade Production'
            def rc = readFile 'target/classes/kubernetes.json'
            kubernetesApply(file: rc, environment: envProd)
        }
}
Are you able to push a test project to GitHub that contains an example of the type of project you're trying to build and deploy? If we have a test project and are able to recreate your issue I'm confident we can find a solution.
Thanks James - I've pushed to https://github.com/magick93/shiftwork
I found that there is a https://github.com/jhipster/jhipster-ci-stack but even with this I still get the same errors as with previous attempts.
@magick93 thanks for the project link, I'll try and take a look tonight. I did manage to recreate the issue with that jhipster image. I'll try and come up with something soon.
@rawlingsj - You cannot understand how much I appreciate your help!
That's ok I understand what it's like.
FWIW I've got it running on minikube once I realised it was nodejs5 and deployed PostgreSQL too, I've also upgraded the app to use the new fabric8 maven plugin. I recreated your issue btw, next I'll add a custom jhipster-builder image to use in your Jenkinsfile and you should be good to go. I've had to stop working at the moment but will try and push what I have before calling it a night. But I think we're pretty much there.
Ok just submitted a PR - I've not been able to test on openshift yet I'm afraid but I'm out of time this weekend. It all seems to be working on minikube so hopefully it will be fine for you. Note that the Jenkinsfile is pointing to my library at the moment where I've added a jhipster node and builder image. I've also added a few extras which means you get a few more kubernetes benefits:
NOTE: the first shiftwork deployment will fail its readiness check because it's waiting for the postgres service to be ready. Kubernetes will automatically restart the pod after a short while and connect to postgres.
I'll be around from Monday in case you have any questions or run into problems @magick93, fingers crossed this gets you going!
BTW I forgot to say rather than importing the project in the fabric8 UI, for now I just created a new Jenkins pipeline job and pointed the pipeline (using the SCM option) to the github project. The rest should just work.
Thank you so much!
I'm testing this, and I certainly seem to be getting further; however, the build still fails with the below. Maybe this is an OpenShift error, I'm not sure. Does this error mean the image cannot be pushed to the internal registry?
[INFO] --- fabric8-maven-plugin:3.2.20:build (default) @ shiftwork ---
[INFO] F8: Using OpenShift build with strategy S2I
[INFO] Copying files to /home/jenkins/workspace/copy-of-sw/target/docker/staffrostering/shiftwork/1.0.128/build/maven
[INFO] Building tar: /home/jenkins/workspace/copy-of-sw/target/docker/staffrostering/shiftwork/1.0.128/tmp/docker-build.tar
[INFO] F8: [staffrostering/shiftwork:1.0.128]: Created docker source tar /home/jenkins/workspace/copy-of-sw/target/docker/staffrostering/shiftwork/1.0.128/tmp/docker-build.tar
[INFO] F8: Creating BuildConfig shiftwork-s2i for Source build
[INFO] F8: Adding to ImageStream shiftwork
[INFO] F8: Starting Build shiftwork-s2i
sh-4.2# exit
exit
[INFO] F8: Waiting for build shiftwork-s2i-1 to complete...
[INFO] F8: warning: Image sha256:1a13e31efd4b230e2c061ed8d07e479f0dfd38f9ebe6f380e8db3d111d1ec577 does not contain a value for the io.openshift.s2i.scripts-url label
[INFO] F8: Receiving source from STDIN as archive ...
[INFO] F8: error: build error: failed to install [assemble run]
[INFO] WebSocket successfully opened
[INFO] F8: Build shiftwork-s2i-1 status: Failed
[ERROR] F8: OpenShift Build shiftwork-s2i-1 Failed
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:33 min
[INFO] Finished at: 2017-02-11T11:29:18+00:00
[INFO] Final Memory: 102M/1744M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.2.20:build (default) on project shiftwork: OpenShift Build shiftwork-s2i-1 Failed -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.2.20:build (default) on project shiftwork: OpenShift Build shiftwork-s2i-1 Failed
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: OpenShift Build shiftwork-s2i-1 Failed
at io.fabric8.maven.core.openshift.OpenShiftBuildService.waitForOpenShiftBuildToComplete(OpenShiftBuildService.java:80)
at io.fabric8.maven.plugin.mojo.build.BuildMojo.executeOpenShiftBuild(BuildMojo.java:333)
at io.fabric8.maven.plugin.mojo.build.BuildMojo.buildAndTag(BuildMojo.java:247)
at io.fabric8.maven.docker.BuildMojoNoFork.executeInternal(BuildMojoNoFork.java:46)
at io.fabric8.maven.plugin.mojo.build.BuildMojo.executeInternal(BuildMojo.java:228)
at io.fabric8.maven.docker.AbstractDockerMojo.execute(AbstractDockerMojo.java:208)
at io.fabric8.maven.plugin.mojo.build.BuildMojo.execute(BuildMojo.java:211)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more
[INFO] WebSocket close received. code: 1000, reason:
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[WARNING] Ignoring onClose for already closed/closing websocket
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
$ oc describe pod shiftwork-s2i-2-build
Name: shiftwork-s2i-2-build
Namespace: default
Security Policy: jenkins
Node: 176.9.36.15/176.9.36.15
Start Time: Sat, 11 Feb 2017 13:12:10 +0100
Labels: openshift.io/build.name=shiftwork-s2i-2
Status: Failed
IP: 172.17.0.14
Controllers: <none>
Containers:
sti-build:
Container ID: docker://49d0df757a58d220a5f0f1e485d284773975dcd00d4adb3d7c4561ee1443d376
Image: openshift/origin-sti-builder:v1.4.1
Image ID: docker-pullable://docker.io/openshift/origin-sti-builder@sha256:c46d9a24e59019032d21acdbb45fcdfce2eed2ac111a5961f28f5023b7f7aaab
Port:
Args:
--loglevel=0
State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 11 Feb 2017 13:12:16 +0100
Finished: Sat, 11 Feb 2017 13:12:21 +0100
Ready: False
Restart Count: 0
Volume Mounts:
/var/run/docker.sock from docker-socket (rw)
/var/run/secrets/kubernetes.io/serviceaccount from builder-token-1mmaj (ro)
/var/run/secrets/openshift.io/push from builder-dockercfg-x84j4-push (ro)
Environment Variables:
BUILD: {"kind":"Build","apiVersion":"v1","metadata":{"name":"shiftwork-s2i-2","namespace":"default","selfLink":"/oapi/v1/namespaces/default/builds/shiftwork-s2i-2","uid":"50334373-f053-11e6-a274-406186becd9d","resourceVersion":"660382","creationTimestamp":"2017-02-11T12:12:09Z","labels":{"buildconfig":"shiftwork-s2i","group":"com.teammachine.staffrostering","openshift.io/build-config.name":"shiftwork-s2i","openshift.io/build.start-policy":"Serial","project":"shiftwork","provider":"fabric8","version":"1.0.128"},"annotations":{"openshift.io/build-config.name":"shiftwork-s2i","openshift.io/build.number":"2"}},"spec":{"serviceAccount":"builder","source":{"type":"Binary","binary":{}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"docker.io/fabric8/java-jboss-openjdk8-jdk:1.0.10"}}},"output":{"to":{"kind":"DockerImage","name":"172.30.139.137:5000/default/shiftwork:1.0.129"},"pushSecret":{"name":"builder-dockercfg-x84j4"}},"resources":{},"postCommit":{},"nodeSelector":{},"triggeredBy":null},"status":{"phase":"New","outputDockerImageReference":"172.30.139.137:5000/default/shiftwork:1.0.129","config":{"kind":"BuildConfig","namespace":"default","name":"shiftwork-s2i"}}}
ORIGIN_VERSION: v1.4.1+3f9807a
PUSH_DOCKERCFG_PATH: /var/run/secrets/openshift.io/push
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
docker-socket:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
builder-dockercfg-x84j4-push:
Type: Secret (a volume populated by a Secret)
SecretName: builder-dockercfg-x84j4
builder-token-1mmaj:
Type: Secret (a volume populated by a Secret)
SecretName: builder-token-1mmaj
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
42m 42m 1 {default-scheduler } Normal Scheduled Successfully assigned shiftwork-s2i-2-build to 176.9.36.15
42m 42m 1 {kubelet 176.9.36.15} spec.containers{sti-build} Normal Pulled Container image "openshift/origin-sti-builder:v1.4.1" already present on machine
42m 42m 1 {kubelet 176.9.36.15} spec.containers{sti-build} Normal Created Created container with docker id 49d0df757a58; Security:[seccomp=unconfined]
42m 42m 1 {kubelet 176.9.36.15} spec.containers{sti-build} Normal Started Started container with docker id 49d0df757a58
I think the remaining issue is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1324194
I managed to get past the previous error by changing
<docker.from>docker.io/fabric8/java-jboss-openjdk8-jdk:1.0.10</docker.from>
to
<docker.from>fabric8/s2i-java</docker.from>
Now, however, I'm once again getting Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[INFO] F8: Starting S2I Java Build .....
[INFO] F8: S2I binary build from fabric8-maven-plugin detected
[INFO] F8: Copying binaries from /tmp/src/maven to /deployments ...
[INFO] F8: ... done
[INFO] F8:
[INFO] F8:
[INFO] F8: Pushing image 172.30.139.137:5000/default/shiftwork:1.0.134 ...
[INFO] F8: Pushed 0/23 layers, 0% complete
[INFO] F8: Pushed 1/23 layers, 4% complete
[INFO] F8: Pushed 2/23 layers, 9% complete
[INFO] F8: Pushed 3/23 layers, 13% complete
[INFO] F8: Pushed 4/23 layers, 18% complete
[INFO] F8: Pushed 5/23 layers, 22% complete
[INFO] F8: Pushed 6/23 layers, 27% complete
[INFO] F8: Pushed 7/23 layers, 32% complete
[INFO] F8: Pushed 8/23 layers, 36% complete
[INFO] F8: Pushed 9/23 layers, 40% complete
[INFO] F8: Pushed 10/23 layers, 45% complete
[INFO] F8: Pushed 11/23 layers, 50% complete
[INFO] F8: Pushed 12/23 layers, 55% complete
[INFO] F8: Pushed 13/23 layers, 58% complete
[INFO] F8: Pushed 14/23 layers, 64% complete
[INFO] F8: Pushed 15/23 layers, 70% complete
[INFO] F8: Pushed 16/23 layers, 73% complete
[INFO] F8: Pushed 17/23 layers, 76% complete
[INFO] F8: Pushed 18/23 layers, 80% complete
[INFO] F8: Pushed 19/23 layers, 85% complete
[INFO] F8: Pushed 20/23 layers, 90% complete
[INFO] F8: Pushed 21/23 layers, 96% complete
[INFO] F8: Pushed 22/23 layers, 99% complete
[INFO] F8: Pushed 23/23 layers, 100% complete
[INFO] F8: Push successful
[INFO] F8: Build shiftwork-s2i-6 status: Complete
[INFO] F8: Build shiftwork-s2i-6 Complete
[INFO] WebSocket close received. code: 1000, reason:
[WARNING] Ignoring onClose for already closed/closing websocket
[INFO] F8: Found tag on ImageStream shiftwork tag: sha256:6fefcf8a224cdebde0b4e07539fc15721a6de2c2e062a9b66afa51d9c74d9b44
[INFO] F8: ImageStream shiftwork written to /home/jenkins/workspace/copy-of-sw/target/shiftwork-is.yml
[INFO]
[INFO] --- sortpom-maven-plugin:2.5.0:sort (default) @ shiftwork ---
[INFO] Sorting file /home/jenkins/workspace/copy-of-sw/pom.xml
[INFO] Saved backup of /home/jenkins/workspace/copy-of-sw/pom.xml to /home/jenkins/workspace/copy-of-sw/pom.xml.bak
[INFO] Saved sorted pom file to /home/jenkins/workspace/copy-of-sw/pom.xml
[INFO]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ shiftwork ---
[INFO] Installing /home/jenkins/workspace/copy-of-sw/target/shiftwork-1.0.134.war to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.war
[INFO] Installing /home/jenkins/workspace/copy-of-sw/pom.xml to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.pom
[INFO] Installing /home/jenkins/workspace/copy-of-sw/target/classes/META-INF/fabric8/openshift.yml to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.yml
[INFO] Installing /home/jenkins/workspace/copy-of-sw/target/classes/META-INF/fabric8/openshift.json to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.json
[INFO] Installing /home/jenkins/workspace/copy-of-sw/target/classes/META-INF/fabric8/kubernetes.yml to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.yml
[INFO] Installing /home/jenkins/workspace/copy-of-sw/target/classes/META-INF/fabric8/kubernetes.json to /root/.mvnrepository/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.json
[INFO]
[INFO] --- maven-deploy-plugin:2.8.2:deploy (default-deploy) @ shiftwork ---
[INFO] Using alternate deployment repository local-nexus::default::http://nexus/content/repositories/staging/
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.war
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.war (75394 KB at 30462.2 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.pom
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134.pom (38 KB at 418.2 KB/sec)
[INFO] Downloading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/maven-metadata.xml
[INFO] Downloaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/maven-metadata.xml (993 B at 3.1 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/maven-metadata.xml
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/maven-metadata.xml (2 KB at 26.4 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.yml
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.yml (7 KB at 59.7 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.json
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-openshift.json (9 KB at 487.0 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.yml
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.yml (7 KB at 318.4 KB/sec)
[INFO] Uploading: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.json
[INFO] Uploaded: http://nexus/content/repositories/staging/com/teammachine/staffrostering/shiftwork/1.0.134/shiftwork-1.0.134-kubernetes.json (9 KB at 126.1 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:39 min
[INFO] Finished at: 2017-02-11T17:15:46+00:00
sh-4.2# exit
exit
[INFO] Final Memory: 110M/2300M
[INFO] ------------------------------------------------------------------------
[Pipeline] echo
Running on a single node, skipping docker push as not needed
[Pipeline] sh
[copy-of-sw] Running shell script
Executing shell script inside container [jhipster] of pod [kubernetes-080210addb894c2eaa11f873084d57cc-ffca95872e09]
Executing command: sh -c echo $$ > '/home/jenkins/workspace/copy-of-sw@tmp/durable-70d741ab/pid'; jsc=durable-e271a92b5b252a3996e9c5847d57b90c; JENKINS_SERVER_COOKIE=$jsc '/home/jenkins/workspace/copy-of-sw@tmp/durable-70d741ab/script.sh' > '/home/jenkins/workspace/copy-of-sw@tmp/durable-70d741ab/jenkins-log.txt' 2>&1; echo $? > '/home/jenkins/workspace/copy-of-sw@tmp/durable-70d741ab/jenkins-result.txt'
sh-4.2# cd /home/jenkins/workspace/copy-of-sw
+ docker tag staffrostering/shiftwork:1.0.134 172.30.254.212:80/staffrostering/shiftwork:1.0.134
sh-4.2# exit
exit
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
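For reference, one common way to make the Docker daemon reachable from a build container when using the newer kubernetes-plugin style is to mount the host's Docker socket into the pod. This is only a sketch of that idea, not what the fabric8 pipeline library actually does internally; the label, container image, and volume paths here are assumptions:

```groovy
// Sketch (assumed names/images): mount the host Docker socket so that
// `docker tag` / `docker push` inside the container can reach a daemon.
podTemplate(label: 'docker-builder',
    containers: [
        containerTemplate(name: 'maven', image: 'maven:3', ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]) {
    node('docker-builder') {
        container('maven') {
            // Fails with "Cannot connect to the Docker daemon" if the socket is absent
            sh 'docker version'
        }
    }
}
```

Note this requires the pod to be allowed to mount host paths, which on OpenShift ties back into the SCC questions discussed below.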
Trying to get a clear understanding of the execution flow: is it using S2I?
I'm a little confused.
I have added:
def s2iMode = flow.isOpenShiftS2I()
echo "s2i mode: ${s2iMode}"
Which displays s2i mode: false
But I also see:
Starting S2I Java Build .....
S2I binary build from fabric8-maven-plugin detected
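The apparent contradiction may be because `flow.isOpenShiftS2I()` (from the fabric8 pipeline library) and the fabric8-maven-plugin detect the build strategy independently. If that is right, one way to make the two views agree is to force the plugin's mode and strategy explicitly; the property names below are how I recall them from the fabric8-maven-plugin docs, so treat this as an assumption rather than a confirmed fix:

```groovy
// Assumption: fabric8-maven-plugin honours -Dfabric8.mode and -Dfabric8.build.strategy.
// Forcing these makes the plugin's build strategy explicit instead of auto-detected.
sh 'mvn clean install fabric8:build -Dfabric8.mode=openshift -Dfabric8.build.strategy=s2i'
```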
@rawlingsj - I have been trying to apply some of your suggestions from your replies to my thread on https://groups.google.com/forum/#!msg/fabric8/IxpDpNLKBKo/g3c-t7m0GgAJ since we are getting this error again. However, I cannot seem to get past it.
I have tried modifying the Jenkinsfile - and probably made a complete ballsup of it - https://github.com/rawlingsj/shiftwork/commit/c5cee732de88ed76d231610c983159ad7d23b569 - in an attempt to set the required values so a connection to the Docker daemon can be made.
Confirm that env vars are being set using the fabric8 syntax.
This is strange. In https://github.com/rawlingsj/shiftwork/commit/e24021e643f25e0b47db11d8ac114a050c29c518 I added echo "DOCKER_CONFIG is :${env.DOCKER_CONFIG}"
but it outputs DOCKER_CONFIG is :null.
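A null `DOCKER_CONFIG` is not necessarily fatal on its own: as far as I know, the standard docker CLI behaviour is to fall back to `$HOME/.docker` when the variable is unset. The sketch below just illustrates that resolution rule; it means that if the pipeline writes registry credentials anywhere else, the CLI will not find them:

```shell
# Sketch (assumes standard docker CLI behaviour): how the config
# directory is resolved when DOCKER_CONFIG is unset or null.
config_dir="${DOCKER_CONFIG:-$HOME/.docker}"
echo "docker config dir: $config_dir"
```

So the `null` echoed above would simply mean the default `$HOME/.docker` is in effect inside that container.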
Problem
When running a Jenkins job, job fails with:
Describe Pod
Describe Jenkins SCC
Jenkinsfile
Attempted fixes
oadm policy add-scc-to-user anyuid system:serviceaccount:default:jenkins // didn't work
oadm policy add-scc-to-user privileged system:serviceaccount:default:jenkins // didn't work
oadm policy add-scc-to-group jenkins root // didn't work