Open · LarsMilland opened this issue 8 years ago
Damn! We saw this before when we moved to the new f-m-p and used a QuickStart that hadn't been upgraded yet. Which QuickStart / project wizard did you select? Could you try the Integration project wizard to see if that works, please?
Hi
As far as I understand how this works, it should not make much difference which quickstart project wizard I try, as I would guess the problem lies in the mavenCanaryRelease Groovy pipeline code, where the Maven Docker builds are tagged and pushed, but not actually pushed on to the OpenShift/Kubernetes cluster-wide Docker registry.
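For reference, the generated Jenkinsfiles all hand off to the same shared step regardless of which wizard produced them. Roughly like this (simplified and written from memory, so the exact contents may differ from what the wizard generates):

// Roughly what the generated Jenkinsfile does; the tagging/pushing of the
// Docker image happens inside the shared mavenCanaryRelease step, not here,
// which is why the choice of quickstart wizard should not matter.
node {
  def canaryVersion = "1.0.${env.BUILD_NUMBER}"

  stage 'Canary release'
  mavenCanaryRelease {
    version = canaryVersion
  }
}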
So with a recent run of the "Camel integration project wizard" I am still getting the same error:
[INFO] Building tar: /opt/jenkins/workspace/workspace/integration/target/docker/172.30.36.155/80/example/integration/1.0.2/tmp/docker-build.tar
[INFO] DOCKER> docker-build.tar: Created [172.30.36.155:80/example/integration:1.0.2] in 121 milliseconds
sh-4.2# exit
exit
[INFO] DOCKER> [172.30.36.155:80/example/integration:1.0.2]: Built image 28e6de31826f
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:01 min
[INFO] Finished at: 2016-08-08T10:16:41+00:00
[INFO] Final Memory: 64M/658M
[INFO] ------------------------------------------------------------------------
[Pipeline] tagImage
Tagging image:example/integration:1.0.2 with tag:1.0.2.
[Pipeline] }
[Pipeline] // withPod
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Unable to connect to Elasticsearch service. Check Elasticsearch is running in the correct namespace
io.fabric8.docker.client.DockerClientException: Failure executing: POST at: http://localhost/images/example/integration:1.0.2/tag?tag=1.0.2&force=false&repo=172.30.36.155:80/example/integration. Status:404. Message: Not Found. Body: could not find image: no such id: example/integration:1.0.2
at io.fabric8.docker.client.impl.OperationSupport.requestFailure(OperationSupport.java:255)
at io.fabric8.docker.client.impl.OperationSupport.assertResponseCodes(OperationSupport.java:239)
at io.fabric8.docker.client.impl.OperationSupport.handleResponse(OperationSupport.java:191)
at io.fabric8.docker.client.impl.TagImage.withTagName(TagImage.java:88)
at io.fabric8.docker.client.impl.TagImage.withTagName(TagImage.java:33)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution$1.call(TagImageStepExecution.java:50)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution$1.call(TagImageStepExecution.java:41)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:69)
at java.lang.Thread.run(Thread.java:745)
at ......remote call to ec0693bdaddb8(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution.run(TagImageStepExecution.java:41)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution.run(TagImageStepExecution.java:28)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:137)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:113)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:15)
at io.fabric8.kubernetes.pipeline.Kubernetes$TagImage.withTag(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:298)
at io.fabric8.kubernetes.pipeline.Kubernetes.node(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:33)
at io.fabric8.kubernetes.pipeline.Kubernetes$TagImage.withTag(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:297)
at mavenCanaryRelease.call(/var/jenkins_home/workflow-libs/vars/mavenCanaryRelease.groovy:24)
Yeah, it's because the new f-m-p uses different defaults when building the Docker image, defaulting to the artifact group id. That is the image that gets tagged and pushed to the internal fabric8 Docker registry, not the OpenShift one. @rhuss, has anything changed recently? We're seeing these errors again.
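So the local Docker daemon only knows the image under the registry-prefixed name the f-m-p build produced, while the tagImage step asks for the short name and gets the 404. A possible stop-gap (names taken from your log above, purely illustrative) would be to re-tag the image inside the build pod before the tagging step runs:

// Illustrative stop-gap only: give the f-m-p-built image the short name
// that the tagImage pipeline step later looks up on the Docker daemon.
sh 'docker tag 172.30.36.155:80/example/integration:1.0.2 example/integration:1.0.2'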
Hi
I am continuing to try out the features of these Jenkins pipeline scripts, but I cannot quite get them to work on a clustered OpenShift environment, as the Docker images that get produced are only available on the Docker host where the builds run.
The problem is that the tagging/pushing of the Docker images is not working, so the build stops when it tries to complete the pipeline step that, as I understand it, places the built Docker images in the fabric8-provided Docker repository.
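Judging from the POST request in the error output below, the tagImage step appears to be attempting the equivalent of the following against the local Docker daemon (my paraphrase, not something taken from the pipeline source), and the daemon responds that it has no image under the short example/makro:1.0.1 name:

// My reading of what the failing tagImage step attempts on the local Docker
// daemon, based on the 404 error below; purely illustrative.
sh 'docker tag example/makro:1.0.1 172.30.36.155:80/example/makro:1.0.1'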
[INFO] Copying files to /opt/jenkins/workspace/workspace/makro/target/docker/172.30.36.155/80/example/makro/1.0.1/build/maven
[INFO] Building tar: /opt/jenkins/workspace/workspace/makro/target/docker/172.30.36.155/80/example/makro/1.0.1/tmp/docker-build.tar
[INFO] DOCKER> docker-build.tar: Created [172.30.36.155:80/example/makro:1.0.1] in 5 seconds
sh-4.2# exit
exit
[INFO] DOCKER> [172.30.36.155:80/example/makro:1.0.1]: Built image fd4a5b91d825
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 51.830 s
[INFO] Finished at: 2016-08-04T10:03:48+00:00
[INFO] Final Memory: 53M/573M
[INFO] ------------------------------------------------------------------------
[Pipeline] tagImage
Tagging image:example/makro:1.0.1 with tag:1.0.1.
[Pipeline] }
[Pipeline] // withPod
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Unable to connect to Elasticsearch service. Check Elasticsearch is running in the correct namespace
io.fabric8.docker.client.DockerClientException: Failure executing: POST at: http://localhost/images/example/makro:1.0.1/tag?tag=1.0.1&force=false&repo=172.30.36.155:80/example/makro. Status:404. Message: Not Found. Body: could not find image: no such id: example/makro:1.0.1
at io.fabric8.docker.client.impl.OperationSupport.requestFailure(OperationSupport.java:255)
at io.fabric8.docker.client.impl.OperationSupport.assertResponseCodes(OperationSupport.java:239)
at io.fabric8.docker.client.impl.OperationSupport.handleResponse(OperationSupport.java:191)
at io.fabric8.docker.client.impl.TagImage.withTagName(TagImage.java:88)
at io.fabric8.docker.client.impl.TagImage.withTagName(TagImage.java:33)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution$1.call(TagImageStepExecution.java:50)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution$1.call(TagImageStepExecution.java:41)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at hudson.remoting.Engine$1$1.run(Engine.java:69)
at java.lang.Thread.run(Thread.java:745)
at ......remote call to d855f2905ec3d(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution.run(TagImageStepExecution.java:41)
at io.fabric8.kubernetes.pipeline.TagImageStepExecution.run(TagImageStepExecution.java:28)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:137)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:113)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.methodCall(DefaultInvoker.java:15)
at io.fabric8.kubernetes.pipeline.Kubernetes$TagImage.withTag(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:298)
at io.fabric8.kubernetes.pipeline.Kubernetes.node(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:33)
at io.fabric8.kubernetes.pipeline.Kubernetes$TagImage.withTag(jar:file:/var/jenkins_home/plugins/kubernetes-pipeline-steps/WEB-INF/lib/kubernetes-pipeline-steps.jar!/io/fabric8/kubernetes/pipeline/Kubernetes.groovy:297)
at mavenCanaryRelease.call(/var/jenkins_home/workflow-libs/vars/mavenCanaryRelease.groovy:24)
at WorkflowScript.run(WorkflowScript:55)
Would it be possible to change the process to push to the OpenShift Docker registry rather than only the local one or the fabric8-provided one, that is, to finish the build by promoting the produced image to the Kubernetes/OpenShift image registry?
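In other words, something along these lines at the end of the build is what I have in mind (the registry address here is only a placeholder for the cluster's integrated docker-registry service, not something the pipeline defines today):

// Placeholder sketch only: promote the built image to the OpenShift
// integrated registry so every node in the cluster can pull it, instead
// of it existing only on the Docker host that ran the build.
def openshiftRegistry = 'docker-registry.default.svc:5000'   // placeholder address
sh """
  docker tag example/makro:1.0.1 ${openshiftRegistry}/example/makro:1.0.1
  docker push ${openshiftRegistry}/example/makro:1.0.1
"""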
Best regards, Lars Milland