Closed: paoloantinori closed this issue 6 years ago.
@paoloantinori could you check if the build pod is running? I have run into this when the build takes too long to download the Maven artifacts and the scheduler deletes the pod, though that could just be due to my Internet connection. Also, check the pod logs for the integration (if it is running) and the output of oc get events
to see if there is something there.
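For example, something along these lines (a sketch only; substitute the build pod name that oc get pods shows for your integration):
$ oc get pods
$ oc logs <build-pod-name>
$ oc get events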
21m 21m 1 first-2 ReplicationController Normal SuccessfulDelete replication-controller Deleted pod: first-2-hhbrs
20m 20m 1 second-1-build Pod Normal Scheduled default-scheduler Successfully assigned second-1-build to localhost
19m 19m 1 second-1-build Pod spec.containers{sti-build} Normal Pulled kubelet, localhost Container image "openshift/origin-sti-builder:v3.6.0" already present on machine
19m 19m 1 second-1-build Pod spec.containers{sti-build} Normal Started kubelet, localhost Started container
19m 19m 1 second-1-build Pod spec.containers{sti-build} Normal Created kubelet, localhost Created container
19m 19m 1 second-1 Build Normal BuildStarted build-controller Build myproject/second-1 is now running
19m 19m 9 second DeploymentConfig Warning DeploymentCreationFailed deploymentconfig-controller Couldn't deploy version 1: ReplicationController "second-1" is invalid: spec.template.spec.containers[0].image: Required value
19m 19m 1 second-1 Build Normal BuildCompleted build-controller Build myproject/second-1 completed successfully
19m 19m 1 second-1-deploy Pod Normal Scheduled default-scheduler Successfully assigned second-1-deploy to localhost
19m 19m 1 second DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "second-1" for version 1
19m 19m 9 second DeploymentConfig Normal DeploymentAwaitingCancellation deploymentconfig-controller Deployment of version 2 awaiting cancellation of older running deployments
19m 19m 1 second DeploymentConfig Normal DeploymentCancelled deploymentconfig-controller Cancelled deployment "second-1" superceded by version 2
19m 19m 1 second-1-deploy Pod spec.containers{deployment} Normal Created kubelet, localhost Created container
19m 19m 1 second-1-deploy Pod spec.containers{deployment} Normal Pulled kubelet, localhost Container image "openshift/origin-deployer:v3.6.0" already present on machine
19m 19m 1 second-1-deploy Pod spec.containers{deployment} Normal Started kubelet, localhost Started container
19m 19m 1 second-2-deploy Pod Normal Scheduled default-scheduler Successfully assigned second-2-deploy to localhost
19m 19m 1 second DeploymentConfig Normal DeploymentCreated deploymentconfig-controller Created new replication controller "second-2" for version 2
19m 19m 1 second DeploymentConfig Normal RolloutCancelled deployer-controller Rollout for "myproject/second-1" cancelled
19m 19m 1 second-2-deploy Pod spec.containers{deployment} Normal Created kubelet, localhost Created container
19m 19m 1 second-2-deploy Pod spec.containers{deployment} Normal Pulled kubelet, localhost Container image "openshift/origin-deployer:v3.6.0" already present on machine
19m 19m 1 second-2 ReplicationController Normal SuccessfulCreate replication-controller Created pod: second-2-n486q
19m 19m 1 second-2-deploy Pod spec.containers{deployment} Normal Started kubelet, localhost Started container
19m 19m 1 second-2-n486q Pod Normal Scheduled default-scheduler Successfully assigned second-2-n486q to localhost
18m 19m 4 second-2-n486q Pod spec.containers{second} Normal Pulled kubelet, localhost Successfully pulled image "172.30.1.1:5000/myproject/second@sha256:68cd2dad16d40d2694ef3a7c9795175c01812aafeff82f79b7697c0c2d9e0258"
18m 19m 4 second-2-n486q Pod spec.containers{second} Normal Pulling kubelet, localhost pulling image "172.30.1.1:5000/myproject/second@sha256:68cd2dad16d40d2694ef3a7c9795175c01812aafeff82f79b7697c0c2d9e0258"
18m 19m 4 second-2-n486q Pod spec.containers{second} Normal Started kubelet, localhost Started container
18m 19m 4 second-2-n486q Pod spec.containers{second} Normal Created kubelet, localhost Created container
14m 19m 25 second-2-n486q Pod Warning FailedSync kubelet, localhost Error syncing pod
18m 19m 5 second-2-n486q Pod spec.containers{second} Warning BackOff kubelet, localhost Back-off restarting failed container
17m 17m 1 second-1-build Pod Warning FailedMount kubelet, localhost Unable to mount volumes for pod "second-1-build_myproject(7870a0ab-b374-11e7-9d0b-ee75fe781227)": timeout expired waiting for volumes to attach/mount for pod "myproject"/"second-1-build". list of unattached/unmounted volumes=[docker-socket builder-dockercfg-1jg3l-push builder-token-t2nlf]
17m 17m 1 second-1-build Pod Warning FailedSync kubelet, localhost Error syncing pod
9m 9m 1 second-2 ReplicationController Normal SuccessfulDelete replication-controller Deleted pod: second-2-n486q
9m 9m 1 second DeploymentConfig Normal ReplicationControllerScaled deploymentconfig-controller Scaled replication controller "second-2" from 842697952144 to 0
$ oc log second-2-deploy
W1017 22:13:47.726157 6752 cmd.go:373] log is DEPRECATED and will be removed in a future version. Use logs instead.
--> Scaling second-2 to 1
--> Waiting up to 10m0s for pods in rc second-2 to become ready
error: update acceptor rejected second-2: pods for rc "second-2" took longer than 600 seconds to become ready
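To see why the pods in rc second-2 never became ready, the usual next step would be something along these lines (a sketch; the pod name is taken from the events above):
$ oc get pods
$ oc describe pod second-2-n486q
$ oc logs second-2-n486q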
Thanks @zregvart, slow internet could be the issue here. I had problems today: initially even the redeploy of Syndesis itself was failing, with errors caused by slow Docker image fetches.
I could work around that with a local, prepopulated Docker registry, but I don't know how I could bypass this one, and I probably don't have enough knowledge about what happens where and when in Syndesis.
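For illustration, prepopulating boils down to having the images the events above reference already available locally before the deploy, e.g. (the exact image list may vary):
$ docker pull openshift/origin-sti-builder:v3.6.0
$ docker pull openshift/origin-deployer:v3.6.0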
I have retriggered the deploy with:
oc deploy second --retry
And I now notice (though maybe it was already there before) a pod with the following error:
22:23 $ oc logs second-2-f9ws4
Starting the Java application using /opt/run-java/run-java.sh ...
ERROR: Neither $JAVA_MAIN_CLASS nor $JAVA_APP_JAR is set and 0 found in /deployments (1 expected)
Despite this, the logs for the build pod say:
Receiving source from STDIN as archive ...
Pulling image "172.30.1.1:5000/myproject/second:latest" ...
pulling image error : Error: image myproject/second:latest not found
==================================================================
Starting S2I Java Build .....
S2I source build with plain binaries detected
Copying binaries from /tmp/src to /deployments ...
Checking for SpringBoot archive...
... done
Pushing image 172.30.1.1:5000/myproject/second:latest ...
Pushed 0/24 layers, 0% complete
Pushed 1/24 layers, 4% complete
Push successful
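One way to check what actually ended up in the image would be something like this (a sketch; dc/second is the deployment config created for the integration, and /deployments is the path from the build log above):
$ oc debug dc/second -- ls -la /deployments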
Is this a genuine bug or am I possibly doing something stupid in the UI?
@paoloantinori I don't think those two integrations work at all; they come from demo-data.json and probably need to be updated to the latest schema. Perhaps it would make more sense to look at the integration you've created (-KwfsqOwzrhGzpr3fy5o). Also, you could be hitting the limit of one integration per install that we enforce for the tech preview.
On a freshly deployed instance of Syndesis, I can't successfully deploy an integration.
The integration is a simple "from timer to HTTP endpoint with GET".
The UI behaves correctly, but the integration stays in "Pending" status forever.
Logs of the syndesis-rest controller show just this: