arschles closed this issue 7 years ago.
The same flake appears to be in https://service-catalog-jenkins.appspot.com/job/service-catalog-PR-testing2/142/console, except the controller pod was reported as not up successfully:
+ error_exit 'Controller pod did not come up successfully.'
+ echo '/var/lib/jenkins/workspace/service-catalog-PR-testing2/src/github.com/kubernetes-incubator/service-catalog/contrib/hack/test_walkthrough.sh: line 87: Controller pod did not come up successfully. (exit 1)'
/var/lib/jenkins/workspace/service-catalog-PR-testing2/src/github.com/kubernetes-incubator/service-catalog/contrib/hack/test_walkthrough.sh: line 87: Controller pod did not come up successfully. (exit 1)
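For context, the failing check above is a readiness wait in test_walkthrough.sh. A minimal sketch of that pattern is below; note that `pod_is_ready` is a stand-in stub (the real script would query kubectl), and `error_exit`'s exact implementation isn't shown in the logs, so this is an assumption about its shape:

```shell
#!/usr/bin/env bash
# Sketch only: pod_is_ready is a stub; a real script would run something
# like: kubectl get pod "$1" -o jsonpath='{.status.phase}' | grep -q Running

# Print an error in the style seen in the Jenkins logs, then fail.
error_exit() {
  echo "${0}: ${1} (exit 1)" >&2
  exit 1
}

# Stub readiness check (hypothetical): "ready" when a marker file exists.
pod_is_ready() {
  [[ -f "/tmp/${1}.ready" ]]
}

# Poll the readiness check once per second, up to $2 attempts.
wait_for_pod() {
  local pod="$1" retries="${2:-30}"
  for ((i = 0; i < retries; i++)); do
    pod_is_ready "$pod" && return 0
    sleep 1
  done
  return 1
}

# Demo only: simulate the pod becoming ready so the wait succeeds.
touch /tmp/controller.ready
wait_for_pod controller 3 || error_exit 'Controller pod did not come up successfully.'
echo "Controller pod is up."
```

When the loop exhausts its retries without the pod reporting ready, `error_exit` fires and the script exits 1, which is the failure mode the Jenkins console shows.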
Is this issue more focused on pods, or on the random server issues we seem to be having?
I've seen a lot of local integration test failures lately, due to timeouts waiting for servers to come up. Some of the time it seems like the server fails to respond after what should have been successful use, meaning it's somehow becoming uncontactable after running for a little while.
@MHBauer I've seen the same flakes in Travis's integration tests. Those do not run our components inside pods, AFAIK, correct?
Correct. I have not seen those myself, hence my description. Sounds like there's some general weirdness then.
Definitely general weirdness. There seem to be three different things being reported in this issue:
I think some of the Jenkins changes that went in the other day should alleviate the first two. The integration test failures are a separate issue.
I'm going to close this issue for now and make a separate one just for the integration tests timing out. If we see any non-integration-test flakes, then let's make separate issues for them as they come.
Jenkins build #106, which started from https://github.com/kubernetes-incubator/service-catalog/pull/642, hit a flake. The offending logs seem to be these:
cc/ @kibbles-n-bytes