Closed: prietyc123 closed this issue 4 years ago
How is this flake when it is failing CONSISTENTLY? :-)
This looks more like an issue with the tests or infrastructure than with odo.
/remove-kind bug
/kind failing-test
> How is this flake when it is failing CONSISTENTLY? :-)
I have observed it failing consistently on the 4.5 cluster, but on other clusters it sometimes passes and sometimes fails. It mostly passes, as you can see in PR https://github.com/openshift/release/pull/9431.
That is why I am considering it a flake, though I am not sure what is going on. Anyway, I will update the issue title.
So far I have observed that this is specific to the 4.5 cluster. I have never seen it pass on 4.5, and I have not seen this failure on any other cluster. Failure logs can be seen at https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/pr-history/?org=openshift&repo=release&pr=9431
On the 4.5 cluster (multistage), the closure at https://github.com/openshift/odo/blob/master/tests/integration/component.go#L506-L508 returns false every time because it hits an error that is not nil, even though the calling function CheckCmdOpInRemoteCmpPod does not pass any error value to it. The closure does not receive the nil that the calling function explicitly passes, even though the caller has an additional check that verifies the error before sending nil.
exec output for s2i image package: -rw-rw-r--. 1 1000630000 root 326 Jul 17 08:57 /opt/app-root/src/package.json
exec error for s2i image package: I0717 08:57:47.177248 10879 request.go:621] Throttling request took 1.095986377s, request: GET:https://api.ci-op-0lpgbttl-712e6.origin-ci-int-aws.dev.rhcloud.com:6443/apis/apiextensions.k8s.io/v1?timeout=32s
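For context, here is a minimal Go sketch of the pattern described above. The names are illustrative and do not match odo's actual test helpers; it only assumes that the helper forwards stdout plus any exec error to a caller-supplied closure:

```go
package main

import "fmt"

// runRemoteCmd stands in for the exec into the component pod. On the 4.5
// cluster the command output itself looked fine, but the run also surfaced
// a client-side throttling message as a non-nil error.
func runRemoteCmd() (string, error) {
	stdout := "-rw-rw-r--. 1 1000630000 root 326 Jul 17 08:57 /opt/app-root/src/package.json"
	return stdout, fmt.Errorf("Throttling request took 1.095986377s")
}

// checkCmdOpInRemoteCmpPod mimics the helper: it runs the command and hands
// stdout and the error to the closure, which decides pass/fail.
func checkCmdOpInRemoteCmpPod(check func(stdout string, err error) bool) bool {
	stdout, err := runRemoteCmd()
	return check(stdout, err)
}

func main() {
	ok := checkCmdOpInRemoteCmpPod(func(stdout string, err error) bool {
		if err != nil {
			// This is the branch the failing test keeps hitting on 4.5:
			// the closure sees a non-nil error even though the command
			// produced the expected output.
			return false
		}
		return stdout != ""
	})
	fmt.Println("check passed:", ok)
}
```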
The test code reports both stdout and stderr. In stderr, the client is being throttled, possibly because of excessive API calls specific to the 4.5 multistage test infrastructure, according to one testplatform communication. The Slack message poster said he simply reduced the excessive API requests to make it work. I have no idea how the API calls are handled in odo. Anyway, I will enable verbose logging to get more information. I also found an article, https://access.redhat.com/solutions/3664861, about a storage case hitting the same issue we are facing, with a resolution provided, but I don't find it helpful for our scenario.
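The "Throttling request took ..." line in the stderr above comes from client-go's client-side rate limiter. As a hedged sketch (not odo's actual client setup), raising the QPS/Burst limits on the rest.Config, or reducing the number of API calls made, is the usual way to avoid that message:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with higher client-side rate limits than the
// client-go defaults (QPS=5, Burst=10), which discovery against a 4.x cluster
// with many API groups can easily exceed.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	config.QPS = 50
	config.Burst = 100
	return kubernetes.NewForConfig(config)
}

func main() {
	// Uses the default ~/.kube/config path just to show the call pattern.
	if _, err := newClient(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}
```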
Enabling verbose logging did not help much. I believe I have provided all the information about the failure; I need developer eyes to take it further with the information already provided.
To debug the failure, follow these steps:
Ping @kadel @girishramnani
I learned from a Kubernetes article that too little memory may cause throttling errors. Checking with more requested memory.
I also tried increasing the resources (https://github.com/openshift/release/pull/9431/commits/397964cbb6680964acc5c148c36c2deadf505d4d), but it did not help (see the PR history). Overall, we need to follow https://github.com/openshift/odo/issues/3547#issuecomment-660114118 to proceed further on this, and I need a developer to look into it.
/kind bug
What versions of software are you using?
Operating System: All supported
Output of odo version: master

How did you run odo exactly?
Running tests on the openshift/release repo PR https://github.com/openshift/release/pull/9431
Actual behavior
Expected behavior
It should get pushed.
Any logs, error output, etc?
More info: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/pr-logs/pull/openshift_release/9431/rehearse-9431-periodic-ci-openshift-odo-master-v4.5-integration-e2e-periodic-steps/1282336940108025856#1:build-log.txt%3A646