Closed: djoshy closed this pull request 2 weeks ago.
@djoshy: This pull request explicitly references no jira issue.
/test e2e-aws-ovn-single-node-techpreview-serial
/retest-required
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: djoshy, yuqi-zhang
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
@djoshy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-agnostic-ovn-cmd | f12f3681157752fa948c634af99aae118167ec09 | link | false | /test e2e-agnostic-ovn-cmd |
| ci/prow/e2e-metal-ipi-ovn-kube-apiserver-rollout | f12f3681157752fa948c634af99aae118167ec09 | link | false | /test e2e-metal-ipi-ovn-kube-apiserver-rollout |
| ci/prow/e2e-metal-ipi-ovn | f12f3681157752fa948c634af99aae118167ec09 | link | false | /test e2e-metal-ipi-ovn |
Full PR test history. Your PR dashboard.
[ART PR BUILD NOTIFIER]
Distgit: openshift-enterprise-tests
This PR has been included in build openshift-enterprise-tests-container-v4.18.0-202411051207.p0.ge6b7790.assembly.stream.el9. All builds following this will include this PR.
The last set of timeout failures is isolated to SNO. I suspect this is because the controller waits for "one" control plane node to be up to date, which can take an unpredictable amount of time depending on other cluster variables. Increasing the timeout helped a bit, but whenever SNO goes through a "slow" patch, these failures will creep back up. Furthermore, boot image updates are not applicable to SNO, since SNO clusters are never scaled up after installation, so updating the machineset in that case is moot. Let's skip these tests for the SNO case.
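For context, a skip along these lines could look like the Go sketch below. This is not the actual test code from this PR; it assumes the openshift/client-go config clientset is available, and the helper name skipOnSingleNodeTopology and the TestBootImageUpdate test are hypothetical. The idea is simply to check the cluster's controlPlaneTopology and skip when it is SingleReplica.

```go
// skip_sno_sketch_test.go
//
// Minimal sketch (hypothetical names, not the code merged in this PR) of
// skipping a boot image test on single-node (SNO) clusters.
package e2e_test

import (
	"context"
	"testing"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// skipOnSingleNodeTopology skips the calling test when the control plane
// topology is SingleReplica, because SNO clusters are never scaled up after
// installation and boot image (machineset) updates do not apply there.
func skipOnSingleNodeTopology(t *testing.T, client configclient.Interface) {
	t.Helper()

	infra, err := client.ConfigV1().Infrastructures().Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		t.Fatalf("failed to fetch infrastructure object: %v", err)
	}
	if infra.Status.ControlPlaneTopology == configv1.SingleReplicaTopologyMode {
		t.Skip("skipping boot image test: single-node topology, machinesets are never scaled up")
	}
}

func TestBootImageUpdate(t *testing.T) {
	// Build a client from the default kubeconfig location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		t.Fatalf("could not build kubeconfig: %v", err)
	}
	client := configclient.NewForConfigOrDie(config)

	skipOnSingleNodeTopology(t, client)

	// The rest of the boot image update test would run here on multi-node clusters.
}
```

On SNO the test then reports as skipped rather than timing out, which avoids the flaky waits on the single control plane node described above.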