am-agrawa opened this issue 1 year ago (status: Open)
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 30 days if no further activity occurs.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Recent failure on IBM Cloud: https://url.corp.redhat.com/9cbbc8c (ODF 4.15.7-2)
```
                                     0 (0%)       0 (0%)
  hugepages-2Mi                      0 (0%)       0 (0%)
Events:
  Type     Reason                    Age                 From             Message
  ----     ------                    ----                ----             -------
  Normal   Starting                  23m                 kube-proxy
  Normal   NodeNotReady              27m                 node-controller  Node 10.243.128.41 status is now: NodeNotReady
  Normal   Starting                  23m                 kubelet          Starting kubelet.
  Normal   NodeAllocatableEnforced   23m                 kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory   23m (x8 over 23m)   kubelet          Node 10.243.128.41 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure     23m (x8 over 23m)   kubelet          Node 10.243.128.41 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID      23m (x7 over 23m)   kubelet          Node 10.243.128.41 status is now: NodeHasSufficientPID

ocs_ci/ocs/node.py:201: ResourceWrongStatusException
```
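Judging from the events, the node does appear to come back on its own (the kubelet restarts and the Ready-related conditions return roughly four minutes after NodeNotReady), so the status wait in ocs_ci/ocs/node.py presumably times out before the recovery completes on ROKS. For anyone triaging, here is a minimal sketch of this kind of Ready-condition wait, assuming `oc` is on PATH with a valid kubeconfig; this is a hypothetical illustration, not the actual ocs_ci code:

```python
# Hypothetical sketch only -- the real logic lives in ocs_ci/ocs/node.py.
import subprocess
import time


class ResourceWrongStatusException(Exception):
    """Raised when a resource does not reach the expected status in time."""


def wait_for_node_ready(node_name, timeout=600, interval=30):
    """Poll a node's Ready condition via `oc` until it is True or we time out."""
    jsonpath = '{.status.conditions[?(@.type=="Ready")].status}'
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            ["oc", "get", "node", node_name, "-o", f"jsonpath={jsonpath}"],
            capture_output=True,
            text=True,
        )
        if result.returncode == 0 and result.stdout.strip() == "True":
            return
        time.sleep(interval)
    raise ResourceWrongStatusException(
        f"Node {node_name} did not reach Ready within {timeout}s"
    )


if __name__ == "__main__":
    # Node name taken from the events above.
    wait_for_node_ready("10.243.128.41")
```

If the ROKS node-controller simply takes longer to flip nodes back to Ready after a short network outage, extending the wait timeout for IBM Cloud deployments would be one obvious direction for the fix.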
Another failure: https://url.corp.redhat.com/12be82b
Both `test_all_worker_nodes_short_network_failure[CephBlockPool]` and `test_all_worker_nodes_short_network_failure[CephFileSystem]` are failing on ODF 4.15.7-2 over IBM ROKS.
This issue has been split out of issue #6840 so the failure on the IBM deployment can be tracked and fixed separately.
https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/362/7573/332097/332165/332166/log