I see this issue here as well: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/738/24928/1208693/1208719/log. The problem seems to be specific to vSphere; on AWS the same test passed, as you can see here: https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/738/24837/1203821/1203848/log.
```
self = <tests.functional.z_cluster.nodes.test_node_replacement_proactive.TestNodeReplacementWithIO object at 0x7f50dcace7f0>
pvc_factory = <function pvc_factory_fixture.<locals>.factory at 0x7f50b2b10160>
pod_factory = <function pod_factory_fixture.<locals>.factory at 0x7f50b2b108b0>
dc_pod_factory = <function dc_pod_factory.<locals>.factory at 0x7f50bc404040>
bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7f50b1190550>
rgw_bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7f50b1190dc0>

    def test_nodereplacement_proactive_with_io_running(
        self,
        pvc_factory,
        pod_factory,
        dc_pod_factory,
        bucket_factory,
        rgw_bucket_factory,
    ):
        """
        Knip-894 Node Replacement proactive when IO running in the background
>       self.sanity_helpers.health_check(tries=120)

tests/functional/z_cluster/nodes/test_node_replacement_proactive.py:247:
ocs_ci/helpers/sanity_helpers.py:51: in health_check
    ceph_health_check(
ocs_ci/utility/utils.py:2396: in ceph_health_check
    return retry(
ocs_ci/utility/retry.py:49: in f_retry
    return f(*args, **kwargs)

namespace = 'openshift-storage'

    def ceph_health_check_base(namespace=None):
        """
        Exec `ceph health` cmd on tools pod to determine health of cluster.
E       ocs_ci.ocs.exceptions.CephHealthException: Ceph cluster health is not OK. Health: HEALTH_WARN Degraded data redundancy: 1201875/5833671 objects degraded (20.602%), 20 pgs degraded, 20 pgs undersized
```
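Not a fix, but for manual triage: per the traceback, the failing call is just `ceph_health_check` retried through `ocs_ci/utility/retry.py`, i.e. it polls `ceph health` on the tools pod until it reports HEALTH_OK or the tries run out. Below is a minimal standalone sketch of the same polling loop, assuming the default rook-ceph tools pod label `app=rook-ceph-tools` and an `oc` session already logged in to the affected cluster; it is not the ocs-ci implementation, just an approximation for watching whether the degraded PGs recover.

```python
import subprocess
import time

NAMESPACE = "openshift-storage"
TOOLS_SELECTOR = "app=rook-ceph-tools"  # assumption: default rook-ceph tools pod label


def tools_pod_name():
    # Look up the rook-ceph tools pod by label.
    out = subprocess.run(
        ["oc", "-n", NAMESPACE, "get", "pods", "-l", TOOLS_SELECTOR,
         "-o", "jsonpath={.items[0].metadata.name}"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()


def ceph_health(pod):
    # Run `ceph health` inside the tools pod, the same command the test wraps.
    out = subprocess.run(
        ["oc", "-n", NAMESPACE, "exec", pod, "--", "ceph", "health"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()


def wait_for_health_ok(tries=120, delay=5):
    # Rough equivalent of sanity_helpers.health_check(tries=120):
    # poll until HEALTH_OK or give up after `tries` attempts.
    pod = tools_pod_name()
    for attempt in range(1, tries + 1):
        status = ceph_health(pod)
        print(f"[{attempt}/{tries}] {status}")
        if status.startswith("HEALTH_OK"):
            return True
        time.sleep(delay)
    return False


if __name__ == "__main__":
    raise SystemExit(0 if wait_for_health_ok() else 1)
```

On the vSphere run the warning reads like recovery/backfill after the node replacement simply had not finished within the retry window, so watching the output above (or `ceph -s` / `ceph health detail` in the same pod) should show whether the degraded PG count is actually shrinking or the cluster is stuck.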