This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 30 days if no further activity occurs.
The test below, on IBM Z, is failing with the same error.
Test: tests/e2e/scale/noobaa/test_scale_namespace_crd.py::TestScaleNamespace::test_scale_namespace_bucket_creation_crd[Scale-AWS-Cache]
E AssertionError: aws-ns-store-bc448608fd38484bafa683f5b0b did not reach a healthy state within 180 seconds.
I see a similar error for RGW-related tests as well.
Test: tests/e2e/scale/noobaa/test_scale_namespace_crd.py::TestScaleNamespace::test_scale_namespace_bucket_creation_crd[Scale-RWG-RGW-Multi]
), f"{self.name} did not reach a healthy state within {timeout} seconds."
E AssertionError: rgw-ns-store-ac68649dfb0449e2b8beba05b07 did not reach a healthy state within 180 seconds.
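For anyone reproducing this outside the test suite, the failing assertion is essentially a poll on the NamespaceStore CR phase. A rough equivalent using the oc CLI directly (a sketch only; the helper name, timeout, and the store name in the usage example are illustrative and not the ocs-ci implementation):

```python
# Sketch of the health check the test performs: poll the NamespaceStore CR
# until .status.phase reports "Ready", or give up after the timeout.
import subprocess
import time


def wait_for_namespacestore_ready(name, namespace="openshift-storage", timeout=180, interval=5):
    """Return True if the NamespaceStore reaches phase Ready within timeout seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(
            [
                "oc", "get", "namespacestore", name,
                "-n", namespace,
                "-o", "jsonpath={.status.phase}",
            ],
            capture_output=True,
            text=True,
        )
        # Right after creation the CR may have no .status yet, so stdout can be empty.
        if result.returncode == 0 and result.stdout.strip() == "Ready":
            return True
        time.sleep(interval)
    return False


if __name__ == "__main__":
    # Store name taken from the failure above.
    print(wait_for_namespacestore_ready("aws-ns-store-bc448608fd38484bafa683f5b0b"))
```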
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
test_namespace_store_creation_rules is a UI test that verifies that a second namespacestore with the same name cannot be created. The test fails in the preparation step, which creates the first namespacestore on AWS so that the UI can then be checked to confirm a duplicate cannot be created. The error is repetitive.
2023-11-03 15:09:12 13:09:11 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: noobaa.io/v1alpha1
2023-11-03 15:09:12 kind: NamespaceStore
2023-11-03 15:09:12 metadata:
2023-11-03 15:09:12   finalizers:
2023-11-03 15:09:12   - noobaa.io/finalizer
2023-11-03 15:09:12   labels:
2023-11-03 15:09:12     app: noobaa
2023-11-03 15:09:12   name: aws-ns-store-c522f540f13c44bb888fa40bff1
2023-11-03 15:09:12   namespace: openshift-storage
2023-11-03 15:09:12 spec:
2023-11-03 15:09:12   awsS3:
2023-11-03 15:09:12     secret:
2023-11-03 15:09:12       name: secret-cldmgr-aws-c1ef008da26344d8a6f116
2023-11-03 15:09:12       namespace: openshift-storage
2023-11-03 15:09:12     targetBucket: aws-uls-c9bb04f0f54748a789b821ccaf463c24
2023-11-03 15:09:12   type: aws-s3
...
2023-11-03 15:09:23 13:09:23 - MainThread - ocs_ci.utility.utils - ERROR - Exception raised during iteration: 'status'
2023-11-03 15:09:23 Traceback (most recent call last):
2023-11-03 15:09:23   File "/home/jenkins/workspace/qe-deploy-ocs-cluster-prod/ocs-ci/ocs_ci/utility/utils.py", line 1290, in iter
2023-11-03 15:09:23     yield self.func(*self.func_args, **self.func_kwargs)
2023-11-03 15:09:23   File "/home/jenkins/workspace/qe-deploy-ocs-cluster-prod/ocs-ci/ocs_ci/ocs/resources/namespacestore.py", line 151, in oc_verify_health
2023-11-03 15:09:23     OCP(
2023-11-03 15:09:23 KeyError: 'status'
...
2023-11-03 15:12:11 13:12:10 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 5 seconds before next iteration
2023-11-03 15:12:15 13:12:15 - MainThread - ocs_ci.ocs.resources.namespacestore - ERROR - aws-ns-store-c522f540f13c44bb888fa40bff1 did not reach a healthy state within 180 seconds.
...
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/9677/consoleFull
https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/557/16404/785603/785704/785705/log?logParams=history%3D785705%26page.page%3D1
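The KeyError suggests the poll in oc_verify_health reads the CR's 'status' block before the NooBaa operator has populated it. A minimal sketch of a more tolerant check, assuming the CR comes back as a plain dict from an `oc get ... -o json` style call (the helper name is hypothetical, not the actual ocs-ci code):

```python
# Hypothetical defensive check; the real logic lives in
# ocs_ci/ocs/resources/namespacestore.py::oc_verify_health.
def namespacestore_is_ready(cr_data: dict) -> bool:
    """Treat a missing .status as "not ready yet" instead of raising KeyError."""
    status = cr_data.get("status") or {}
    return status.get("phase") == "Ready"
```

With a guard like this, an early poll simply returns False and the retry loop keeps waiting, instead of aborting on the KeyError and masking whether the store would have become healthy.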