Closed: ypersky1980 closed this issue 7 months ago.
ocs_ci/helpers/helpers.py:123: ResourceWrongStatusException
ocs_ci.ocs.exceptions.ResourceWrongStatusException: Resource pvc-test-461975600e394995aabe3963ab8ead1 describe output: Name: pvc-test-461975600e394995aabe3963ab8ead1
Namespace: namespace-pas-test-dd6d6842c94d4010a0926
StorageClass: storageclass-test-cephfs-d6781b17d9e142b
Status: Pending
Volume:
Labels:
Access Modes:
VolumeMode: Filesystem
Used By:
Normal Provisioning 63s openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5d7f5644ff-2v5bx_ba13c42f-d69e-4560-8751-fbf62e079909 External provisioner is provisioning volume for claim "namespace-pas-test-dd6d6842c94d4010a0926/pvc-test-461975600e394995aabe3963ab8ead1" Normal ExternalProvisioning 3s (x6 over 63s) persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'openshift-storage.cephfs.csi.ceph.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
The above are the results of the pod reattach time test in 4.15.
Rerun on VMware LSO and, if it passes, close the issue.
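For context, the ResourceWrongStatusException above is the kind of error raised when a resource (here the PVC, stuck in Pending) fails to reach its expected status before a timeout expires. A minimal sketch of such a status-polling check, with hypothetical helper names rather than the actual ocs_ci helpers:

```python
import time

class ResourceWrongStatusException(Exception):
    """Raised when a resource does not reach the expected status in time."""

def wait_for_status(get_status, expected, timeout=60, interval=5):
    # Poll the resource status until it matches `expected` or the timeout expires.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == expected:
            return True
        time.sleep(interval)
    raise ResourceWrongStatusException(
        f"Resource did not reach status '{expected}' within {timeout}s"
    )
```

In the failing run above, the PVC stayed Pending for the whole window, so a check of this shape fires and dumps the describe output.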
This is the run that was initiated on IBM Cloud 4.14.
The following test cases failed:
Error:
ocs_ci.ocs.exceptions.PerformanceException: Pod creation time is 87.04964661598206 and greater than 70 seconds
Error:
ocs_ci.ocs.exceptions.PerformanceException: Pod creation time is 491.2622141838074 and greater than 420 seconds
Conclusion: submit a PR to increase the accepted pod creation time.
After the PR is merged, compare the results to 4.13 and consider opening a BZ.
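The PerformanceException messages above come from a plain threshold assertion on the measured pod creation time; raising the accepted value is what the conclusion proposes. A hedged sketch of that kind of check (illustrative names, not the exact ocs_ci code):

```python
class PerformanceException(Exception):
    """Raised when a measured time exceeds its accepted threshold."""

def check_pod_creation_time(measured_seconds, accepted_seconds):
    # Fail the test when the measured creation time exceeds the accepted limit,
    # mirroring the error text seen in the logs above.
    if measured_seconds > accepted_seconds:
        raise PerformanceException(
            f"Pod creation time is {measured_seconds} "
            f"and greater than {accepted_seconds} seconds"
        )
```

With the 87.05 s and 491.26 s measurements above, the 70 s and 420 s limits both trip; increasing the accepted values makes those runs pass.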
CephFS test case: pod reattach with many files (800K) fails with:
Warning FailedAttachVolume 10m attachdetach-controller Multi-Attach error for volume "pvc-ecf47a0d-cd7a-4ba9-a2da-26cd5cea2cec" Volume is already exclusively attached to one node and can't be attached to another Normal SuccessfulAttachVolume 9m50s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ecf47a0d-cd7a-4ba9-a2da-26cd5cea2cec"
Checking whether pod DELETION indeed works fine when the pod holds many files.
Full error:
ocs_ci.ocs.exceptions.ResourceWrongStatusException: Resource pod-test-cephfs-2c0e9b8c65f240a098f6181c describe output: Name: pod-test-cephfs-2c0e9b8c65f240a098f6181c
Namespace: namespace-pas-test-dce771fe9db54be9a0ff0
Priority: 0
Service Account: default
Node: compute-1/10.1.160.247
Start Time: Mon, 18 Mar 2024 12:02:48 +0000
Labels:
Image: quay.io/ocsci/perf:latest
Image ID:
Port:
Warning FailedAttachVolume 10m attachdetach-controller Multi-Attach error for volume "pvc-ecf47a0d-cd7a-4ba9-a2da-26cd5cea2cec" Volume is already exclusively attached to one node and can't be attached to another Normal SuccessfulAttachVolume 9m50s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-ecf47a0d-cd7a-4ba9-a2da-26cd5cea2cec" Normal AddedInterface 9m48s multus Add eth0 [10.128.2.89/23] from ovn-kubernetes Normal Pulled 106s (x5 over 9m48s) kubelet Container image "quay.io/ocsci/perf:latest" already present on machine Warning Failed 106s (x4 over 7m48s) kubelet Error: context deadline exceeded
https://github.com/red-hat-storage/ocs-ci/pull/9538 - PR with a fix.
The PR was merged, therefore closing the issue.
IBM Cloud 4.14 job:
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/33735/testReport/
Failing test cases:
tests.cross_functional.performance.csi_tests.test_pod_reattachtime.TestPodReattachTimePerformance.test_pod_reattach_time_performance[CephFileSystem-3-120-70]
tests.cross_functional.performance.csi_tests.test_pod_reattachtime.TestPodReattachTimePerformance.test_pod_reattach_time_performance[CephFileSystem-13-600-420]
The accepted pod creation time should be increased and, after that, please consider opening a performance BZ.
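The bracketed suffixes in the failing test names above are pytest parametrize IDs. A hypothetical decoding (the actual meaning of the four values in ocs-ci may differ; here they are assumed to be interface, sample count, timeout, and accepted time):

```python
def param_id(interface, samples, timeout, accepted_time):
    # Builds an ID like "CephFileSystem-3-120-70" from assumed parameters;
    # the last value is the threshold the conclusion proposes to raise.
    return f"{interface}-{samples}-{timeout}-{accepted_time}"

print(param_id("CephFileSystem", 3, 120, 70))
print(param_id("CephFileSystem", 13, 600, 420))
```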