Closed: jilju closed this issue 4 weeks ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 30 days if no further activity occurs.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Faced the issue in https://reportportal-ocs4.apps.ocp-c1.prod.psi.redhat.com/ui/#ocs/launches/465/21131/1009592/1009736/log
In addition to the delay in the deletion of the nfs-provisioner pod, some NFS pods still exist after the test test_nfs_feature_enable_for_ODF_clusters:
pod/csi-nfsplugin-holder-ocs-storagecluster-cephcluster-k2xdm
pod/csi-nfsplugin-holder-ocs-storagecluster-cephcluster-lstkk
pod/csi-nfsplugin-holder-ocs-storagecluster-cephcluster-wwrm4
This test run is from release-4.13 branch.
Adding the brown squad label based on test ownership.
We haven't faced this recently. Closing
The teardown session of the test case given below failed due to leftovers from the test test_nfs_feature_enable_for_ODF_clusters.
tests/manage/pv_services/test_change_reclaim_policy_of_pv.py::TestChangeReclaimPolicyOfPv::test_change_reclaim_policy_of_pv[CephFileSystem-Retain]
The test test_nfs_feature_enable_for_ODF_clusters enabled the NFS feature and disabled it in its teardown session. After disabling NFS, the test did not verify whether the resources were actually removed. As a result, the csi-nfsplugin-provisioner pod was still present at the start of the next test, test_change_reclaim_policy_of_pv[CephFileSystem-Retain]. The pod was deleted some time later, so that test's leftover check observed a resource removal and failed.
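A fix along these lines would have the teardown poll until the NFS pods are actually gone before the next test starts. A minimal sketch of such a wait helper is below; the `get_pods` callable is an assumption standing in for however ocs-ci lists pods (e.g. via `oc get pods`), not an actual ocs-ci API.

```python
import time


def wait_for_pod_deletion(get_pods, name_fragment, timeout=300, interval=10):
    """Poll until no pod whose name contains `name_fragment` remains.

    get_pods: callable returning the current list of pod names
              (hypothetical hook; in ocs-ci this would wrap `oc get pods`).
    Raises TimeoutError if matching pods still exist after `timeout` seconds.
    """
    leftovers = []
    deadline = time.time() + timeout
    while time.time() < deadline:
        # Filter the live pod list down to the pods we expect to disappear.
        leftovers = [p for p in get_pods() if name_fragment in p]
        if not leftovers:
            return True
        time.sleep(interval)
    raise TimeoutError(f"Pods still present after {timeout}s: {leftovers}")
```

Calling this at the end of the NFS teardown for both the `csi-nfsplugin-provisioner` and `csi-nfsplugin-holder` name fragments would make the teardown block until cleanup completes, instead of letting the deletion race with the next test's leftover check.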