AnnaDutova opened this issue 2 years ago
**Describe the bug**
After a node is removed via PMM, the ac and drive CSI custom resources for that node are left behind.
**To Reproduce**
Deploy CSI on a kind cluster with pre-installed images, apply the testing pod, and check the ac and drive CSI resources. The step-by-step transcript follows the manifest sketch below.
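The `nginx.yaml` manifest used in the transcript is not attached to this report. A minimal sketch of such a testing StatefulSet is shown here for context; only the name `web` is confirmed by the `statefulset.apps/web created` output, while the storage class name, image, replica count, and volume size are assumptions:

```bash
# Hypothetical reconstruction of tests/app/nginx.yaml (not the actual file from the repo).
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: csi-baremetal-sc-hdd   # assumed CSI storage class name
        resources:
          requests:
            storage: 50Mi                        # assumed size; fits the ~105 MB loopback drives
EOF
```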
```
ubuntu:/Workspace/csi-baremetal/tests/app # kubectl apply -f nginx.yaml
statefulset.apps/web created

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get drive
NAME                                   SIZE        TYPE   HEALTH   SERIAL NUMBER        NODE                                   SLOT
028704fd-2144-4914-992c-004a1cf822f5   105906176   HDD    GOOD     LOOPBACK714682596    9f79f7a1-92aa-414d-9bb4-a196469aa8d7
0e57b9f8-3d8b-48b5-98d6-dfd75083a951   105906176   HDD    GOOD     LOOPBACK2850093900   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
1c9454ba-1265-4612-a390-89a53295ea53   105906176   HDD    GOOD     LOOPBACK1646729941   057b8330-d316-4153-b9c4-fe9519b820ed
34b34f9d-6460-4028-93c8-2c35cea8cbcd   105906176   HDD    GOOD     LOOPBACK3196335553   057b8330-d316-4153-b9c4-fe9519b820ed
3770bab1-faef-415e-8b57-50c6af026960   105906176   HDD    GOOD     LOOPBACK1350761576   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
3e5a35a9-fd6e-4b13-b74c-44c912d3b748   105906176   HDD    GOOD     LOOPBACK2698160428   9f79f7a1-92aa-414d-9bb4-a196469aa8d7
6c70896e-1e38-4784-bc9b-1e35c575781f   105906176   HDD    GOOD     LOOPBACK402576039    057b8330-d316-4153-b9c4-fe9519b820ed
a337ce73-d699-4263-ade9-72f303271cc9   105906176   HDD    GOOD     LOOPBACK3191065189   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
f1ef4d0c-44db-442f-b422-a76b44384baa   105906176   HDD    GOOD     LOOPBACK744326882    9f79f7a1-92aa-414d-9bb4-a196469aa8d7

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get ac
NAME                                   SIZE   STORAGE CLASS   LOCATION                               NODE
1e4daabf-7029-49ba-903d-275656fb8ade          HDD             a337ce73-d699-4263-ade9-72f303271cc9   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
3a464acf-43de-4b62-b4db-f35b24a9a589          HDD             0e57b9f8-3d8b-48b5-98d6-dfd75083a951   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
3a633830-58fe-4373-9688-a705136128ed          HDD             6c70896e-1e38-4784-bc9b-1e35c575781f   057b8330-d316-4153-b9c4-fe9519b820ed
988ed01e-615e-4188-8d80-7b47ebede055          HDD             1c9454ba-1265-4612-a390-89a53295ea53   057b8330-d316-4153-b9c4-fe9519b820ed
9d6cd543-83f5-4b43-8ba1-0bb8bd12b9ab          HDD             3770bab1-faef-415e-8b57-50c6af026960   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
c6b594a1-5c1f-4785-ae19-f49dde317963          HDD             f1ef4d0c-44db-442f-b422-a76b44384baa   9f79f7a1-92aa-414d-9bb4-a196469aa8d7
d16be3f2-903c-48f3-af83-7093200760c9          HDD             3e5a35a9-fd6e-4b13-b74c-44c912d3b748   9f79f7a1-92aa-414d-9bb4-a196469aa8d7
d95af279-2ea6-44aa-9fa8-9aa409128049          HDD             34b34f9d-6460-4028-93c8-2c35cea8cbcd   057b8330-d316-4153-b9c4-fe9519b820ed
db987fff-4216-43ea-be9e-cd124c8c4404          HDD             028704fd-2144-4914-992c-004a1cf822f5   9f79f7a1-92aa-414d-9bb4-a196469aa8d7

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get csibmnodes
NAME                                             UUID                                   HOSTNAME             NODE_IP
csibmnode-057b8330-d316-4153-b9c4-fe9519b820ed   057b8330-d316-4153-b9c4-fe9519b820ed   kind-worker          172.18.0.5
csibmnode-08d3a33a-6c67-4cef-940c-4190c78b5b3b   08d3a33a-6c67-4cef-940c-4190c78b5b3b   kind-control-plane   172.18.0.3
csibmnode-6d54e593-e8d2-4433-b9a5-94d69a6c3f5e   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e   kind-worker3         172.18.0.2
csibmnode-9f79f7a1-92aa-414d-9bb4-a196469aa8d7   9f79f7a1-92aa-414d-9bb4-a196469aa8d7   kind-worker2         172.18.0.4

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl taint node kind-worker3 node.dell.com/drain=drain:NoSchedule
node/kind-worker3 tainted

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl describe no kind-worker3 | grep Taint
Taints:             node.dell.com/drain=drain:NoSchedule

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl delete node kind-worker3
node "kind-worker3" deleted

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   18m   v1.19.11
kind-worker          Ready    <none>   18m   v1.19.11
kind-worker2         Ready    <none>   18m   v1.19.11

ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # watch kubectl get csibmnodes
```
Check result:
```
ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get csibmnodes | grep 6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get ac | grep 6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
3ca72f08-0b33-4419-ad31-b2acbd603404          HDD             db974893-e971-4760-a96d-3ebfae5040ee   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
b847feeb-63b0-4e7b-b935-dc089ec1e5f2          HDD             ddce47e3-4c40-4083-b4fd-b001ef04bb9e   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
f0a26ceb-3bb4-4e43-bda8-7452b7087fa3          HDD             0c586ea4-0169-433f-98be-62470a49d2f9   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
ubuntu:/mnt/hgfs/Workspace/csi-baremetal/tests/app # kubectl get drive | grep 6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
0c586ea4-0169-433f-98be-62470a49d2f9   105906176   HDD    GOOD     LOOPBACK3191065189   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
db974893-e971-4760-a96d-3ebfae5040ee   105906176   HDD    GOOD     LOOPBACK2850093900   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
ddce47e3-4c40-4083-b4fd-b001ef04bb9e   105906176   HDD    GOOD     LOOPBACK1350761576   6d54e593-e8d2-4433-b9a5-94d69a6c3f5e
```

The csibmnode for kind-worker3 is gone, but ac and drive resources referencing its UUID (6d54e593-e8d2-4433-b9a5-94d69a6c3f5e) are still present.
**Expected behavior**
After the node removal procedure completes, all dependent resources (csibmnode, ac, drive) for the removed node should be removed as well.
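Until the operator handles this cleanup, a manual workaround sketch is shown below. It is not part of the documented node removal procedure; the UUID is the one from this run, and the resource names come from the grep output above:

```bash
# Hypothetical manual cleanup of leftover resources after the csibmnode is gone.
# REMOVED_NODE is the UUID of the deleted node (kind-worker3) from the transcript above.
REMOVED_NODE=6d54e593-e8d2-4433-b9a5-94d69a6c3f5e

# List the leftover ac and drive resources that still reference the removed node.
kubectl get ac --no-headers | grep "${REMOVED_NODE}"
kubectl get drive --no-headers | grep "${REMOVED_NODE}"

# Delete them by name (first column of the output above).
kubectl get ac --no-headers | awk -v n="${REMOVED_NODE}" '$0 ~ n {print $1}' | xargs -r kubectl delete ac
kubectl get drive --no-headers | awk -v n="${REMOVED_NODE}" '$0 ~ n {print $1}' | xargs -r kubectl delete drive
```

This only hides the symptom; the expectation in this issue is that the operator removes these resources itself as part of node removal.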
Unreproducible issue; will be monitored.