hpe-storage / python-hpedockerplugin

HPE Native Docker Plugin
Apache License 2.0

3.3 MountConflictDelay: After deleting pod, multipath -ll entries are not cleaned. #743

Closed. sandesh-desai closed this issue 4 years ago

sandesh-desai commented 4 years ago

3.3 MountConflictDelay: After deleting pod, multipath -ll entries are not cleaned.

Testbed Details:
- Host: OpenShift single-master setup
- Host OS: Red Hat Enterprise Linux Server 7.6
- OC version: oc v3.11.117

Steps followed on OpenShift 3.11 single-master

1) Create a PV, PVC, and pod with the YAML.
2) pod/pod-provisionedvol is in Running state on worker-2.
3) Verify multipath -ll on worker-2 (entries are present).
4) Verify multipath -ll on worker-1 (no entries).
5) Create an additional pod using the same PVC; pod/pod-provisionedvol2 is in Running state on worker-1.
6) Verify multipath -ll on worker-1 (entries are present).
7) Delete the pod on worker-2: oc delete pod pod-provisionedvol
8) Verify multipath -ll on worker-2: entries are still present.
9) Delete pod/pod-provisionedvol2 on worker-1: oc delete pod pod-provisionedvol2
10) Verify multipath -ll on worker-1 (no entries).

(A shell sketch of this flow follows the list.)
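As a rough illustration of the flow above (a minimal sketch; the manifest file names pv.yml, pvc.yml, pod.yml, and pod2.yml are placeholders, not files attached to this issue):

```sh
# Create the PV, PVC, and first pod (manifest names are placeholders)
oc create -f pv.yml -f pvc.yml -f pod.yml
oc get pod pod-provisionedvol -o wide      # expect Running on worker-2

# Check multipath maps on each worker
ssh worker-2 multipath -ll                 # entries present for the attached volume
ssh worker-1 multipath -ll                 # no entries yet

# Create a second pod that uses the same PVC
oc create -f pod2.yml
oc get pod pod-provisionedvol2 -o wide     # expect Running on worker-1
ssh worker-1 multipath -ll                 # entries now present

# Delete the first pod and re-check worker-2
oc delete pod pod-provisionedvol
ssh worker-2 multipath -ll                 # reported bug: stale entries remain

# Delete the second pod and re-check worker-1
oc delete pod pod-provisionedvol2
ssh worker-1 multipath -ll                 # entries are cleaned up as expected
```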

Attaching the log files: MCD_W1.txt, MCD_W2.txt

sonawane-shashikant commented 4 years ago

This bug is verified as FIXED. Attaching output.

743_fixed.txt

sonawane-shashikant commented 4 years ago

@sandesh-desai please close this bug

sandesh-desai commented 4 years ago

Closing the bug as per the above comment.

sandesh-desai commented 4 years ago

With the Ansible installer on an etcd cluster, multipath -ll entries are not cleaned.

Attaching the logs:

dory.log, TC_mount.txt, 3pardcv.log

Testbed Details:
- Host: Kubernetes single-master setup
- Host OS: CentOS Linux 7
- kubectl version: GitVersion "v1.15.1"
- IPs: 15.212.196.113/114/115

sandesh-desai commented 4 years ago

Attaching the updated logs:

mount_conflict_Delay.txt

dcvlog_115.txt dory_114.txt dory_115.txt

imran-ansari commented 4 years ago

@sandesh-desai - please attach dcvlog_114.txt as well

sandesh-desai commented 4 years ago

Attaching dcvlog_114.

dcvlogs_114.txt

amitk1977 commented 4 years ago

This use case of pod P1 on node 1 and pod P2 on node 2 using the same volume V1 is not a commonly deployed scenario. This may be a usage error and does not qualify as a high-severity bug. We will have to discuss it further as a medium-severity bug while we check the validity of this scenario.

amitk1977 commented 4 years ago

Documented and closed: https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/recover_post_reboot.md
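For context, stale multipath -ll entries on a node are generally inspected and flushed with standard device-mapper multipath tooling; a generic sketch follows (not necessarily the exact procedure in the linked document, and the WWID shown is hypothetical):

```sh
# List current multipath maps and identify any stale ones
multipath -ll

# Flush a single stale map by name/WWID (hypothetical WWID)
multipath -f 360002ac0000000000000001f00019aaa

# Or flush all unused multipath maps on the node
multipath -F
```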

amitk1977 commented 4 years ago

Closing as a documentation fix.