@rootfs yes, I have used those rules, but to no effect.
I think I managed to get a workaround for this. Will post here if the testing goes well.
helm delete --purge ceph1 --debug
kubectl delete jobs -n ceph ceph-namespace-client-key-cleaner-hw95v

Run helm delete --purge ceph1 --debug. While it is executing, open a second terminal and list the undeleted resources with kubectl get all -n ceph; if any undeleted resource shows up, delete it manually, e.g. kubectl delete jobs -n ceph ceph-namespace-client-key-cleaner-hw95v. A scripted version of the same steps is sketched below.
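A rough one-terminal version of the same procedure, as a sketch only: the release name ceph1 and namespace ceph are taken from the commands above, and deleting all leftover jobs in the namespace (rather than one specific job) is an assumption of this sketch.

#!/usr/bin/env bash
# Sketch: run the purge and clean up stuck jobs while it executes.
# Assumes Helm 2 (helm delete --purge), release "ceph1", namespace "ceph".
set -euo pipefail

RELEASE=ceph1
NAMESPACE=ceph

# Start the purge in the background; it can hang on hook jobs that never finish.
helm delete --purge "$RELEASE" --debug &
PURGE_PID=$!

# While the purge is still running, show what is left in the namespace
# and delete any leftover jobs (e.g. the ceph-namespace-client-key-cleaner-* job above).
while kill -0 "$PURGE_PID" 2>/dev/null; do
    kubectl get all -n "$NAMESPACE" || true
    kubectl delete jobs --all -n "$NAMESPACE" --ignore-not-found || true
    sleep 5
done

wait "$PURGE_PID"
echo "helm delete --purge finished"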
These instructions worked for me
[root@admin-node tmp]# helm delete --purge ceph --debug
[debug] Created tunnel using local port: '44392'
[debug] SERVER: "127.0.0.1:44392"
release "ceph" deleted
[root@admin-node tmp]#
[root@admin-node tmp]# helm list
[root@admin-node tmp]# echo $?
0
[root@admin-node tmp]#
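After the purge succeeds, it is worth confirming the Kubernetes side is clean as well; a minimal check, with the ceph namespace name taken from the commands above:

helm list                  # the purged release should no longer be listed
kubectl get all -n ceph    # should report no resources once cleanup finished
kubectl get ns ceph        # returns NotFound once the namespace itself is gone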
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Which chart: Ceph
What happened: Unable to purge the ceph release from Helm after a few objects went missing from helm status
There are no resources left at the Kubernetes level. I think they got deleted when I first ran
helm delete ceph
I also tried removing Helm's Tiller and re-running helm init, with no change.
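For what it's worth, one hedged way to check whether Tiller still holds a record of the stuck release (a sketch, assuming Helm 2 with Tiller's default ConfigMap storage backend in kube-system; the ceph release name comes from the commands in this report, and the ceph.v1 name below is illustrative):

kubectl get configmaps -n kube-system -l OWNER=TILLER    # every release record Tiller stores
kubectl get configmaps -n kube-system -l NAME=ceph       # records for the ceph release only
kubectl delete configmap ceph.v1 -n kube-system          # remove a stuck revision record by hand

If such ConfigMaps survive a failed purge, removing them is what finally clears the release from helm list.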
What you expected to happen: helm delete --purge should completely remove the ceph release from Helm
How to reproduce it (as minimally and precisely as possible):
helm delete ceph
helm delete ceph --purge
Anything else we need to know: