loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

Workloads Recreated automatically on deleting and creating the vCluster #29

Closed harish0619 closed 3 years ago

harish0619 commented 3 years ago

After creating a vCluster as per the getting started documentation, I deployed an nginx workload and everything worked as expected. I then deleted the vCluster, and that also worked: the host cluster namespace had no resources left. However, when I created a new vCluster with the same name as the one I had deleted, the workload I had created before came back up and its pods were running again.

This seems like a bug to me.

Another observation: if I delete the host cluster namespace where I created the vCluster, then recreate the namespace with the same name and create a vCluster with the same name, the problem does not occur. It seems to be a cache issue, or a syncer/scheduler issue.

#Snap1 - Here, I have a couple of deployments within the vcluster, and their pods are running in the host cluster namespace.

root@PoCVM-F5:~# kubectl get all -n host-namespace-1
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/coredns-854c77959c-dcmww-x-kube-system-x-vcluster-1               1/1     Running   0          39h
pod/nginx-deployment-84cd76b964-9k7ts-x-demo-nginx-x-vcluster-1      1/1     Running   0          39h
pod/rook-nfs-operator-5fd99c7c8-dcnnj-x-rook-nfs-system--b1cadd30c5   1/1     Running   0          39h
pod/vcluster-1-0                                                      2/2     Running   0          39h

NAME                                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns-x-kube-system-x-vcluster-1   ClusterIP   10.108.62.254   <none>        53/UDP,53/TCP,9153/TCP   39h
service/vcluster-1                            ClusterIP   10.99.106.19    <none>        443/TCP                  39h
service/vcluster-1-headless                   ClusterIP   None            <none>        443/TCP                  39h

NAME                          READY   AGE
statefulset.apps/vcluster-1   1/1     39h

#Snap2 - Here, I delete vcluster-1, which was already running within the host-namespace-1 namespace in the host cluster.

root@PoCVM-F5:~# vcluster delete vcluster-1 -n host-namespace-1
[info]   Delete helm chart with helm delete vcluster-1 --namespace host-namespace-1 --kubeconfig /tmp/910487953 --repository-config=''
[done] √ Successfully deleted virtual cluster vcluster-1 in namespace host-namespace-1
root@PoCVM-F5:~# kubectl get all -n host-namespace-1
No resources found in host-namespace-1 namespace.

#Snap3 - Here, I have created (supposedly) a new vCluster, but with the same name, and I see the workloads from the deleted vCluster running in host-namespace-1.

root@PoCVM-F5:~# vcluster create vcluster-1 -n host-namespace-1
[info]   execute command: helm upgrade vcluster-1 vcluster --repo https://charts.loft.sh --kubeconfig /tmp/000012067 --namespace host-namespace-1 --install --repository-config='' --values /tmp/293892134
[done] √ Successfully created virtual cluster vcluster-1 in namespace host-namespace-1.
Use 'vcluster connect vcluster-1 --namespace host-namespace-1' to access the virtual cluster
root@PoCVM-F5:~# kubectl get all -n host-namespace-1
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/coredns-854c77959c-rnmnx-x-kube-system-x-vcluster-1               1/1     Running   0          33s
pod/nginx-deployment-84cd76b964-9k7ts-x-demo-nginx-x-vcluster-1      1/1     Running   0          36s
pod/rook-nfs-operator-5fd99c7c8-dcnnj-x-rook-nfs-system--b1cadd30c5   1/1     Running   0          36s
pod/vcluster-1-0                                                      2/2     Running   0          41s

NAME                                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns-x-kube-system-x-vcluster-1   ClusterIP   10.103.121.6   <none>        53/UDP,53/TCP,9153/TCP   36s
service/vcluster-1                            ClusterIP   10.99.6.18     <none>        443/TCP                  41s
service/vcluster-1-headless                   ClusterIP   None           <none>        443/TCP                  41s

NAME                          READY   AGE
statefulset.apps/vcluster-1   1/1     41s

FabianKramm commented 3 years ago

@harish0619 thanks for creating the issue! This is probably happening because the PVC was not deleted when you deleted the vcluster. This is how StatefulSets currently work in Kubernetes; see https://github.com/kubernetes/kubernetes/issues/55045 for more information.
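To see the state described here, you can check for the retained PVC after deleting the vcluster. A minimal sketch, assuming the claim follows the usual `<volumeClaimTemplate>-<statefulset>-<ordinal>` naming (so `data-vcluster-1-0` is an assumption, not confirmed by this thread); the guard keeps the script harmless on a machine without a cluster:

```shell
# After `vcluster delete`, the StatefulSet's PVC is left behind, so a
# same-named vcluster reattaches to the old data directory on recreation.
NS=host-namespace-1
NAME=vcluster-1
PVC="data-${NAME}-0"   # assumed PVC name: <claimTemplate>-<statefulset>-<ordinal>
echo "PVC that survives deletion: ${PVC}"
# Only query the cluster if kubectl is actually available:
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pvc -n "${NS}" "${PVC}" || true
fi
```

If that PVC shows up after a delete, the "recreated" workloads in #Snap3 are simply the old virtual cluster state being remounted.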

What we could consider is deleting this PVC automatically during vcluster delete if you specify a flag like --delete-pvc, which would solve your problem.
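Until such a flag exists, the manual workaround would be to remove the leftover PVC yourself before recreating a vcluster with the same name. A sketch under the same naming assumption as above (`data-vcluster-1-0` is hypothetical; the command only runs if kubectl is present):

```shell
NS=host-namespace-1
PVC=data-vcluster-1-0   # assumed name of the vcluster's retained claim
echo "Would delete: pvc/${PVC} in namespace ${NS}"
if command -v kubectl >/dev/null 2>&1; then
  # Removes the persisted vcluster state so a recreated vcluster starts fresh.
  kubectl delete pvc "${PVC}" -n "${NS}" --ignore-not-found
fi
```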

harish0619 commented 3 years ago

@FabianKramm thanks for your swift response! So would this --delete-pvc flag be an enhancement to the existing command?

FabianKramm commented 3 years ago

@harish0619 this should already work with v0.3.0-alpha.2, where vcluster now deletes the PVC by default. If you do not wish to delete the PVC, you can pass vcluster delete ... --keep-pvc
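Based on the comment above, the two behaviors in v0.3.0-alpha.2 and later would look like this. A sketch only: the commands are echoed rather than executed, since running them requires a cluster and the vcluster CLI:

```shell
NAME=vcluster-1
NS=host-namespace-1
# Default: the PVC is removed along with the vcluster, so old state
# does not reappear when a same-named vcluster is created later.
echo "vcluster delete ${NAME} -n ${NS}"
# Opt out: keep the PVC so a recreated vcluster with the same name
# resumes its previous state (the original behavior in this issue).
echo "vcluster delete ${NAME} -n ${NS} --keep-pvc"
```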

harish0619 commented 3 years ago

@FabianKramm Thank you!

richburroughs commented 3 years ago

Looks like this was resolved.