kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0
29.26k stars 4.87k forks

Deleting persistent volume claims and persistent volumes on a multi-node minikube cluster does not delete files from disk #13320

Open victor-sudakov opened 2 years ago

victor-sudakov commented 2 years ago

What Happened?

When deleting PVCs and PVs on a multi-node minikube cluster, these resources are reported as non-existent by kubectl get pvc and kubectl get pv but the actual files remain on disk under /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/... Thus the old data can be unexpectedly resurrected when you redeploy a StatefulSet, for example.

How to reproduce

  1. Create a minikube cluster with at least two nodes
  2. Create a StatefulSet with persistent volumes
  3. Delete the StatefulSet
  4. Delete all the PVs and PVCs (with kubectl, Lens, or whatever)
  5. Search for files in /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/ - they will still be there
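Step 2 can be exercised with a StatefulSet of roughly this shape (all names, namespaces, and sizes here are hypothetical, not the manifest attached below):

```yaml
# Illustrative reproducer only: a two-replica StatefulSet whose
# volumeClaimTemplates make the hostpath provisioner create one PV
# per replica, spread across the two nodes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu
  namespace: test
spec:
  serviceName: ubuntu
  replicas: 2
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:22.04
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: bigdisk
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: bigdisk
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```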

What I expected

Those on-disk files should be wiped out when the corresponding PVs disappear from kubectl get pv output.
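That expectation matches how dynamically provisioned volumes normally behave: PVs created by a provisioner default to persistentVolumeReclaimPolicy: Delete, which is supposed to remove the backing data along with the PV object. For reference, a dynamically provisioned hostpath PV has roughly this shape (the name and path below are hypothetical):

```yaml
# Shape of a dynamically provisioned hostpath PV (illustrative values).
# With reclaim policy Delete, deleting the bound PVC should delete both
# this object and the directory it points at.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0123abcd
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  hostPath:
    path: /tmp/hostpath-provisioner/test/bigdisk-ubuntu-0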

Workaround

Delete the files manually or use a single-node minikube cluster.
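Until this is fixed, the manual cleanup can be scripted. A minimal sketch (the helper name is mine, and the on-host path is an assumption based on the docker-driver layout reported above):

```shell
# Hypothetical cleanup helper: remove leftover hostpath-provisioner data
# for one node after its PVs have already been deleted from the cluster.
cleanup_node() {
  dir=$1
  # Each deleted PV should have taken its directory with it;
  # remove whatever stragglers remain under the provisioner root.
  find "$dir" -mindepth 1 -maxdepth 1 -type d -exec rm -rf {} +
}

# Example invocation on the host (run as root; path assumed from the report):
#   cleanup_node /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner
```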

Attach the log file

This is minikube v1.24.0 on Manjaro Linux. The cluster was created with:

minikube start --disk-size=50g --nodes=2 --cni="calico" --insecure-registry="192.168.38.0/24"

Operating System

Other

Driver

Docker

afbjorklund commented 2 years ago

So the volume for the first node is removed, but the volume for the second node is not ? Sounds like a bug, if so.

Probably not only the storage provisioner in that case, but everything else stored under /var (volume mountpoint).

victor-sudakov commented 2 years ago

So the volume for the first node is removed, but the volume for the second node is not ? Sounds like a bug, if so.

Exactly.

How to reproduce:

Install the attached manifest. It will create two pods with PVs. Exec into each pod and touch a file in its persistent volume, then find the files on the host machine:

# find minikube*/ -type f | grep ubu
minikube/test/bigdisk-ubuntu-1/ubuntu-1.txt
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#

Remove the manifest and the PVCs. The file on one of the nodes will remain, which I think is incorrect and inconsistent behaviour:

# find minikube*/ -type f | grep ubu
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#

reproduce.yaml.txt

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue or PR as fresh with /remove-lifecycle stale
  - Close this issue or PR with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  - After 90d of inactivity, lifecycle/stale is applied
  - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  - After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  - Mark this issue or PR as fresh with /remove-lifecycle rotten
  - Close this issue or PR with /close
  - Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

jsirianni commented 1 year ago

I think I am seeing this as well. When I perform a minikube delete, future clusters will have state from my previous cluster when I deploy the same application manifest.
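If leftover node volumes are what carries the state across clusters, one way to check after minikube delete is to look for surviving docker volumes (docker driver assumed; the helper name below is mine, not a minikube command):

```shell
# Hypothetical check: filter `docker volume ls -q` output on stdin down to
# names that look like minikube node volumes (minikube, minikube-m02, ...).
leftover_minikube_volumes() {
  grep -E '^minikube(-m[0-9]+)?$'
}

# Usage on a real host:
#   docker volume ls -q | leftover_minikube_volumes
# Anything printed after `minikube delete` has completed is leftover state.
```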

minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565

Fedora 37

mgruner commented 1 year ago

I can confirm this, too.

ceelian commented 11 months ago

Me too

glebpom commented 5 months ago

same problem

njlaw commented 3 months ago

I am seeing this with a single-node minikube cluster as well. If another reproduction would be helpful, please let me know.