Open victor-sudakov opened 2 years ago
So the volume for the first node is removed, but the volume for the second node is not? Sounds like a bug, if so.
Probably not only the storage provisioner in that case, but everything else stored under /var (the volume mountpoint).
So the volume for the first node is removed, but the volume for the second node is not? Sounds like a bug, if so.
Exactly.
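A quick way to see that a node's data really lives in a named docker volume mounted at /var is to inspect the node container's mounts. This is a sketch, assuming the docker driver and the default node container name minikube-m02 as used later in this issue:

```shell
# Show the mounts of the second node's container; the named volume
# "minikube-m02" should appear with Destination /var.
docker inspect -f '{{ json .Mounts }}' minikube-m02
```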
How to reproduce:
Install the attached manifest (a sketch of what such a manifest could look like follows these steps). It will create two pods with PVs. Exec into each pod and touch a file in its persistent volume, then find the files on the host machine:
# find minikube*/ -type f | grep ubu
minikube/test/bigdisk-ubuntu-1/ubuntu-1.txt
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#
Remove the manifest and the PVCs. The file on one of the nodes will remain, which I think is incorrect and inconsistent behaviour:
# find minikube*/ -type f | grep ubu
minikube-m02/test/bigdisk-ubuntu-0/ubuntu-0.txt
#
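The original attachment is not reproduced here. Judging from the paths above (namespace test, PVC names bigdisk-ubuntu-0 and bigdisk-ubuntu-1), a minimal manifest along these lines should exercise the same code path. This is a hypothetical reconstruction, not the author's exact attachment; image, mount path, and sizes are placeholders:

```yaml
# Hypothetical reconstruction: a two-replica StatefulSet in namespace "test"
# whose volumeClaimTemplate "bigdisk" yields PVCs bigdisk-ubuntu-0 and
# bigdisk-ubuntu-1, with one pod scheduled on each minikube node.
apiVersion: v1
kind: Namespace
metadata:
  name: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu
  namespace: test
spec:
  serviceName: ubuntu
  replicas: 2
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      # Spread the two pods over both nodes so each node gets one PV.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ubuntu
              topologyKey: kubernetes.io/hostname
      containers:
        - name: ubuntu
          image: ubuntu:22.04
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: bigdisk
              mountPath: /bigdisk
  volumeClaimTemplates:
    - metadata:
        name: bigdisk
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```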
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I think I am seeing this as well. When I perform a minikube delete, future clusters will have state from my previous cluster when I deploy the same application manifest.
minikube version
minikube version: v1.26.0
commit: f4b412861bb746be73053c9f6d2895f12cf78565
Fedora 37
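One way to check whether the per-node docker volumes actually went away after minikube delete is to list them before and after. This is a sketch, assuming the docker driver as used in this issue; the volume names minikube and minikube-m02 come from the reproduction above:

```shell
# List docker volumes whose names start with "minikube"; after a successful
# minikube delete there should be no matches left.
docker volume ls --filter name=minikube
```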
I can confirm this, too.
Me too
same problem
I am seeing this with a single-node minikube cluster as well. If a reproduction of that case would be helpful, please let me know.
What Happened?
When deleting PVCs and PVs on a multi-node minikube cluster, these resources are reported as non-existent by kubectl get pvc and kubectl get pv, but the actual files remain on disk under /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/...
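In other words, after the delete the API server no longer knows about the claims while the backing directory on the second node still holds data. A sketch of that check, using the namespace and host-side path from this report:

```shell
# The PVC/PV objects are gone from the API server...
kubectl get pvc -n test
kubectl get pv
# ...but the hostpath-provisioner directory backing the second node still has files.
sudo find /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/ -type f
```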
Thus the old data can be unexpectedly resurrected when you redeploy a StatefulSet, for example.
How to reproduce
(See the steps above.)
What I expected
Those on-disk files should be wiped out when the corresponding PVs disappear from kubectl get pv output.
Workaround
Delete the files manually or use a single-node minikube cluster.
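For the manual route, the cleanup might look roughly like this. It is a sketch using the host-side path reported above; the node name (minikube-m02) and namespace (test) are specific to this reproduction and would need adjusting:

```shell
# Remove the orphaned hostpath-provisioner data backing the second node
# (run on the host; root is needed because docker owns the volume directory).
sudo rm -rf /var/lib/docker/volumes/minikube-m02/_data/hostpath-provisioner/test
```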
Attach the log file
This is minikube version: v1.24.0 on Manjaro Linux. The cluster was created as
Operating System: Other
Driver: Docker