kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Persistent Volume Claims with a subPath lead to a "no such file or directory" error #2256

Closed: johnhamelink closed this issue 6 years ago

johnhamelink commented 6 years ago

BUG REPORT

Please provide the following details:

Environment:

Minikube version (use minikube version): minikube version: v0.24.1

cat ~/.minikube/machines/minikube/config.json | grep -i ISO
"Boot2DockerURL": "file:///home/john/.minikube/cache/iso/minikube-v0.23.6.iso"

minikube ssh cat /etc/VERSION
v0.23.6

helm version 
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
VBoxManage --version
5.2.2r119230

What happened:

When attempting to install a pod resource which has a volumeMount with a subPath (like below):

"volumeMounts": [
  {
    "name": "data",
    "mountPath": "/var/lib/postgresql/data/pgdata",
    "subPath": "postgresql-db"
  },
  {
    "name": "default-token-ctrw6",
    "readOnly": true,
    "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
  }
]

The pod fails to bind to the volume, with the following error:

PersistentVolumeClaim is not bound: "cranky-zebra-postgresql"
Error: lstat /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058: no such file or directory
Error syncing pod

When subPath is not defined, this error does not happen.
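For context, the "data" mount above is backed by the chart's claim; the corresponding volumes entry would look roughly like this (reconstructed for illustration, with the claim name taken from the error above):

"volumes": [
  {
    "name": "data",
    "persistentVolumeClaim": {
      "claimName": "cranky-zebra-postgresql"
    }
  }
]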

What you expected to happen:

Creating a PersistentVolumeClaim with a subPath creates a directory which k8s can bind to.

How to reproduce it (as minimally and precisely as possible):

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/postgresql

Output of minikube logs (if applicable):

https://gist.github.com/johnhamelink/f8c3074d35ccb55f1203a4fa021b0cbb

Anything else we need to know:

I was able to confirm that this issue didn't affect a MacBook Pro with the following versions:

MacBook-Pro:api icmobilelab$ helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

MacBook-Pro:api icmobilelab$ minikube version
minikube version: v0.23.0

VirtualBox 5.1.22

I was able to get past this issue by manually creating the missing directory from inside minikube by running minikube ssh.

johnhamelink commented 6 years ago

I think I've figured out what the issue is: by default the /tmp/hostpath-provisioner directory has the wrong ownership. Changing the ownership to docker:docker seems to fix things for me!
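A minimal sketch of that workaround, combining this comment with the mkdir step mentioned a few comments down; the pvc-... directory name is the one from the lstat error above and will differ per install:

host$ minikube ssh
minikube$ sudo mkdir -p /tmp/hostpath-provisioner/pvc-f0db5074-d6b1-11e7-9596-080027aac058
minikube$ sudo chown docker:docker /tmp/hostpath-provisioner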

brandon-bethke-timu commented 6 years ago

We can confirm there is an issue with persistent volume claims in minikube 0.24.1. We encountered the described error after upgrading and attempting to deploy the concourse helm chart. This issue did not occur in minikube 0.23.

brosander commented 6 years ago

Hitting this with the kafka chart as well.

southwolf commented 6 years ago

"lstat" does not exist on Ubuntu 16.04.3 LTS. I used sudo ln -s /usr/bin/stat /usr/bin/lstat, but it didn't help.

@johnhamelink I used --vm-driver=none, but chmod -R 777 /tmp/hostpath-provisioner didn't help either.

johnhamelink commented 6 years ago

@southwolf I'm using minikube inside VirtualBox (I'm running Arch, and --vm-driver=none was a headache I wasn't willing to work my way through just yet, lol).

To clarify, when I see the lstat error, I'm running mkdir -p <directory> then chown docker:docker /tmp/hostpath-provisioner.

grimmy commented 6 years ago

I'm hitting this as well. The interesting part is that I didn't hit it yesterday, but today it's hitting me. I'm using helm to install the stable/postgresql chart and that worked yesterday, but today I'm getting this error. I was able to verify earlier today that the volume existed in /tmp/hostpath-provisioner but the sub paths were not being created.

I tore down my VM with minikube delete and now nothing is being created in /tmp/hostpath-provisioner. I then ran chmod 777 /tmp/hostpath*, reinstalled the chart, and no go.

As a last-ditch effort, I nuked my ~/.minikube and am still seeing the issue.

southwolf commented 6 years ago

@grimmy Exactly the same here.

killerham commented 6 years ago

@grimmy Ran into this with postgres as well on 0.24.1

southwolf commented 6 years ago

Any update on this bug?

tarifrr commented 6 years ago

Is there going to be a release anytime soon with this patch?

javajon commented 6 years ago

@grimmy Exactly the same here with Minikube 0.24.1

Error: lstat /tmp/hostpath-provisioner/pvc-6c84aa91-f04f-11e7-bf07-08002736d1ee: no such file or directory

I get this after "helm install stable/sonarqube", which also installs stable/postgresql.

southwolf commented 6 years ago

I tried editing the YAML in minikube using this PR and it seems to work.

Just minikube ssh, replace /etc/kubernetes/addons/storage-provisioner.yaml with this file, and restart minikube. You're good to go!

dyson commented 6 years ago

@southwolf when I follow your instructions the change doesn't persist over the restart. Is there anything else you did or something I am obviously missing?

tarifrr commented 6 years ago

@dyson Same here. Tried kubectl edit; that doesn't work either.

tarifrr commented 6 years ago

Found the solution, @southwolf @dyson. Delete the storage-provisioner.yaml file from the minikube VM and delete the pod associated with it: kubectl delete pods/storage-provisioner -n kube-system. Then insert the fixed file into /etc/kubernetes/addons/. The storage-provisioner pod should restart by itself.

subvind commented 6 years ago

@tarifrr I tried that and I'm still getting the error...

PersistentVolumeClaim is not bound: "fleetgrid-postgresql"
Error: lstat /tmp/hostpath-provisioner/pvc-afe84cbc-f308-11e7-b6ad-0800270b980e: no such file or directory
Error syncing pod
tarifrr commented 6 years ago

@trabur Could you tell me the steps you took?

torstenek commented 6 years ago

Hitting a similar issue after trying the suggestion. No volumes created. My steps:

Remove the config file and kill the provisioner pod

minikube$ sudo rm /etc/kubernetes/addons/storage-provisioner.yaml 
host$ kubectl delete pods/storage-provisioner -n kube-system

Ensure the pod has terminated

host$ kubectl get pods/storage-provisioner -n kube-system

Replace the provisioner config and install the chart

minikube$ sudo curl  https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/storage-provisioner/storage-provisioner.yaml --output /etc/kubernetes/addons/storage-provisioner.yaml
host$ helm install stable/postgresql

Error reported

PersistentVolumeClaim is not bound: "sweet-goat-postgresql"
Unable to mount volumes for pod "sweet-goat-postgresql-764d89f465-f7fr2_default(4f0efe66-f460-11e7-b5f9-080027e117f4)": timeout expired waiting for volumes to attach/mount for pod "default"/"sweet-goat-postgresql-764d89f465-f7fr2". list of unattached/unmounted volumes=[data]
Error syncing pod

Check volumes and claims

host$ kubectl get pvc
NAME                       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sweet-goat-postgresql      Pending                                                      16m
host$ kubectl get pv
No resources found.
tarifrr commented 6 years ago

@torstenek Check whether the storage-provisioner pod is created first; if so, then install postgresql.

RobertDiebels commented 6 years ago

Hello everyone!

I ran into the same issue. However, I found that the problem was caused by configuration defaults. See this description.

Cause: I had created a PersistentVolumeClaim without a storageClassName. Kubernetes then added the DefaultStorageClass, named standard, to the claim.

The PersistentVolumes I had created did not have a storageClassName either. However, those are not assigned a default; their storageClassName is equal to "", i.e. none.

As a result, the claim could not find a matching PersistentVolume. Kubernetes then created a new PersistentVolume with a hostPath similar to /tmp/hostpath-provisioner/pvc-name. This directory did not exist, hence the lstat error.

Solution: Adding a storageClassName to both the PersistentVolume and PersistentVolumeClaim spec solved the issue for me; a minimal sketch follows below.
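A minimal sketch of that fix, assuming minikube's default standard class; the names, size, and hostPath are illustrative (and /data being a persisted minikube path is an assumption based on its docs):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                 # illustrative name
spec:
  storageClassName: standard       # must match the claim below
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/example-pv         # assumption: /data is one of the host paths minikube persists
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                # illustrative name
spec:
  storageClassName: standard       # must match the volume above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi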

Hope this helps someone.

-Robert.

EDIT: The kubernetes.io page on persistent volumes also helped me to find the paths minikube allows for persistent volumes.

atali commented 6 years ago

I followed this Stack Overflow answer and it works:

https://stackoverflow.com/questions/47849975/postgresql-via-helm-not-installing/48291156#48291156

krancour commented 6 years ago

This issue was fixed a while ago. When might we reasonably see a patch release of minikube? This is affecting a lot of people.

r2d4 commented 6 years ago

This has been released now in v0.25.0

krancour commented 6 years ago

@r2d4 awesome! Thank you!

joshkendrick commented 6 years ago

@RobertDiebels Thanks for your answer. I had to define a StorageClass and use it in both my PVs and PVCs as well, using minikube and MinIO. For anyone else who gets here: I also didn't have selectors in the PVC matching the labels in the PV correctly; see the fragment below.
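A hedged fragment of that label/selector pairing; the app: minio label is illustrative, not from the original comment:

# In the PersistentVolume:
metadata:
  labels:
    app: minio                     # illustrative label
---
# In the PersistentVolumeClaim:
spec:
  selector:
    matchLabels:
      app: minio                   # must match the PV's labels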

louygan commented 6 years ago

storage-provisioner works in a freshly installed minikube.

But after a few days, every newly created PVC stays pending, and the storage-provisioner pod turns out to be missing.

Question: why is storage-provisioner started as a bare pod only, with no Deployment or ReplicaSet to maintain a replica of it? See the sketch below of what that would look like.
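A generic sketch of what a Deployment-managed provisioner could look like; this is not minikube's actual addon manifest, and the labels, service account name, and image tag are assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  replicas: 1                      # the ReplicaSet recreates the pod if it disappears
  selector:
    matchLabels:
      app: storage-provisioner     # illustrative label
  template:
    metadata:
      labels:
        app: storage-provisioner
    spec:
      serviceAccountName: storage-provisioner                   # assumption: addon service account name
      containers:
        - name: storage-provisioner
          image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 # assumption: image and tag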

RobertDiebels commented 6 years ago

@joshkendrick Happy to help 👍. The hostpath-provisioner fix should sort out the issue, though. I manually created hostPath PVs before; adding the proper StorageClass for provisioning should allow PVCs to work without creating your own PVs now.

docktermj commented 4 years ago

Does seem similar to https://github.com/kubernetes/minikube/issues/4634