savpek opened this issue 6 years ago (status: Open)
+1. I am also experiencing this issue.
However, I am pretty new to Docker and Kubernetes. @savpek, could you please explain your workaround in a bit more detail so I can get up and testing in the meantime?
If I understand correctly, you are explicitly creating a PV that is linked to the PVC instead of letting Kubernetes dynamically create one. Could you please share an example of how to link these two?
That would be much appreciated, thanks!
This is ugly, but I just create a bunch of PVs with enough space before I apply the rest 😆
# This is workaround for https://github.com/docker/for-win/issues/1669#event-1492987545
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume1
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/monitoring-volume1/"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume2
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/monitoring-volume2/"
---
... ... ...
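For completeness, a PVC can be pinned to one of these pre-created PVs with `volumeName`; a minimal sketch (the claim name `monitoring-claim1` is made up for illustration, and `storageClassName` and requested capacity must be compatible with the PV for binding to succeed):

```yaml
# Hypothetical claim that binds explicitly to monitoring-volume1 above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim1
spec:
  storageClassName: hostpath
  volumeName: monitoring-volume1   # skip the dynamic provisioner entirely
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Without `volumeName`, the claim simply binds to any available PV whose class, access mode, and capacity match, which is why pre-creating enough PVs also works.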
As a better workaround I use:
[CmdletBinding()]
Param()
kubectl delete storageclass hostpath
# Setting up hostpath provisioner.
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/rbac.yaml
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/deployment.yaml
kubectl create -f https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/storageclass.yaml
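If claims still sit Pending after this, one thing worth checking is that the replacement StorageClass is marked as the cluster default, since many charts omit `storageClassName`. A sketch of the annotation involved (the class name and `provisioner` value here are illustrative; verify them against the provisioner's storageclass.yaml):

```yaml
# The is-default-class annotation is what makes a StorageClass the default;
# it can also be set with `kubectl annotate storageclass <name> ...`.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath   # illustrative; use the name the manifest defines
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hostpath   # illustrative; use the provisioner's actual name
```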
Fixed in 18.05.0-ce-rc1-win63 (edge) and 18.03.1-ce-win64 (stable), thank you for your help!
Oops, sorry, wrong issue…
Just to be clear, this is fixed in 18.05.0-ce-rc1-win63 (edge) but not in stable yet, leaving the issue open.
Running Docker Windows Edge 18.05.0-ce-win66 (17760) with Helm 2.8.2 on Windows 10 Pro Version 1803.
$ helm install stable/postgresql
Deployment fails with the following error:
FATAL: data directory '/var/lib/postgresql/data/pgdata' has wrong ownership
HINT: The server must be started by the user that owns the data directory.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale
/lifecycle frozen
But it's still a problem.
/remove-lifecycle stale
Any progress on this? It still does not work with the latest edge release.
I'm on a fresh install of Docker Desktop 2.0.0.3 with Engine 18.09.2 and this is definitely not fixed. No PVCs are fulfilled.
However, the workaround by @savpek above works.
Actually, I just noticed that sharing the C drive from the Docker Desktop settings also works, and the workaround isn't needed in this case.
As an example, running kubectl get pv --output=yaml yields:
...
hostPath:
path: /host_mnt/c/Users/<my username>/.docker/Volumes/owdev-zookeeper-pvc-datalog/pvc-27a9525e-38f4-11e9-980f-00155d29ec10
...
This is not specified in any documentation. Perhaps a documentation update is in order.
I'm not sure when, but I think this has been fixed now. It looks like Docker Desktop's Kubernetes support installs by default a default StorageClass hostpath with provisioner docker.io/hostpath, plus a pod in the kube-system namespace named storage-provisioner, which runs docker/desktop-storage-provisioner:v1.0 and manages such mounts.
Compared to last time I tried it (around June 2019, with stable/mongodb) this now works:
helm --kube-context docker-desktop install mongo-test bitnami/mongodb
Previously it would fail to provision a volume, and attempting to map through a manual host-path provisioner into the Windows mount would fail due to bitnami/charts#827.
I am not sure how persistent that volume is, though... I guess it would survive restarting Docker Desktop (unless something clears the PVC database), but not a "factory reset". So it probably doesn't resolve the initial use case of bitnami/charts#827, for example.
Solved for me using @TBBle's note. (Docker Desktop v19.03.8)
Using hostpath as the storage class for a StatefulSet produces the following:
$ kubectl get sc
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 129d
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-06f535ce-b676-48c2-9c19-cc9e4cddef48 1Gi RWO Delete Bound framing/www-web-0 hostpath 34s
pvc-4c0018da-1ff6-437d-9fd9-6e4ced112899 1Gi RWO Delete Bound framing/www-web-2 hostpath 28s
pvc-c5e5a225-0adf-4acc-9143-96c392c6cb51 1Gi RWO Delete Bound framing/www-web-1 hostpath 31s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-06f535ce-b676-48c2-9c19-cc9e4cddef48 1Gi RWO hostpath 7m4s
www-web-1 Bound pvc-c5e5a225-0adf-4acc-9143-96c392c6cb51 1Gi RWO hostpath 7m1s
www-web-2 Bound pvc-4c0018da-1ff6-437d-9fd9-6e4ced112899 1Gi RWO hostpath 6m58s
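For reference, output like the above comes from a StatefulSet with a volumeClaimTemplate; a minimal sketch that would yield claims named www-web-0 through www-web-2 (the nginx image and labels are just illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  # Each replica gets its own PVC named <template>-<statefulset>-<ordinal>,
  # e.g. www-web-0, dynamically provisioned by the hostpath class.
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: hostpath
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```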
The deployment.yaml in @savpek's answer does not work with my version of Docker, since Deployment was moved from extensions/v1beta1 to apps/v1. It gives the following error:
error: unable to recognize "https://raw.githubusercontent.com/MaZderMind/hostpath-provisioner/master/manifests/deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
I downloaded the file, changed the version, and tried again; it did not fail this time.
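The edit amounts to changing the apiVersion and adding the `spec.selector` that apps/v1 makes mandatory; roughly (a sketch, with the label value illustrative; it must match whatever pod template labels the upstream manifest actually uses):

```yaml
# Before (rejected by newer clusters):
#   apiVersion: extensions/v1beta1
#   kind: Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostpath-provisioner
spec:
  # apps/v1 requires selector, and it must match the template labels.
  selector:
    matchLabels:
      app: hostpath-provisioner
  template:
    metadata:
      labels:
        app: hostpath-provisioner
    # ... rest of the pod spec unchanged ...
```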
Expected behavior
When Kubernetes dynamically allocates hostPath volumes, they should work.
Actual behavior
They don't work; consumer pods hang on an invalid mode error.
Related issue (now closed): https://github.com/docker/for-win/issues/1669. The underlying issue is the same, and I tested the workarounds there. I can manually create PersistentVolumes with hostPath /tmp/foobar and they work.
I don't have the host machine's (Win10) disks shared to the VM; I think at least in this case the volumes should be native Unix paths, but it still tries to point the path at Windows.
Created pv
Information
Somewhere I read that this should be fixed (?) in version 18.03; however, I just updated to that version and the issue still occurs.
Docker version:
Steps to reproduce the behavior
Follow the related issue https://github.com/docker/for-win/issues/1669
As a workaround I currently create PersistentVolumes from YAML files; however, this differs between local and remote environments, which is bad since I'd like to keep configurations as identical as possible.