bahtou opened this issue 4 years ago
Virtualbox on macOS should automatically mount /Users; I verified that it still works as intended. Any changes to /Users in the VM should get persisted to the host machine.
@bahtou do you still have this issue with the latest version of minikube? Do you mind re-verifying, or providing the exact steps so I can reproduce it? (maybe the commands you ran that show the data is not persistent)
If we can have a reproducible way, we could and should make an integration test for this, so we test it on every PR.
@sharifelgamal I agree with your comment, but for some reason postgres data does not persist to the host, and I'm not sure if this is a permissions issue or minikube. @medyagh I still have this issue. I will get you the info requested.
@bahtou thank you. Do you mind also trying to explicitly mount the folder you want, either using the --mount-string option on start or using the minikube mount command, and seeing if you still have the problem?
minikube start --help | grep mount-string
--mount-string='/Users:/minikube-host': The argument to pass the minikube mount command on start
minikube version: v1.11.0
commit: 57e2f55f47effe9ce396cea42a1e0eb4f611ebbd
minikube start --driver=virtualbox --cpus=2 --memory=5120 --kubernetes-version=v1.18.3 --container-runtime=docker --mount=true --mount-string=/Users/<>/minikube/pgdata:/data
// output
[test-host] minikube v1.11.0 on Darwin 10.15.4
Using the virtualbox driver based on user configuration
Starting control plane node test-host in cluster test-host
Creating virtualbox VM (CPUs=2, Memory=5120MB, Disk=20000MB) ...
Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
Verifying Kubernetes components...
Creating mount /Users/<>/minikube/pgdata:/data ...
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "test-host"
From the above mount we have connected host <--> VM via --mount-string.
Create a file on your local host named postgres-pod.yaml and copy/paste the following configuration into the file:
---
apiVersion: v1
kind: Pod
metadata:
name: pg-pod
labels:
name: postgres
spec:
containers:
- name: postgres
image: postgres:12.3
imagePullPolicy: IfNotPresent
ports:
- name: pg-port
containerPort: 5432
env:
- name: POSTGRES_PASSWORD
value: admin
- name: PGDATA
value: /data/k8s
volumeMounts:
- name: pg-vol
mountPath: /var/lib/postgresql/data
securityContext:
runAsUser: 0
runAsGroup: 0
volumes:
- name: pg-vol
hostPath:
path: /data
# path: /Users/<>/minikube/pgdata
restartPolicy: Never
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
Then run
kubectl apply -f postgres-pod.yaml
Once the pod is up and running, go into it and verify that the data is persisted at /data/k8s:
$ kubectl exec -it pg-pod -- /bin/bash
root@pg-pod:/# ls /data/k8s
// you should get a list of postgres files
Now check that the files are showing up in the minikube vm
$ minikube ssh
$ ls -al /data
// nothing shows up in the vm
Go ahead and uncomment the second path to see the same results.
@bahtou I think there was a typo in your yaml. I was able to get the data to persist successfully by changing
mountPath: /var/lib/postgresql/data
to
mountPath: /data
I think the problem was that you were mounting the folder to the wrong path in your pod.
see details:
medya@~/t1 $ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls /data
k8s
$ ls /data/k8s/
ls: cannot open directory '/data/k8s/': Permission denied
$ sudo ls /data/k8s/
PG_VERSION pg_dynshmem pg_multixact pg_snapshots pg_tblspc postgresql.auto.conf
base pg_hba.conf pg_notify pg_stat pg_twophase postgresql.conf
global pg_ident.conf pg_replslot pg_stat_tmp pg_wal postmaster.opts
pg_commit_ts pg_logical pg_serial pg_subtrans pg_xact postmaster.pid
@bahtou if you can confirm that was the typo, we could close this issue?
btw thank you for the great step-by-step reproducible detail, we could make an integration test out of this!
@medyagh yes sorry typo on my part. You are correct and the data does persist to the minikube vm. What is missing is the persistence to the host machine: host <--> vm <--> container. The above persists only to the vm. I'm not able to view the files on the host machine even though it has been mounted.
@bahtou thank you for the clarification! This indeed does look like a bug, and we should fix it soon! I tried manually with both the hyperkit and docker drivers
minikube start --driver=hyperkit --mount-string=/Users/medya/t1/pgdata:/data
and then inside minikube ssh
$ echo "hello world" > /data/just_test.txt
$ ls /data/
just_test.txt k8s
outside minikube:
ls /Users/medya/t1/pgdata
is empty !!!
this is a bug! and we need to fix this! thank you for bringing it to our attention
@bahtou I wonder if the "minikube mount" command would fix your problem? That would require a terminal to stay open while the host folder is mounted
I open another terminal and run this:
minikube mount /Users/<>/minikube/pgdata:/data
// output
Mounting host path /Users/<>/minikube/pgdata into VM as /data ...
    Mount type:   <no value>
    User ID:      docker
    Group ID:     docker
    Version:      9p2000.L
    Message Size: 262144
    Permissions:  755 (-rwxr-xr-x)
    Options:      map[]
    Bind Address: 192.168.99.1:62273
Userspace file server: ufs starting
Successfully mounted /Users/<>/minikube/pgdata to /data
NOTE: This process must stay alive for the mount to be accessible ...
In another terminal run the manifest:
kubectl apply -f postgres-pod.yaml
kubectl get pod
NAME READY STATUS RESTARTS AGE
pg-pod 0/1 Error 0 40s
kubectl logs pg-pod
mkdir: cannot create directory '/data': Permission denied
@medyagh This could also be a postgres permissions issue? Something to keep in mind.
Also, just to confirm. By running this:
minikube mount /Users/<>/minikube/pgdata:/data
// output
Mounting host path /Users/<>/minikube/pgdata into VM as /data ...
    Mount type:   <no value>
    User ID:      docker
    Group ID:     docker
    Version:      9p2000.L
    Message Size: 262144
    Permissions:  755 (-rwxr-xr-x)
    Options:      map[]
    Bind Address: 192.168.99.1:62273
Userspace file server: ufs starting
Successfully mounted /Users/<>/minikube/pgdata to /data
NOTE: This process must stay alive for the mount to be accessible ...
and then going into another terminal and
$ minikube ssh
$ echo 'something' > /data/myfile.txt
Checking locally, I see the file persisted here: /Users/<>/minikube/pgdata
@bahtou I think you are right! This could be a permissions issue; you might need to make a service account for the pod that has access to the hostPath.
I verified that without postgres the mount works fine
medya@~/t1 $ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cd /data/
$ echo "hello" > /data/hello.txt
$ ls /data/
hello.txt
$ exit
logout
medya@~/t1 $ ls test/
hello.txt
we are doing a terrible job in minikube of providing a good tutorial for the storage provisioner, given that it is our v1.13.0 milestone to make the storage provisioner less buggy and also provide better tutorials
does this link help? https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
@bahtou once we figure out this issue, I would say let's take the example you created and make a good tutorial out of it for the minikube website, so others don't go through this pain
@bahtou does this help ?
try adding this to your pod
securityContext:
runAsUser: 0
@medyagh actually, I was writing a tutorial on using postgres + kubernetes when I encountered this issue with minikube. I began with PersistentVolume and hostPath, and then just did a simple pod with hostPath, which led to me posting here. I wasn't sure if it was a postgres permissions issue, a pod/container issue, or minikube. Unfortunately, setting runAsUser: 0 does not resolve the permissions error from above.
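A workaround that sometimes helps in this situation (a sketch on my part, not something verified in this thread; the pod and volume names mirror postgres-pod.yaml above, and uid/gid 999 is an assumption based on the official Debian-based postgres image) is to chown the hostPath from a root init container before postgres starts:

```yaml
# Hypothetical sketch: a root init container hands the mounted directory
# to the postgres user before the database container starts.
apiVersion: v1
kind: Pod
metadata:
  name: pg-pod
spec:
  initContainers:
    - name: fix-perms
      image: busybox:1.36
      # uid/gid 999 is the postgres user in the official Debian-based
      # image; adjust if your image differs.
      command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: pg-vol
          mountPath: /var/lib/postgresql/data
  containers:
    - name: postgres
      image: postgres:12.3
      env:
        - name: POSTGRES_PASSWORD
          value: admin
      volumeMounts:
        - name: pg-vol
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: pg-vol
      hostPath:
        path: /data
```

Note that if the hostPath is itself a 9p mount from the host, the chown may not take effect at all; this only helps when the path sits on the VM's own filesystem.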
@bahtou does the comment in this issue help? https://github.com/kubernetes/minikube/issues/7828#issuecomment-661831907
cc: @priyawadhwa
@bahtou have you tried recently? We had an update to the storage provisioner. Do you mind sharing whether you gave up on minikube or found a solution?
@medyagh I'll take a look tonight.
I have this issue too when running something like:
minikube start --driver=hyperkit --mount-string=/Users/medya/t1/pgdata:/data
However, I checked some other issues, and when people are using --mount-string they always pair it with the --mount flag, like:
minikube start --driver=hyperkit --mount-string=/Users/medya/t1/pgdata:/data --mount
And it solved my issue; I can see my file changes in both the minikube VM and my macOS.
However, I'm facing a new issue with the --mount flag, using Helm with Jenkins with a PVC:
kubelet, minikube Readiness probe failed: Get http://172.17.0.8:8080/login: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
@medyagh
minikube version: v1.12.3
commit: 2243b4b97c131e3244c5f014faedca0d846599f5
> minikube start -p test-mount --mount-string=/Users/medya/minikube/pgdata:/data --mount=true
> minikube ssh -p test-mount
$ echo "hello world" > /data/just_test.txt
$ ls /data/
just_test.txt
works in the VM.
Nothing on the outside (host):
ls /Users/medya/minikube/pgdata
@medyagh I'm running into the same issue on Minikube v1.16.0, mounting the folder in a separate directory using minikube mount. I'm using a local persistent volume as described in the kubernetes docs. Any further recommendations?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Hello,
Any news on this issue?
I have been facing it using macOS and minikube with the hyperkit driver. Either I get a permission error, or by manipulating the PGDATA variable I can make the pod run, but files are simply not persisted on the host path.
The only thing I need is to share a folder from the minikube VM with the host machine (minikube mount) and then use that same folder as the mount for /var/lib/postgresql/data in the postgresql pod, so that the pod writes to that VM folder, the writes are reflected in the host folder, and the data is kept among minikube runs.
I have tried every possible thing I could think of, but still no results.
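One pattern worth trying here (an assumption on my part, drawn from the official postgres image's documented PGDATA handling, not something verified against minikube's 9p mount in this thread): mount the shared folder one level above the data directory and point PGDATA at a subdirectory, so initdb creates that directory itself with the ownership and mode it wants:

```yaml
# Hypothetical snippet: PGDATA points one level below the mount point,
# so initdb creates the data directory itself instead of inheriting the
# permissions of the mount.
containers:
  - name: postgres
    image: postgres:12.3
    env:
      - name: POSTGRES_PASSWORD
        value: admin
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata  # subdirectory of the mounted volume
    volumeMounts:
      - name: pg-vol
        mountPath: /var/lib/postgresql/data
```

If the underlying mount also forces 0777 on newly created subdirectories, initdb's permission check will still fail, so this helps only when subdirectory permissions are honored.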
I guess I am facing this same issue with virtualbox as the driver. The image I am trying to deploy, sonatype/nexus3, operates with uid and gid 200.
mkdir -p /tmp/storage/nexus/blobs && ls -n --time-style=+"" /tmp/ | grep storage
(my user on Ubuntu 20.04 LTS has uid and gid equals to 1000)
drwxrwxr-x 3 1000 1000 4096 storage
minikube --profile test-mount start --driver=virtualbox --disk-size=15g --disable-driver-mounts --mount-string="/tmp/storage/:/storage" --mount
[test-mount] minikube v1.24.0 on Ubuntu 20.04
Using the virtualbox driver based on user configuration
Starting control plane node test-mount in cluster test-mount
Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=15360MB) ...
Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    Generating certificates and keys ...
    Booting up control plane ...
    Configuring RBAC rules ...
Creating mount /tmp/storage/:/storage ...
Verifying Kubernetes components...
    Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "test-mount" cluster and "default" namespace by default
minikube profile test-mount
(to simplify the commands)
minikube ssh -- ls -n --time-style=+"" / /storage | grep "data\|storage\|nexus"
drwxr-xr-x 2 0 0 4096 data
drwxrwxr-x 1 1000 1000 4096 storage
/storage:
drwxrwxr-x 1 1000 1000 4096 nexus
minikube ssh -- id
(docker uid/gid on the VM is 1000)
uid=1000(docker) gid=1000(docker) groups=1000(docker),10(wheel),1011(buildkit),1016(podman),1017(vboxsf)
kubectl create -f nexus.yaml.txt
namespace/nexus created
persistentvolume/nexus-data-pv created
persistentvolume/nexus-storage-pv created
persistentvolumeclaim/nexus-data-pvc created
persistentvolumeclaim/nexus-storage-pvc created
deployment.apps/nexus created
service/nexus-service created
kubectl -n nexus get all
(checking the deployment)
NAME READY STATUS RESTARTS AGE
pod/nexus-66f4ffdb9f-zxqhb 1/1 Running 0 106s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nexus-service NodePort 10.110.81.115
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nexus   1/1     1            1           106s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nexus-66f4ffdb9f   1         1         1       106s
kubectl -n nexus exec --stdin --tty nexus-66f4ffdb9f-zxqhb -- ls -n --time-style=+"" / | grep nexus
(the init pod fix-data-pvc worked, the uid and gid are 200)
Defaulted container "nexus" out of: nexus, fix-data-pvc (init), fix-storage-pvc (init)
drwxr-xr-x 15 200 200 4096 nexus-data
kubectl -n nexus exec --stdin --tty nexus-66f4ffdb9f-zxqhb -- ls -n --time-style=+"" /media/
(but the fix-storage-pvc didn't)
Defaulted container "nexus" out of: nexus, fix-data-pvc (init), fix-storage-pvc (init)
total 4
drwxrwxr-x 1 1000 1000 4096 nexus-storage
I also tried to use fsGroup, but that didn't work either. When the application tries to write data to /media/nexus-storage it gets an error, because the application user can't write to that location.
nexus.yaml.txt
Hi all, any news about this issue? I'm using the latest minikube, kubernetes, mac, and docker, and I have the same issue. It seems to be related to the postgres user and the locally mounted folder.
Postgres container log:
chmod: /var/lib/postgresql/data: Operation not permitted
The files belonging to this database system will be owned by user "postgres". This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled.
initdb: error: could not access directory "/var/lib/postgresql/data": Permission denied
If I configure mountPath: "/var/lib/postgresql" instead of mountPath: "/var/lib/postgresql/data", the permissions error goes away, but the host .../data folder is created empty and the real data stays inside the container, so after a reboot the data is lost.
I'm also using minikube mount /Users/[mylocalpath]/data/:/data; the mount is visible using the "minikube ssh" command, and it's also enabled in Docker's "Shared folders" config (/Users).
What I need is to store my Postgres PV in a local Mac folder and keep it persistent even after a minikube stop/start (or notebook reboots).
BTW, I'm mounting another folder with the same approach (PV + PVC, etc.) for a Geoserver app, and it works as expected, keeping the data after a minikube stop/start and writing the data to the correct local Mac path.
I think the Postgres container cannot write to the local folder even with chmod 777, because it's using the postgres user from the container.
Any idea how to solve the PV problem?
Thank you
The files stored under /data on the minikube node are supposed to be persisted, even if not shared with the host filesystem.
The same goes for the hostpath-provisioner, if using a PV. Currently there is some confusion about /tmp mountpoints vs. storage.
ohhh yes!, you are right, thanks to your comment I finally did it! Thank you!!
My notes to help others:
1) Mounting a local folder on your Mac host is NOT necessary (minikube mount xx:xx).
2) If you go into minikube ssh and look at /data, you will see your data there, and it survives minikube stop/start and even MacBook reboots.
3) My concrete example for the Postgis/Postgresql official image:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgis-pv
spec:
  accessModes:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgis-pvc
spec:
  accessModes:
---
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
  ...
    volumeMounts:
Hope it helps other people :) Thank you again
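For completeness, a minimal sketch of that pattern (resource names mirror the comment above; the storage class name, capacity, and subpath are illustrative): a hostPath PV rooted under /data on the minikube node, which survives stop/start without any host mount:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgis-pv
spec:
  storageClassName: manual        # illustrative; any matching class works
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgis           # /data on the minikube node persists across restarts
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgis-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

The claim binds to the PV via the matching storage class and access mode; the deployment then references postgis-pvc in its volumes section.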
I would like to add some more info I dug up about this issue.
I guess the root of the problem is the fact that PostgreSQL tries to change the data folder's permissions. It wants to make the folder accessible only to the postgres user. We can see this by running ls <data folder path> -l in the Minikube VM. For me I get this:
docker@minikube:~$ ls /etc/minikube/ -l
total 4
drwx------ 19 70 root 4096 Aug 2 18:16 pgsql
kubectl exec db-pg-0 -- getent passwd
...
postgres:x:70:70:Linux User,,,:/var/lib/postgresql:/bin/sh
You can see that only the postgres user (UID=70) can work with the pgsql folder.
But when I first mount the folder via minikube mount <host_path>:<minikube_vm_path> and then run ls <data folder path> -l in the Minikube VM, I get this:
docker@minikube:~$ ls /etc/minikube/ -l
total 8
drwxrwxrwx 1 70 root 8192 Aug 2 18:17 pgsql
So Minikube makes the mounted folder available to all users. When I then run the PostgreSQL pod I get these logs:
2022-08-02 18:17:18.441 UTC [51] FATAL: data directory "/var/lib/postgresql/data" has invalid permissions
2022-08-02 18:17:18.441 UTC [51] DETAIL: Permissions should be u=rwx (0700) or u=rwx,g=rx (0750).
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...
On the other side, PostgreSQL tries to make the data folder available only to the postgres user again, and fails, obviously.
I would say this is not a bug in Minikube or PostgreSQL themselves. It's about different approaches to working with permissions and different permission needs.
I guess we have two options here. First, set up PostgreSQL in some way so that it doesn't change the permissions of the data folder. That's bad practice from a security point of view, but could be useful for development purposes (just like Minikube and the hostPath persistent volume type themselves). Second, add to Minikube some way of configuring the permissions it gives to mounted folders, because for now it gives all permissions to all users as far as I can see. Having this feature, we could set drwx------ for the PostgreSQL data folder so the DB doesn't notice anything.
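To make the permission requirement concrete, here is a small shell check (the path is a throwaway example). On a regular Linux filesystem chmod 700 sticks, which is exactly what the 9p-mounted folder described above refuses to do:

```shell
# Create a throwaway directory and restrict it the way initdb expects.
mkdir -p /tmp/pgdata_demo
chmod 700 /tmp/pgdata_demo

# On a local filesystem the mode is now 0700; on minikube's 9p mount the
# same chmod is silently ignored and the directory stays 0777.
stat -c '%a' /tmp/pgdata_demo   # prints 700 on a local filesystem
```

Running the same two commands inside the 9p mount and comparing the stat output is a quick way to confirm whether the mount honors permission changes at all.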
If you want to reproduce my experiments, I'll attach manifests.
And here is Minikube mount command I use (I'm on Windows but the issue is the same as on Mac):
minikube mount %MINIKUBE_MOUNTED%:/etc/minikube --uid 70 --gid 0
Manifests:
# Source: db-postgres/templates/storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: db-pg
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
# Source: db-postgres/templates/persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: data-db-pg-0
spec:
storageClassName: db-pg
capacity:
storage: 256Mi
accessModes:
- ReadWriteOnce
hostPath:
path: /etc/minikube/pgsql
type: DirectoryOrCreate
---
# Source: db-postgres/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: db-pg
spec:
type: NodePort
selector:
helm.sh/chart: db-postgres-1.0.0
ports:
- name: tcp
port: 5432
targetPort: tcp
nodePort: 32345
---
# Source: db-postgres/templates/stateful-set.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: db-pg
spec:
serviceName: db-pg
replicas: 1
selector:
matchLabels:
helm.sh/chart: db-postgres-1.0.0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
app.kubernetes.io/name: db-postgres
app.kubernetes.io/component: database
app.kubernetes.io/instance: stateful-set
app.kubernetes.io/version: 14.4.0
app.kubernetes.io/managed-by: Helm
helm.sh/chart: db-postgres-1.0.0
spec:
containers:
- name: postgres
image: postgres:14.4-alpine
ports:
- name: tcp
protocol: TCP
containerPort: 5432
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: db-pg-secret
key: login
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: db-pg-secret
key: password
volumeClaimTemplates:
- metadata:
name: data
spec:
storageClassName: db-pg
resources:
requests:
storage: 128Mi
accessModes:
- ReadWriteOnce
Ok, I give up. I've tried a lot of different approaches, and the only thing I found is that minikube doesn't allow changing the permissions of volumes it mounts.
A folder mounted by minikube, and all its subfolders, get 0777 permissions, while PostgreSQL expects 0700 or 0750. I've tried to change the permissions of these folders in many different ways. sudo chmod -R 750 /etc/minikube in the minikube VM reports that the permissions changed, but when I then run ls -l /etc/minikube I see the same 0777 permissions again, so they aren't actually changed. This is minikube's behavior; I'm not sure if it is a bug or a feature.
I also haven't found a way to make PostgreSQL not change the permissions of its data folder.
My conclusion: for now there is no way to mount the PostgreSQL data folder to the host machine, because of minikube's volume mount behavior. At least I haven't found a solution.
Steps to reproduce the issue:
minikube start --driver=virtualbox --network-plugin=cni --memory=5120 --kubernetes-version=v1.18.3 --container-runtime=docker
kubectl apply -f postgres-pod.yaml
I am trying to persist the data to the host machine and not the minikube VM. The above manifest works and no errors are reported, but the data files are not persisted to the host machine; they are only in the minikube VM.
I have been fiddling with the security context, and the above configuration doesn't throw 'permission denied' errors. How do I pass the data files through onto the host machine?
host machine <---> minikube vm <---> pod/container
My understanding is that /Users should be reflected in both the minikube VM and the host machine [https://minikube.sigs.k8s.io/docs/handbook/mount/].