Eelis opened this issue 5 years ago
This sounds like a good idea! PRs welcome!
Since /data has some things in it already, what would you think about another path, such as /pvc/<claim>?
Ah, /pvc/<claim> would be equally nice. :)
I thought the complaint was about the <claim> rather than the /tmp/hostpath-provisioner?
We already know that the default paths for the hostpath provisioner aren't perfect, as noted in #3318
But I'm not sure about giving human names to machine-managed resources, ripe for conflicts? Seems better to have a UUID-style host directory, and then mostly worry about the mount point...
> I thought the complaint was about the <claim> rather than the /tmp/hostpath-provisioner? We already know that the default paths for the hostpath provisioner aren't perfect, as noted in #3318
This feature request (not complaint!) is about both, because I did not know about #3318. :)
> But I'm not sure about giving human names to machine-managed resources, ripe for conflicts?
For there to be a conflict, there would have to be two PVCs with the same name, which is already disallowed by kubernetes, right?
> For there to be a conflict, there would have to be two PVCs with the same name, which is already disallowed by kubernetes, right?
Possibly. The whole hostpath provisioner desperately needs a rewrite anyway, so maybe this is a good feature to have in the new one.
I could look into this, along with #3318. I did a quick look into the code; do you think it is enough to just change/parametrize the path (a minikube-only change), or should we change how the PV name is created (a change in sig-storage-lib-external-provisioner, affecting all implementations)?
From my point of view, it's probably good enough to fix this for minikube.
Would switching to https://github.com/rancher/local-path-provisioner help?
I will stick with mounting "/tmp/hostpath-provisioner" on that path and then run minikube start via a wrapper script until that is done.
This issue is free to be picked up; as @11janci mentioned, you could cherry-pick from this PR: https://github.com/kubernetes/minikube/pull/5400
/assign @nanikjava
Addons such as storage-provisioner are deployed as Docker images, which need to be generated. The following are the steps to use the newly generated Docker image:
eval $(minikube docker-env)
minikube addons disable storage-provisioner && minikube addons disable default-storageclass
docker rmi -f gcr.io/k8s-minikube/storage-provisioner:v1.8.1
make clean && make storage-provisioner-image && kubectl delete pod storage-provisioner --grace-period=0 --force --namespace kube-system
minikube addons enable storage-provisioner && minikube addons enable default-storageclass
There are primarily two different issues that need to be fixed to make storage-provisioner work:
(1) The Docker image generated with the available /deploy/storage-provisioner/Dockerfile does not work. The generated image kept on throwing standard_init_linux.go:211: exec user process caused "no such file or directory". Testing with a different distro image (such as ubuntu-16) works; need to find out the leanest distro that makes it work.
(2) The RBAC outlined in storage-provisioner.tmpl.yaml does not work, as it is lacking the 'endpoints' cluster role permission. This can be overcome by editing the role with KUBE_EDITOR="nano" kubectl edit clusterrole system:persistent-volume-provisioner and adding a new 'endpoints' resource with all the different verbs, as sketched below.
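For reference, a sketch of the kind of rule that would end up in the cluster role's rules list after the edit (the full verb set is shown for illustration; a smaller set may be sufficient):
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]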
The following are the commands used for troubleshooting:
kubectl logs storage-provisioner -n kube-system
kubectl get events -n kube-system
minikube logs
kubectl describe pv && kubectl describe pvc
kubectl describe clusterrole.rbac 2>&1 | tee x.txt
The smallest workable generated images use bitnami/minideb:jessie and debian:slim (less than 90MB), while other distros (ubuntu, centos, etc.) come out at more than 100MB. The Alpine distro does not work, as it throws the exec error.
@nanikjava did you take into account what the other k8s images are using, as this would get shared? When I use for example ubuntu:16.04 as the base for 8 images, I use it only once.
> @nanikjava did you take into account what the other k8s images are using, as this would get shared? When I use for example ubuntu:16.04 as the base for 8 images, I use it only once.
I did look into the other Dockerfiles that are inside minikube, and most of the images they use are ubuntu, which is much bigger in terms of size. I opted for the debian-base from the k8s repository, as the final image comes to less than 75MB.
@nanikjava while I respect your effort to keep an image small, I think that is not the most important concern, as Docker images are layered: if you depend on the same image and image version as other images, it gets reused in /var/lib/docker and will exist only once! So when 1 image uses ubuntu (== 200MB), 50 other images can depend on that, and it will take only 200MB + the size of the individual images. So let's say the 50 images all include a file of 1KB each: we get a total size for all 51 images of 200.05MB.
PR #6156
As discussed in the review, the storage-provisioner was supposed to be built statically. We must have lost that feature along the way, leading to the issues with the "scratch" image.
i.e. first we reused LOCALKUBE_LDFLAGS for the storage provisioner (0fe440e58d6c4f1b3fca74bb2434b4d0732670dd), then we made localkube link dynamically (dcb5c2cc5077d9963ce96833490cce73bd225feb), dropping the -extldflags '-static'. Oops.
If we wanted to link it dynamically, we could consider using alpine rather than buster:
docker.io/golang:1.13.4-alpine
docker.io/golang:1.13.4-buster
see https://hub.docker.com/_/golang
But for now, I think we will keep the static binary. @tstromberg fixed the linking in bf0dc79bcd087ea5684b6827bf8de2db758c4e41:
$ LANG=C ldd out/storage-provisioner
not a dynamic executable
That will work with the scratch image.
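For context, the scratch image only works because the binary is fully static, which is what the ldd check above verifies. Such a build is typically produced along these lines (an illustrative command showing the general approach, not necessarily minikube's exact Makefile invocation):
CGO_ENABLED=0 go build -ldflags "-extldflags '-static'" -o out/storage-provisioner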
It also doesn't affect the paths at all?
In case you really wonder about size, here is a breakdown of the relative sizes within the 27M binary:
https://gist.github.com/afbjorklund/7fc8583e4ff03e8dcc958662821e2083
Most of it comes from the dependencies, mainly github.com/golang/glog and k8s.io (with friends): 282 dependencies (54 internal, 228 external, 0 testing).
Can we get rid of the debian stuff in this one (#6156), and merge it with #5400, please?
@afbjorklund I have modified the PR to use scratch and also to build the storage-provisioner as static. The PR includes the #5400 changes.
More documentation on working with storage-provisioner. To deploy storage.yaml manually:
kubectl apply -f storage.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
  labels:
    integration-test: storage-provisioner
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceAccountName: storage-provisioner
  hostNetwork: true
  containers:
    - name: storage-provisioner
      image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
* Sample PVC .yaml file from #3129
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Use the following command to deploy: kubectl create -f test.pvc.default.yaml
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Kindly help me to change the hostpath provisioner (k8s.io/minikube-hostpath) in minikube, because my deployments are in the root directory and due to that I am facing a storage issue. I have to deploy my deployments in the local directory; kindly help me to mount on /mnt.
We have created our own PV and PVC, but even so it is mounting on our root directory; kindly share your ideas regarding this issue.
Dynamic persistent volume provisioning works great, but uses paths like /tmp/hostpath-provisioner/pvc-<uuid>, which are not easy to remember and type. For development and testing, it would be much more convenient if the path could be customized to something along the lines of /data/<claim>. This would make a PVC named mysqldb resolve to an automatically generated PV that stores the data in /data/mysqldb. Much easier to remember and type! :)
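For illustration, a minimal sketch of such a claim (the 1Gi size here is an arbitrary example); under this proposal the hostpath provisioner would back it with /data/mysqldb rather than a generated pvc-<uuid> directory:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqldb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi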