Open · geku opened this issue 5 years ago
Hey, thanks for creating this issue and providing your solution :+1:
I confirmed that the provisioner you posted is working, by saving it to `prov.yaml` and having it deployed at cluster creation time by mounting it like this: `k3d create -v $(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml`.
However, one should note that the hostPath `/tmp` is not persisted on disk when we shut down the cluster, unless we declare it as a Docker volume/bind mount.
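For example, both mounts can be passed at creation time (a sketch; `$(pwd)/pv-data` is a placeholder host directory chosen for illustration):

```bash
# Deploy the provisioner manifest at creation time and bind-mount a host
# directory over /tmp so hostPath volumes survive cluster shutdowns.
k3d create \
  -v "$(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml" \
  -v "$(pwd)/pv-data:/tmp"
```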
I am using https://github.com/rancher/local-path-provisioner for this in the IQSS/dataverse-kubernetes k3s/k3d demo. Obviously, you need to patch your PVCs. I did that via a Kustomization (`kubectl apply -k`).
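A minimal sketch of such a patch (the PVC name `data`, the `../base` directory, and the `local-path` storage class are assumptions for illustration, not the demo's actual manifests):

```yaml
# kustomization.yaml
resources:
  - ../base
patchesStrategicMerge:
  - pvc-patch.yaml
```

```yaml
# pvc-patch.yaml: point an existing PVC at the provisioner's storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: local-path
```

Applied with `kubectl apply -k <dir>`.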
I also just tested the Local Persistent Volumes, which are GA since Kubernetes v1.14. Here's what I did:

1. `k3d create -n test --workers 2`
   - `export KUBECONFIG="$(k3d get-kubeconfig --name='test')"`
2. `kubectl apply -f storageclass.yaml`, where `storageclass.yaml` is:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```
3. `docker exec k3d-test-worker-0 mkdir /tmp/test-pv` (the path we're accessing has to exist)
4. `kubectl apply -f deploy.yaml`, where `deploy.yaml` is:
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-deploy
  labels:
    app: test-deploy
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: test-deploy
  template:
    metadata:
      name: test-deploy
      labels:
        app: test-deploy
    spec:
      containers:
        - name: main
          image: postgres
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /test
              name: test-mount
      volumes:
        - name: test-mount
          persistentVolumeClaim:
            claimName: test-mount
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-mount
spec:
  volumeName: example-pv
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    # (the rest of the PersistentVolume spec is cut off; see the sketch below)
```
5. `kubectl exec test-deploy-76cbfc4c94-v2q8s touch /test/test.txt`
6. `docker exec k3d-test-worker-0 ls /tmp/test-pv`
This was just a test, to see that the thing is working :+1:
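The `PersistentVolume` manifest above is cut off after `accessModes:`. A hedged sketch of a completion that would make the example self-contained (the access mode, storage class, local path, and node affinity are assumptions inferred from the PVC above and from steps 3 and 6, not necessarily the manifest originally posted):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce               # assumed: matches the PVC above
  storageClassName: local-storage
  local:
    path: /tmp/test-pv            # the directory created in step 3
  nodeAffinity:                   # local PVs require node affinity
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-test-worker-0   # the node used in steps 3 and 6
```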
Cool @iwilltry42! :+1:
Let me emphasize an IMHO important aspect: the local storage provider in K8s 1.14 is promising, but (as you noted) it does not yet support dynamic provisioning.
This is my only reason to favor rancher/local-path-provisioner over the K8s local storage. Judging by kubernetes-sigs/sig-storage-local-static-provisioner#51, it's going to take a while...
k3s built-in local storage provider coming: https://twitter.com/ibuildthecloud/status/1167511203108638720
k8s local dynamic provisioning issue: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/issues/51
k3s has the `local-storage` storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/
@iwilltry42 I noticed that the example (https://rancher.com/docs/k3s/latest/en/storage/#pvc-yaml) uses a capacity, but according to the docs (https://github.com/rancher/local-path-provisioner#cons), that's not possible yet. EDIT: sorry, it's a request, not a limit :)
> k3s has the local-storage storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/

That seems to work well with `k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage"` for persistence. The only issue I'm seeing is that `k3d delete` doesn't allow the local path provisioner to clean up existing PVC folders, and they don't seem to be cleaned up when creating a new cluster using the same storage folder either.
Hi @deiwin, `k3d delete` doesn't do any kind of "graceful" shutdown of the managed k3s instances; it simply removes the containers. What would be required to allow for a proper cleanup, and what would be the use case for this?
> What would be required to allow for a proper cleanup

If it'd delete all deployments/statefulsets/daemonsets using PVCs and then all PVCs, then I think `local-path-provisioner` would do the cleanup. It'd have to have some way of knowing that that cleanup is done, though.
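In manual terms, that cleanup would look roughly like this before running `k3d delete` (a sketch of the idea, not something k3d does today; it wipes all workloads and PVCs in the cluster):

```bash
# Remove the workloads that consume PVCs, then the PVCs themselves,
# giving local-path-provisioner a chance to reclaim the backing folders.
kubectl delete deployments,statefulsets,daemonsets --all --all-namespaces
kubectl delete pvc --all --all-namespaces --wait=true
```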
> what would be the use case for this?

You write above that "However, one should note that the hostPath `/tmp` is not persisted on disk when we shut down the cluster, if we don't declare it as a docker volume/bind." I haven't verified it, but I was thinking that doing `k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage"` as I mentioned above would help with that persistence.
I'm working on a cluster setup that supports two different use cases: 1) start and stop the same long-running cluster with persistent storage (for a DB) and 2) recreate whole clusters from scratch fairly often.
As I said, I haven't verified this, but I think case (1) requires persistence, but case (2) currently leaves behind data from PVCs that don't exist anymore in new clusters.
As a workaround, I'm currently pairing `k3d delete` with `rm -rf <local-path-for-storage>`. That's fairly simple, but I don't know if other users would think to do that.
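A tiny wrapper along those lines (a sketch; the cluster name and storage directory below are placeholders, not values from this thread):

```bash
#!/usr/bin/env bash
# Delete the k3d cluster and wipe the host directory backing
# /var/lib/rancher/k3s/storage, so stale PVC folders don't leak into new clusters.
set -euo pipefail

CLUSTER_NAME="test"                # placeholder
STORAGE_DIR="$HOME/.k3d/storage"   # placeholder for <local-path-for-storage>

k3d delete --name "$CLUSTER_NAME"
rm -rf "$STORAGE_DIR"
```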
Hey :wave: Is there any more need or input for this feature?
What I am missing, at least in the documentation, is where the data is stored locally. Does Docker map /var/lib/rancher/k3s/storage to /tmp? Here it is documented to point to /opt: https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage. A clarification / documentation of this would be greatly appreciated, as would the option to add a PV pointing to a local folder for storage in an existing k3d cluster.
I also think cleanup is not needed from k3d when it comes to a mapped local file: a PV could be mapped to multiple clusters, and otherwise the cleanup logic would have to account for all of that. The PVC could be cleaned, but doesn't that just get removed when the whole cluster is removed? I noticed one can add a volume but not a mapping to a file system after the cluster is created (unless I missed this and should run a docker command). If that can be considered a feature of k3d, then there could be a delete option too, which would do what @deiwin suggests in a more or less complete way. Just my thoughts :)
Hi @vincentgerris, thanks for your input!
The `local-path-provisioner` is a "feature" (as in an auto-deployed service) of K3s, so it's not directly related to k3d, and k3d does not manage anything about it.
You can find the configuration of the provisioner here: https://github.com/k3s-io/k3s/blob/master/manifests/local-storage.yaml and you can edit it e.g. as mentioned here: https://github.com/rancher/k3d/discussions/787#discussioncomment-1459393
In k3d, the path generated by K3s is `/var/lib/rancher/k3s/storage`, so you can use `k3d cluster create -v /my/local/path:/var/lib/rancher/k3s/storage` to map it to some directory on your host.
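As a concrete sketch (the cluster name and PVC name are placeholders; `local-path` is the storage class K3s ships for this provisioner):

```bash
# Create a cluster whose provisioned volumes end up in /my/local/path on the host.
k3d cluster create mycluster -v /my/local/path:/var/lib/rancher/k3s/storage

# A PVC using the built-in storage class...
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# ...shows up as a directory under /my/local/path once a pod consumes it
# (the local-path class uses WaitForFirstConsumer binding).
ls /my/local/path
```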
> I noticed one can add a volume but not a mapping to a file system after the cluster is created (unless I missed this and should run a docker command).
Docker does not have any option for adding mounts to existing containers, so there's no way for us to achieve this in k3d. You'd have to set up the volume mounts upfront when creating the cluster.
Update 1: I created an issue to put up some documentation on K3s features and how to use/modify them in k3d: https://github.com/rancher/k3d/issues/795
It would be great to have a default storage provider similar to what Minikube provides. This would make it possible to deploy and develop Kubernetes pods that require storage.
Scope of your request
An additional addon to deploy to single-node clusters.
Describe the solution you'd like
I got it working by using Minikube's storage provisioner, by creating the following resources:
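(The resources originally posted here are not reproduced in this thread. As a rough, heavily hedged sketch of the Minikube-style approach: all names, the image, and the RBAC below are assumptions modeled on Minikube's storage-provisioner addon, not the exact resources from this issue.)

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
---
# The provisioner pod stores volume data under /tmp on the node (assumption),
# which is why /tmp is discussed above as needing a docker volume/bind.
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  serviceAccountName: storage-provisioner
  containers:
    - name: storage-provisioner
      image: gcr.io/k8s-minikube/storage-provisioner
      command: ["/storage-provisioner"]
      volumeMounts:
        - mountPath: /tmp
          name: tmp
  volumes:
    - name: tmp
      hostPath:
        path: /tmp
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s.io/minikube-hostpath
```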
Describe alternatives you've considered
An alternative might be local persistent volumes, but the Minikube solution looks simpler. With local persistent volumes it could work even with multiple nodes.
I'm not sure if the addon should be integrated into k3s directly. On the other hand, I think it's more a feature required for local development and therefore probably fits k3d better.