k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/

[Feature] Local Storage Provider #67

Open geku opened 5 years ago

geku commented 5 years ago

It would be great to have a default storage provider similar to what Minikube provides. This would make it possible to deploy and develop Kubernetes pods that require storage.

Scope of your request

An additional addon to deploy to single-node clusters.

Describe the solution you'd like

I got it working by using Minikube's storage provisioner and creating the following resources:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: storage-provisioner
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: storage-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:persistent-volume-provisioner
subjects:
  - kind: ServiceAccount
    name: storage-provisioner
    namespace: kube-system
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-provisioner
  namespace: kube-system
spec:
  serviceAccountName: storage-provisioner
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  hostNetwork: true
  containers:
  - name: storage-provisioner
    image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
    command: ["/storage-provisioner"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /tmp
      name: tmp
  volumes:
  - name: tmp
    hostPath:
      path: /tmp
      type: Directory
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: k8s.io/minikube-hostpath

Describe alternatives you've considered

An alternative might be local persistent volumes, but the Minikube solution looks simpler. With local persistent volumes it could even work with multiple nodes.

I'm not sure whether the addon should be integrated into k3s directly. On the other hand, I think it's more a feature needed for local development and therefore probably a better fit for k3d.

iwilltry42 commented 5 years ago

Hey, thanks for creating this issue and providing your solution :+1: I confirmed that the provisioner you posted works by saving it to prov.yaml and having it applied at cluster creation time, mounting it like this: k3d create -v $(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml. However, note that the hostPath /tmp is not persisted on disk when we shut down the cluster if we don't declare it as a Docker volume/bind.
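
To make that hostPath survive a cluster shutdown, the same -v flag can back /tmp with a directory on the host. A minimal sketch using the k3d v1 syntax from above (the ./pv-data directory is an assumption; any host path works):

    # mount the provisioner manifest and back its hostPath (/tmp) with a host directory
    mkdir -p "$(pwd)/pv-data"
    k3d create \
      -v "$(pwd)/prov.yaml:/var/lib/rancher/k3s/server/manifests/prov.yaml" \
      -v "$(pwd)/pv-data:/tmp"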

poikilotherm commented 5 years ago

I am using https://github.com/rancher/local-path-provisioner for this in the IQSS/dataverse-kubernetes k3s/k3d demo.

Obviously, you need to patch your PVCs. I did that via Kustomize (kubectl apply -k).
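
A rough sketch of such a patch (file names here are placeholders, not the actual repo layout); it sets each PVC's storageClassName to the provisioner's local-path class:

    # kustomization.yaml
    resources:
      - pvc.yaml
    patches:
      - target:
          kind: PersistentVolumeClaim
        patch: |-
          - op: add
            path: /spec/storageClassName
            value: local-path

Then kubectl apply -k . applies the patched manifests.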

iwilltry42 commented 5 years ago

I also just tested Local Persistent Volumes, which have been GA since Kubernetes v1.14. Here's what I did:

  1. Create the cluster and point kubectl at it:

    k3d create -n test --workers 2
    export KUBECONFIG="$(k3d get-kubeconfig --name='test')"

  2. kubectl apply -f storageclass.yaml, where storageclass.yaml:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
  3. docker exec k3d-test-worker-0 mkdir /tmp/test-pv (the path we're accessing has to exist)
  4. kubectl apply -f deploy.yaml, where deploy.yaml:
    
    apiVersion: apps/v1beta2
    kind: Deployment
    metadata:
      name: test-deploy
      labels:
        app: test-deploy
    spec:
      replicas: 1
      strategy:
        type: Recreate
      selector:
        matchLabels:
          app: test-deploy
      template:
        metadata:
          name: test-deploy
          labels:
            app: test-deploy
        spec:
          containers:
            - name: main
              image: postgres
              imagePullPolicy: IfNotPresent
              volumeMounts:
                - mountPath: /test
                  name: test-mount
          volumes:
            - name: test-mount
              persistentVolumeClaim:
                claimName: test-mount
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-mount
    spec:
      volumeName: example-pv
      storageClassName: local-storage
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv
    spec:
      capacity:
        storage: 1Gi
      # volumeMode field requires the BlockVolume alpha feature gate to be enabled
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      storageClassName: local-storage
      local:
        path: /tmp/test-pv
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - k3d-test-worker-0
This was just a test, to see that the thing is working :+1:
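
For completeness, the binding can be checked with plain kubectl:

    kubectl get pv,pvc
    kubectl get pods -l app=test-deploy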

poikilotherm commented 5 years ago

Cool @iwilltry42! :+1:

Let me emphasize what is IMHO an important aspect: the local storage provider in K8s 1.14 is promising, but (as you noted) it does not yet support dynamic provisioning.

This is my only reason to favor rancher/local-path-provisioner over K8s local storage. Judging by kubernetes-sigs/sig-storage-local-static-provisioner#51, it's going to take a while...

iwilltry42 commented 5 years ago

The k3s built-in local storage provider is coming: https://twitter.com/ibuildthecloud/status/1167511203108638720

lukasmrtvy commented 4 years ago

k8s local dynamic provisioning issue: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/issues/51

iwilltry42 commented 4 years ago

k3s has the local-storage storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/
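
The built-in StorageClass is named local-path, so a PVC only needs to reference it. A minimal sketch (the name test-pvc is a placeholder):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: local-path
      resources:
        requests:
          storage: 1Gi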

lukasmrtvy commented 4 years ago

@iwilltry42 I noticed that the example (https://rancher.com/docs/k3s/latest/en/storage/#pvc-yaml) uses capacity, but according to the docs (https://github.com/rancher/local-path-provisioner#cons), it's not possible yet. EDIT: sorry, it's a request, not a limit :)

deiwin commented 4 years ago

k3s has the local-storage storage class built in now: https://rancher.com/docs/k3s/latest/en/storage/

That seems to work well with k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage" for persistence. The only issue I'm seeing is that k3d delete doesn't allow the local-path provisioner to clean up existing PVC folders, and they don't seem to be cleaned up when creating a new cluster using the same storage folder either.
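
Spelled out, the setup I mean is something like this (./k3d-storage is just an example path):

    mkdir -p "$(pwd)/k3d-storage"
    k3d create --volume "$(pwd)/k3d-storage:/var/lib/rancher/k3s/storage"
    # PVC data provisioned by local-path now lands in ./k3d-storage on the host
    # and survives deleting and recreating the cluster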

iwilltry42 commented 4 years ago

Hi @deiwin, k3d delete doesn't do any kind of "graceful" shutdown of the managed k3s instances; it simply removes the containers. What would be required to allow for a proper cleanup, and what would be the use case for this?

deiwin commented 4 years ago

What would be required to allow for a proper cleanup

If it deleted all deployments/statefulsets/daemonsets using PVCs and then all PVCs, then I think local-path-provisioner would do the cleanup. It'd have to have some way of knowing that the cleanup is done, though.
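
Done by hand, that teardown might look roughly like this (a sketch; it assumes every workload is expendable and the cluster is named test):

    # delete workloads first so PVCs are released, then the PVCs themselves,
    # giving local-path-provisioner a chance to remove its folders
    kubectl delete deployments,statefulsets,daemonsets --all --all-namespaces
    kubectl delete pvc --all --all-namespaces
    kubectl wait --for=delete pvc --all --all-namespaces --timeout=120s
    k3d delete --name test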

what would be the use case for this?

You wrote above that "However, note that the hostPath /tmp is not persisted on disk when we shut down the cluster if we don't declare it as a Docker volume/bind." I haven't verified it, but I was thinking that doing k3d create --volume "<local-path>:/var/lib/rancher/k3s/storage" as I mentioned above would help with that persistence.

I'm working on a cluster setup that supports two different use cases: 1) start and stop the same long-running cluster with persistent storage (for a DB) and 2) recreate whole clusters from scratch fairly often.

As I said, I haven't verified this, but I think case (1) requires persistence, while case (2) currently leaves behind data from PVCs that no longer exist in new clusters.

As a workaround, I'm currently pairing k3d delete with rm -rf <local-path-for-storage>. That's fairly simple, but I don't know whether other users would think to do that.
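
In script form, the workaround is just (names are placeholders):

    k3d delete --name test
    rm -rf "$(pwd)/k3d-storage"   # drop leftover PVC folders from the old cluster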

iwilltry42 commented 3 years ago

Hey :wave: Is there any more need or input for this feature?

vincentgerris commented 3 years ago

What I am missing, at least in the documentation, is where the data is stored locally. Does Docker map /var/lib/rancher/k3s/storage to /tmp? Here it is documented to point to /opt: https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage. A clarification / documentation of this would be greatly appreciated, as would the option to add a PV pointing to a local folder for storage in an existing k3d cluster.

vincentgerris commented 3 years ago

I also think cleanup from k3d is not needed when it comes to a mapped local file: a PV could be mapped to multiple clusters, and otherwise the cleanup logic would have to check for all of that. The PVC could be cleaned, but doesn't that just get removed when the whole cluster is removed? I noticed one can add a volume but not a mapping to a file system after the cluster is created (unless I missed this and should run a docker command). If that can be considered a feature of k3d, then there could be a delete option too, which would do what @deiwin suggests in a more or less complete way. Just my thoughts :)

iwilltry42 commented 3 years ago

Hi @vincentgerris, thanks for your input! The local-path-provisioner is a "feature" (as in an auto-deployed service) of K3s, so it's not directly related to k3d, and k3d does not manage anything about it. You can find the configuration of the provisioner here: https://github.com/k3s-io/k3s/blob/master/manifests/local-storage.yaml and you can edit it e.g. as mentioned here: https://github.com/rancher/k3d/discussions/787#discussioncomment-1459393. In k3d, the path generated by K3s is /var/lib/rancher/k3s/storage, so you can use k3d cluster create -v /my/local/path:/var/lib/rancher/k3s/storage to map it to some directory on your host.
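
A quick way to see where the data actually lands (assuming a cluster named mycluster) is to list the storage directory inside the server node and compare it with your mounted host path:

    docker exec k3d-mycluster-server-0 ls -la /var/lib/rancher/k3s/storage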

I noticed one can add a volume but not a mapping to a file system after the cluster is created (unless I missed this and should run a docker command).

Docker does not offer any way to add mounts to existing containers, so there's no possibility for us to achieve this in k3d. You'd have to set up the volume mounts upfront when creating the cluster.

Update 1: I created an issue to put up some documentation on K3s features and how to use/modify them in k3d: https://github.com/rancher/k3d/issues/795