openebs / lvm-localpv

Dynamically provision Stateful Persistent Node-Local Volumes & Filesystems for Kubernetes, integrated with a backend LVM2 data storage stack.
Apache License 2.0

How to restore data through snapshot of localpv LVM #175

Open lixiaopengy opened 2 years ago

lixiaopengy commented 2 years ago

I only see examples related to creating snapshots in the documentation. How can I restore data from the generated snapshots?

pawanpraka1 commented 2 years ago

Restore is not supported yet. We can create a clone volume out of the snapshot. See https://github.com/openebs/lvm-localpv/issues/13.

murkylife commented 2 years ago

Can you please provide more info on how I can create a clone volume out of a snapshot?

pawanpraka1 commented 2 years ago

@murkylife clone is not supported yet. It is on our roadmap. Could you add your use case to issue #13? It will help us prioritize this feature.

murkylife commented 2 years ago

@pawanpraka1, done, thanks. If there's no support for any kind of restore, what's the point of the snapshot support? Can I access it somehow?

pawanpraka1 commented 2 years ago

@murkylife there are manual steps required to create the clone volume, and a few CRs we have to create manually to make it available to a new pod.

nkwangleiGIT commented 2 years ago

What's the status of snapshot restore support? Is anyone working on this?

phhutter commented 2 years ago

I'm also quite interested in this feature. Do you have any ETA for the 3.3 release?

PraveenJP commented 1 year ago

Any update on this? Please share the manual steps to restore, @pawanpraka1.

oscar-martin commented 11 months ago

This is how I restored a snapshot.

Assumptions:

A VolumeSnapshot has been created for a PVC that uses the local-path StorageClass (the sections below show how).

Preparation

The manifest below creates a PVC and a Pod. Once the Pod is running, exec into the container and create some content inside the /data folder (see the exec example after the manifest).

kubectl apply -f - <<<'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    test: backup
  name: test-local-path
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi
  storageClassName: local-path
  volumeMode: Filesystem
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  namespace: default
spec:
  containers:
  - image: busybox:latest
    command:
      - tail
      - "-f"
      - "/dev/null"
    imagePullPolicy: IfNotPresent
    name: container
    volumeMounts:
      - mountPath: /data
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-local-path
'
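
For example, test data can be written like this (a minimal sketch; the file name hello.txt is arbitrary):

kubectl exec -n default test-local-path -c container -- sh -c 'echo "hello from the source volume" > /data/hello.txt'
kubectl exec -n default test-local-path -c container -- ls -l /data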

Creation of a VolumeSnapshot

The manifest below creates a VolumeSnapshot for the PVC created above.

kubectl apply -f -<<<'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-shared-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: lvmpv-snapclass
  source:
    persistentVolumeClaimName: test-local-path
'
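
Once applied, wait until the snapshot reports readyToUse: true; for example:

kubectl get volumesnapshot test-shared-snapshot -n default -o yaml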

VolumeSnapshot status:

status:
  boundVolumeSnapshotContentName: snapcontent-7ba359d9-2f4f-463b-a533-5fc34784def5
  creationTime: "2023-09-28T07:44:50Z"
  readyToUse: true
  restoreSize: "0"

Restore process

Creation of an LVMVolume

Set .spec.capacity to the value of the original PV's .spec.capacity, expressed in bytes (e.g. 5Mi becomes 5242880).

Set .metadata.name to the VolumeSnapshot's .metadata.uid.

Set .spec.ownerNodeID to PV.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0].

Set .spec.volGroup and .spec.vgPattern to SC.parameters.volgroup and SC.parameters.vgpattern respectively, where SC is the StorageClass of the PV (PV.spec.storageClassName).

If vgpattern is empty in the StorageClass, derive it from the volume group name as ^{volgroup}$.

The commands after this list show one way to look up these values.
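
Assuming the example names above (test-shared-snapshot, test-local-path), the values can be collected with commands like these; adjust names to your environment:

# Snapshot UID: becomes the LVMVolume name and, later, the PV volumeHandle
kubectl get volumesnapshot test-shared-snapshot -n default -o jsonpath='{.metadata.uid}'

# Name of the original PV backing the source PVC
kubectl get pvc test-local-path -n default -o jsonpath='{.spec.volumeName}'

# Owner node, capacity and StorageClass of the original PV (replace <pv-name>)
kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0]}'
kubectl get pv <pv-name> -o jsonpath='{.spec.capacity.storage}'   # convert to bytes, e.g. 5Mi -> 5242880
kubectl get pv <pv-name> -o jsonpath='{.spec.storageClassName}'

# volgroup / vgpattern parameters of that StorageClass (replace <sc-name>)
kubectl get sc <sc-name> -o jsonpath='{.parameters.volgroup}{"\n"}{.parameters.vgpattern}{"\n"}'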

kubectl apply -f -<<<'
apiVersion: local.openebs.io/v1alpha1
kind: LVMVolume
metadata:
  labels:
    kubernetes.io/nodename: worker-2
  name: 7ba359d9-2f4f-463b-a533-5fc34784def5
  namespace: default
spec:
  capacity: "5242880"
  ownerNodeID: worker-2
  shared: "no"
  thinProvision: "no"
  vgPattern: ^test$
  volGroup: test
'
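
Once applied, the CR should reach the Ready state; assuming the plural resource name lvmvolumes and the default namespace used in the manifest above, it can be checked with:

kubectl get lvmvolumes.local.openebs.io 7ba359d9-2f4f-463b-a533-5fc34784def5 -n default -o jsonpath='{.status.state}'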

LVMVolume status:

status:
  state: Ready

Creation of a PV

Set .spec.capacity to the capacity requested by the PVC (here 5Mi).

Add the annotation pv.kubernetes.io/provisioned-by: local.csi.openebs.io so the PV can be deleted appropriately.

Set .metadata.name to pvc-<VS.metadata.uid>, where VS is the VolumeSnapshot.

Set .spec.csi.volumeHandle to VS.metadata.uid.

Set .spec.csi.volumeAttributes["openebs.io/volgroup"] to SC.parameters.volgroup.

Set .spec.nodeAffinity to the original PV's .spec.nodeAffinity (it can be dumped with the command below).
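
For reference, the original PV's nodeAffinity block can be dumped directly (replace <pv-name> with the original PV name):

kubectl get pv <pv-name> -o jsonpath='{.spec.nodeAffinity}'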

kubectl apply -f -<<<'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-7ba359d9-2f4f-463b-a533-5fc34784def5
  annotations:
    pv.kubernetes.io/provisioned-by: local.csi.openebs.io
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Mi
  csi:
    driver: local.csi.openebs.io
    fsType: ext4
    volumeAttributes:
      openebs.io/cas-type: localpv-lvm
      openebs.io/volgroup: test
    volumeHandle: 7ba359d9-2f4f-463b-a533-5fc34784def5
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - worker-2
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
'
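
Once created, the PV should report the Available phase until it is claimed:

kubectl get pv pvc-7ba359d9-2f4f-463b-a533-5fc34784def5 -o jsonpath='{.status.phase}'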

PV status:

status:
  phase: Available

Creation of a PVC

Set .spec.volumeName to the new PV's .metadata.name.

kubectl apply -f -<<<'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-path-from-snapshot
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Mi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-7ba359d9-2f4f-463b-a533-5fc34784def5
'
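
The new PVC should bind immediately to the pre-created PV:

kubectl get pvc test-local-path-from-snapshot -n default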

PVC status:

status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Mi
  phase: Bound

Check the restored volume from a new Pod

kubectl apply -f -<<<'
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path-from-snapshot
  namespace: default
spec:
  containers:
  - image: busybox:latest
    command:
      - tail
      - "-f"
      - "/dev/null"
    imagePullPolicy: IfNotPresent
    name: container
    volumeMounts:
      - mountPath: /data
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-local-path-from-snapshot
'

Once the Pod is running, exec into the container and check that the snapshotted data is present in /data.
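
For example, assuming the test file written in the Preparation step:

kubectl exec -n default test-local-path-from-snapshot -c container -- ls -l /data
kubectl exec -n default test-local-path-from-snapshot -c container -- cat /data/hello.txt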

Tear down

dsharma-dc commented 3 months ago

This is to be added as part of the roadmap for enhancements.