Closed by gothub 2 years ago
@mbjones @nickatnceas Please review if you have the cycles. If no problems are found, I'll merge these to the main branch. Thx.
https://github.com/DataONEorg/k8s-cluster/blob/develop/storage/data-recovery.md
Great info. If @nickatnceas thinks the examples don't contain any sensitive info (not sure about the volume identifiers, etc.), then I'd say it looks good. One thing that would be nice to add is whether it's possible to create a new PV, if the PV ends up being deleted entirely, using just the Ceph RBD or CephFS info. This would probably mean mapping the existing Ceph storage allocation to a manually created PV definition. Would that work?
ceph-csi RBD-based PVs can be created statically, so it is possible to re-map the RBD image to a PV. The details are in the ceph-csi GitHub repo, but I will distill them for the data recovery doc. The CephFS-based PVs are already being created statically, so if one of these PVs is deleted, it just needs to be re-created.
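For reference, a statically provisioned RBD PV would look roughly like the sketch below. This follows the ceph-csi static PVC documentation; the PV name, secret name/namespace, clusterID, pool, image name, size, and filesystem type are all placeholders that would need to match the actual cluster and the existing RBD image.

```yaml
# Sketch of a static PV that re-maps an existing ceph-csi RBD image.
# All values below are placeholders; they must match the real cluster,
# pool, image, and node-stage secret used by the ceph-csi RBD driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-rbd-pv            # placeholder PV name
spec:
  storageClassName: ""              # empty so the provisioner ignores it
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi                   # must match the existing image size
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  csi:
    driver: rbd.csi.ceph.com
    fsType: ext4                    # filesystem already on the image
    nodeStageSecretRef:
      name: csi-rbd-secret          # placeholder secret name
      namespace: ceph-csi           # placeholder namespace
    volumeAttributes:
      clusterID: "<ceph-cluster-id>"
      pool: "<rbd-pool-name>"
      staticVolume: "true"          # tells ceph-csi not to provision a new image
      imageFeatures: "layering"
    volumeHandle: <existing-rbd-image-name>
```

A PVC can then be bound to this PV by referencing it via `spec.volumeName` with an empty `storageClassName`.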
Added ./storage/data-recovery.md
If a k8s PV becomes unusable or is deleted, it may be necessary to recover the data written by a k8s application. Document procedures for recovering data for both RBD image-based PVs and CephFS-based PVs.
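For the CephFS case, re-creating a deleted statically provisioned PV might look like the sketch below, again following the ceph-csi static PVC documentation. The PV name, secret, clusterID, filesystem name, root path, and size are placeholders to be replaced with the values of the original PV.

```yaml
# Sketch of re-creating a statically provisioned CephFS PV with ceph-csi.
# All values are placeholders; they must match the existing CephFS
# filesystem, subvolume path, and node-stage secret.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-cephfs-pv          # placeholder PV name
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi                    # should match the original PV's size
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret        # placeholder secret name
      namespace: ceph-csi            # placeholder namespace
    volumeAttributes:
      clusterID: "<ceph-cluster-id>"
      fsName: "<cephfs-filesystem-name>"
      staticVolume: "true"           # do not provision a new subvolume
      rootPath: /path/to/existing/subvolume
    volumeHandle: recovered-cephfs-pv  # any unique identifier
```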