Open Adam-D-Lewis opened 1 month ago
I think this would be useful for users until the backup-and-restore mechanism (see https://github.com/nebari-dev/governance/issues/49) is in place.
We can add further logic to the job definitions to install the AWS CLI and upload/download the tarball to/from a given S3 bucket.
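As a rough sketch of that idea, the backup container's command could be extended along these lines. The image, bucket name, paths, and credential wiring are all assumptions, and credentials would need to be provided separately (e.g. via a Secret or an IAM role):

```yaml
# Sketch only: extends the backup container's command to install the AWS CLI
# (assuming an Alpine-based image) and push the tarball to an assumed bucket.
command:
  - /bin/sh
  - -c
  - |
    apk add --no-cache aws-cli
    tar -czf /tmp/backup.tar.gz -C /data/home .
    aws s3 cp /tmp/backup.tar.gz s3://my-backup-bucket/backup.tar.gz
```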
I have also ssh'd into the nfs pod after creating the tarball, moved it to my user home directory, and then downloaded it via the JupyterHub UI, so that's an option as well, rather than uploading to object storage.
### Preliminary Checks

### Summary
Consider adding a k8s Job to back up the file system. A k8s Job is preferable to a simple pod when the file system is large and copying all the data takes a long time. If you try to tar everything up from JupyterLab, your server can time out due to inactivity before everything has been copied into the tarball. A k8s Job gets around this, e.g. something like:
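A minimal sketch of such a backup Job follows. The names, namespace, image, paths, and PVC claim name are assumptions and would need to match the actual deployment:

```yaml
# Hypothetical backup Job: tars up the shared user file system.
# The namespace, image, paths, and claimName are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: nfs-backup
  namespace: dev
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: alpine:3.19
          command:
            - /bin/sh
            - -c
            # May run for a long time on large volumes; as a Job, it is not
            # subject to JupyterLab's inactivity culling.
            - tar -czf /data/backup.tar.gz -C /data/home .
          volumeMounts:
            - name: nfs-data
              mountPath: /data
      volumes:
        - name: nfs-data
          persistentVolumeClaim:
            claimName: jupyterhub-dev-share  # assumed PVC name
```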
and for restore:
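The restore side could mirror the backup Job, extracting the tarball back onto the volume. Again, all names and paths are assumptions:

```yaml
# Hypothetical restore Job: unpacks the previously created tarball.
# The namespace, image, paths, and claimName are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: nfs-restore
  namespace: dev
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: restore
          image: alpine:3.19
          command:
            - /bin/sh
            - -c
            - tar -xzf /data/backup.tar.gz -C /data/home
          volumeMounts:
            - name: nfs-data
              mountPath: /data
      volumes:
        - name: nfs-data
          persistentVolumeClaim:
            claimName: jupyterhub-dev-share  # assumed PVC name
```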
### Steps to Resolve this Issue
-