# local-volume-provider

`local-volume-provider` is a Velero plugin that enables backup storage directly on native Kubernetes volume types instead of using Object or Blob storage APIs. It also supports volume snapshots with Restic. It is designed to serve small and air-gapped clusters that may not have direct access to Object Storage APIs like S3.

The plugin leverages the existing Velero service account credentials to mount volumes directly to the Velero/node-agent pods. This plugin is based heavily on Velero's example plugin.
## ⚠️ Cautions

Below is a listing of plugin versions and the Velero versions they are compatible with.

| Plugin Version | Velero Version  |
|----------------|-----------------|
| v0.5.x         | v1.10.x         |
| v0.4.x         | v1.6.x - v1.9.x |
To deploy the plugin image to a Velero server:

```shell
velero plugin add replicated/local-volume-provider:v0.3.3
```

This will re-deploy Velero with the plugin installed.

You can configure certain aspects of the plugin's behavior by customizing the following ConfigMap and adding it to the Velero namespace. It is based on the Velero Plugin Configuration scheme.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-provider-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    replicated.com/nfs: ObjectStore
    replicated.com/hostpath: ObjectStore
data:
  # Useful for local development
  fileserverImage: ttl.sh/<your user>/local-volume-provider:12h
  # Helps to lock down file permissions to known users/groups on the target volume
  securityContextRunAsUser: "1001"
  securityContextRunAsGroup: "1001"
  securityContextFsGroup: "1001"
  # If provided, all volumes not listed here are cleaned up from the Velero and node-agent pods
  preserveVolumes: "my-bucket,my-other-bucket"
```
The plugin can be removed with:

```shell
velero plugin remove replicated/local-volume-provider:v0.3.3
```

This does not detach/delete any volumes that were used during operation. These can be removed manually using `kubectl edit`, or by re-deploying Velero (`velero uninstall` and `velero install ...`).
Example hostPath BackupStorageLocation:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: replicated.com/hostpath
  objectStorage:
    # This corresponds to a unique volume name
    bucket: hostpath-snapshots
  config:
    # This path must exist on the host and be writable outside the group
    path: /tmp/snapshots
    # Must be provided if you're using Restic; [default mount] + [bucket] + [prefix] + "restic"
    resticRepoPrefix: /var/velero-local-volume-provider/hostpath-snapshots/restic
```
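The `resticRepoPrefix` naming rule (`[default mount] + [bucket] + [prefix] + "restic"`) can be sketched in shell. This is only an illustration of the path construction: `/var/velero-local-volume-provider` is the default mount referenced above, and `PREFIX` is empty because the example BackupStorageLocation sets no `objectStorage.prefix`.

```shell
# Sketch: resticRepoPrefix = [default mount] + [bucket] + [prefix] + "restic"
MOUNT=/var/velero-local-volume-provider   # default mount inside the Velero container
BUCKET=hostpath-snapshots                 # objectStorage.bucket from the BSL
PREFIX=""                                 # objectStorage.prefix, empty in this example
RESTIC_REPO_PREFIX="${MOUNT}/${BUCKET}${PREFIX:+/${PREFIX}}/restic"
echo "$RESTIC_REPO_PREFIX"
# -> /var/velero-local-volume-provider/hostpath-snapshots/restic
```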
Example NFS BackupStorageLocation:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  backupSyncPeriod: 2m0s
  provider: replicated.com/nfs
  objectStorage:
    # This corresponds to a unique volume name
    bucket: nfs-snapshots
  config:
    # Path and server on the share
    path: /tmp/nfs-snapshots
    server: 1.2.3.4
    # Must be provided if you're using Restic; [default mount] + [bucket] + [prefix] + "restic"
    resticRepoPrefix: /var/velero-local-volume-provider/nfs-snapshots/restic
```
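For reference, the NFS server at `server:` must export the `path:` to the cluster nodes. A hypothetical `/etc/exports` entry matching the example values above (the export options are assumptions, not prescribed by the plugin):

```
# /etc/exports on the NFS server (example; options are illustrative)
/tmp/nfs-snapshots  *(rw,sync,no_subtree_check)
```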
**NOTE**: You can't test the PVC object storage plugin on clusters without ReadWriteMany (RWX) storage providers. This means no K3s, Codeserver, or Codespaces.
To build the plugin and fileserver, run:

```shell
$ make plugin
$ make fileserver
```

To build the combined image, run:

```shell
$ make container
```

This builds an image tagged as `replicated/local-volume-provider:main`. If you want to specify a different name or version/tag, run:

```shell
$ IMAGE=your-repo/your-name VERSION=your-version-tag make container
```

To build a temporary image for testing, run:

```shell
$ make ttl.sh
```

This builds an image tagged as `ttl.sh/<unix user>/local-volume-provider:12h`.
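As a quick sketch of the resulting tag (assuming, per the pattern above, that the Makefile derives it from the current unix user):

```shell
# The ttl.sh image tag is built from the current unix username
TAG="ttl.sh/$(id -un)/local-volume-provider:12h"
echo "$TAG"
```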
Make sure the plugin will be configured to use the correct security context and development images by applying the optional ConfigMap (edit this ConfigMap first with your username).

Then deploy to your cluster using the `velero` install/plugin commands:

Velero 1.10+:

```shell
velero install --use-node-agent --uploader-type=restic --use-volume-snapshots=false --namespace velero --no-default-backup-location --no-secret
```

Velero 1.6-1.9:

```shell
velero install --use-restic --use-volume-snapshots=false --namespace velero --no-default-backup-location --no-secret
```

Then add the plugin and a backup storage location:

```shell
velero plugin add ttl.sh/<user>/local-volume-provider:12h
kubectl apply -f examples/hostPath.yaml
```

OR, with Velero v1.7.1+:

```shell
velero backup-location create default --default --bucket my-hostpath-snaps --provider replicated.com/hostpath --config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic
```
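The host path passed in `--config path=...` must already exist on the node and be writable. A minimal preparation sketch (the path matches the example above; the wide-open mode is an assumption — prefer the security context options in the plugin ConfigMap to lock permissions down):

```shell
# Create the snapshot directory used by the hostPath example and make it writable
mkdir -p /tmp/my-host-path-to-snaps
chmod 0777 /tmp/my-host-path-to-snaps
```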
Alternatively, install Velero with the plugin configured to hostPath by default:

Velero 1.10+:

```shell
velero install --use-node-agent --uploader-type=restic --use-volume-snapshots=false --namespace velero --provider replicated.com/hostpath --plugins ttl.sh/<username>/local-volume-provider:12h --bucket my-hostpath-snaps --backup-location-config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic --no-secret
```

Velero 1.6-1.9:

```shell
velero install --use-restic --use-volume-snapshots=false --namespace velero --provider replicated.com/hostpath --plugins ttl.sh/<username>/local-volume-provider:12h --bucket my-hostpath-snaps --backup-location-config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic --no-secret
```
**NOTE**: Works with Velero v1.7.1+ only.

To update a BackupStorageLocation (BSL) in an existing cluster with Velero, you must first delete the BSL and re-create it as follows (assuming you are using the BSL created by default):

```shell
velero plugin add ttl.sh/<user>/local-volume-provider:12h
velero backup-location delete default --confirm
velero backup-location create default --default --bucket my-hostpath-snaps --provider replicated.com/hostpath --config path=/tmp/my-host-path-to-snaps,resticRepoPrefix=/var/velero-local-volume-provider/my-hostpath-snaps/restic
```
## Troubleshooting

* Files are accessed as user `nobody` by default; if you see permission errors, make sure the target volume is writable by that user, or set the security context options in the plugin ConfigMap.
* Check `resticRepoPrefix` in your BackupStorageLocation config. It should point to the restic directory mountpoint in the Velero container.
* (Velero 1.10+) Run `kubectl -n velero delete backuprepositories.velero.io default-default-<ID>` to have the backup repository regenerated.
* (Velero 1.6-1.9) Run `kubectl -n velero delete resticrepositories.velero.io default-default-<ID>` to have the restic repository regenerated.