vmware-tanzu / velero

Backup and migrate Kubernetes applications and their persistent volumes
https://velero.io
Apache License 2.0

[Epic] Backup Replication #103

Open jrnt30 opened 6 years ago

jrnt30 commented 6 years ago

User Stories

As a cluster administrator, I would like to define a replication policy for my backups which will ensure that copies exist in other availability zones or regions. This will allow me to restore a cluster in case of an AZ or region failure.
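
For illustration, such a policy might be expressed as a custom resource along these lines (a purely hypothetical sketch: the BackupReplicationPolicy kind and all of its fields are invented here and are not part of any Ark/Velero API):

# Hypothetical replication policy, for discussion only
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: BackupReplicationPolicy   # invented kind, not a released API
metadata:
  name: replicate-critical-backups
spec:
  selector:
    matchLabels:
      tier: critical            # only replicate backups carrying this label
  destinations:
    - region: us-west-2         # copy object data and snapshots to these regions
    - region: eu-west-1
EOF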

Non-Goals

  1. Cross-cloud replication of backups
  2. Cross-account replication of backups

Features


Original Issue Description

There are a few different dimensions of a DR strategy that may be worth considering. For AWS deployments, the trade-offs and complexity of running Multi-AZ are fairly negligible if you stay in the same region. As such, the Single-Region/Multi-AZ deployment is extremely common.

A common additional requirement is the ability to restore in another region, with more relaxed RTO/RPO, in case an entire region goes down.

Looking over #101 brought a few things to mind, and a large wish list might include:

Some of these are certainly available to users today (copying snapshots and S3 data) but require additional external integrations to function properly. As a user, it would be more convenient if this could be done in a consolidated way.
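
As one concrete example of the external integration currently required, the S3 half of this can be done today with S3 cross-region replication (a sketch: the bucket names, role ARN, and rule ID are placeholders, and both buckets must have versioning enabled):

# Replicate the backup bucket's objects to another region
aws s3api put-bucket-replication \
  --bucket my-ark-backups-us-east-1 \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "ID": "replicate-backups",
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {"Bucket": "arn:aws:s3:::my-ark-backups-us-west-2"}
    }]
  }'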

ncdc commented 6 years ago

@jbeda some of what @jrnt30 is describing sounds similar to your idea of "backup targets"

jimzim commented 6 years ago

I was just about to post this as a feature request. :)

I just tried to do this from eastus to westus in Azure and started to think about how we could copy the snapshot and create the disk in the correct region. Could we perhaps have a restore target config? I also like the idea of creating multiple backups in other regions in case a region goes down, or a cluster and its resources get deleted.
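
For what it's worth, the manual way to move a managed-disk snapshot between Azure regions is to export it to a VHD blob and re-create it on the other side. A rough sketch, with placeholder resource names (the blob copy must complete before the final step):

# Export the snapshot as a read-only SAS URL, valid for one hour
SAS=$(az snapshot grant-access --resource-group src-rg --name my-snap \
  --duration-in-seconds 3600 --query accessSas -o tsv)

# Copy the VHD into a storage account that lives in the target region
az storage blob copy start --destination-container vhds \
  --destination-blob my-snap.vhd --account-name targetstorage \
  --source-uri "$SAS"

# Re-create the snapshot from the blob, now in the target region
az snapshot create --resource-group dst-rg --name my-snap \
  --location westus \
  --source https://targetstorage.blob.core.windows.net/vhds/my-snap.vhd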

ncdc commented 6 years ago

@jimzim this is definitely something we need to spec out and do! We've been kicking around the idea of a "backup target", which would replace the current Config kind. You could define as many targets as you wish, and when you perform a backup, you would then specify which target to use. There are some UX issues to reason through here...
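
For later readers: this idea eventually shipped in a similar form as BackupStorageLocation, so a rough sketch of per-target backups on a modern Velero looks like the following (the bucket and location names are placeholders):

# Register a second storage location in another region
velero backup-location create secondary \
  --provider aws \
  --bucket my-velero-backups-us-west-2 \
  --config region=us-west-2

# Point a backup at a specific target
velero backup create nightly --storage-location secondary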

jimzim commented 6 years ago

@ncdc Maybe we can discuss this briefly at KubeCon? I have begun to make this work on Azure, but before I go too much further it would be good to talk about what your planned architecture is.

ncdc commented 6 years ago

Sounds great!

jbeda commented 6 years ago

This is very much what I'm thinking. We need to think about backup targets, restore sources, and ways to munge stuff with a pipeline. Sounds like we are all thinking similar things.

rocketraman commented 6 years ago

On Azure, you can create a snapshot into a different resource group than the one that the persistent disk is on, which means the snapshots could be created directly into the AZURE_BACKUP_RESOURCE_GROUP instead of AZURE_RESOURCE_GROUP.

Then, cross-RG restores should be quite simple as the source of the data will always be consistent and there should be no refs to AZURE_RESOURCE_GROUP.

I'm not sure if same-Location is a limitation of this -- I've only tried this on two resource groups that are in the same Azure Location.

The command/output I used to test this:

az snapshot create --name foo --resource-group Ark_Dev-Kube --source '/subscriptions/xxx/resourceGroups/my-Dev-Kube1/providers/Microsoft.Compute/disks/devkube1-dynamic-pvc-0bbf7e11-9e82-11e7-a717-000d3af4357e'
  DiskSizeGb  Location    Name    ProvisioningState    ResourceGroup    TimeCreated
------------  ----------  ------  -------------------  ---------------  --------------------------------
           5  canadaeast  foo     Succeeded            Ark_Dev-Kube     2018-01-09T16:21:58.398476+00:00

and the foo snapshot was created in Ark_Dev-Kube even though the disk is in my-Dev-Kube1.

rosskukulinski commented 6 years ago

For reference, this is the current Ark Backup Replication design.

nrb commented 6 years ago

We've created a document of scenarios that we'll use to inform the design decisions for this project.

We also have a document where we're discussing more detailed changes to the Ark codebase from which we'll generate a list of specific work items.

Members of the heptio-ark@googlegroups.com Google group have comment access to both of these documents; anyone who would like to share their thoughts on them is welcome to.

knee-berts commented 5 years ago

Hello, any updates on this? I have quite a few customers interested in using Ark to DR managed-disk PVs.

skriss commented 5 years ago

@knee-berts no major updates here; we're actively working towards a v1.0 release and this issue will be tackled after that.

We'd definitely be interested in hearing details of your customers' needs so we can make sure that what we're planning on implementing lines up!

muvaf commented 5 years ago

When restoring a backup, we don't know the new cluster's availability zone in AWS. Since AWS does not support attaching EBS volumes to an EC2 node in a different availability zone, we're forced to create the new cluster in the same availability zone. As we'd like to get rid of this requirement, I'm looking forward to this issue being fixed.
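
In the meantime, one manual mitigation is to copy the EBS snapshot into the target region; within a region, an EBS snapshot can already be restored into any AZ. A sketch with placeholder IDs:

# Copy an EBS snapshot from us-east-1 into us-west-1
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --region us-west-1 \
  --description "velero backup replica"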

dijitali commented 5 years ago

Similar scenario for us, I think, and we are using the following manual workaround:

# Make a backup on the first cluster
kubectx my-first-cluster
velero backup create my-backup

# Switch to new cluster and restore the backup
kubectx my-second-cluster
velero restore create --from-backup my-backup

# Find the restored disk name
gcloud config configurations activate my-second-project
gcloud compute disks list

# Move the disk to the necessary zone (note: shell variable names cannot contain hyphens)
gcloud compute disks move restore-xyz --destination-zone "$MY_SECOND_CLUSTER_ZONE"

# Ensure the PV is set to use the retain reclaim policy then delete the old resources
kubectl patch pv mongo-volume-mongodb-0 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl delete statefulset mongodb
kubectl delete pvc mongo-volume-mongodb-0

# Recreate the restored stateful set with references for the new volume
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1beta1 # removed in Kubernetes 1.16+; use apps/v1 on current clusters
kind: StatefulSet
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongodb"
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: ""
      volumeName: "mongo-volume-mongodb-0"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-volume-mongodb-0
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "restore-xyz"
    fsType: ext4

EOF

jujugrrr commented 4 years ago

Hi, is there any ETA for this? It sounds like a basic feature to be able to use backups to recover from an AZ failure.

https://docs.google.com/document/d/1vGz53OVAPynrgi5sF0xSfKKr32NogQP-xgXA1PB6xMc/edit#heading=h.yuq6zfblfpvs sounded promising

skriss commented 4 years ago

@jujugrrr we have cross-AZ/region backup & restore on our roadmap. If you're interested in contributing in any way (requirements, design work, etc), please let us know!

cc @stephbman

kmadel commented 3 years ago

You don't need backup replication to get multi-zone and multi-region support on GCP/GKE with the Kubernetes VolumeSnapshot beta support in Velero v1.4. See https://github.com/vmware-tanzu/velero/issues/1624#issuecomment-671061689

fluffyf-x commented 3 years ago

Hey, I was wondering if there was any update on this? Or a breakdown of tasks required to complete this epic?

My team is running an AKS cluster with the CSI plugin. We've tried restic, as well as restoring the VHD from blob to move the snapshots into another region, which resulted in:

StatusCode: 409, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 409, RawError: {
  "error": {
    "code": "OperationNotAllowed",
    "message": "Addition of a blob based disk to VM with managed disks is not supported.",
    "target": "dataDisk"
  }
}
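
That 409 is Azure refusing to attach an unmanaged (blob-backed) disk to a VM that uses managed disks. One possible workaround is to first materialize the copied VHD as a managed disk in the target region and restore from that instead; a sketch with placeholder names:

# Create a managed disk in the target region from the copied VHD blob
az disk create --resource-group dst-rg --name restored-disk \
  --location westeurope \
  --source https://targetstorage.blob.core.windows.net/vhds/restored.vhd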

jkupidura14 commented 2 years ago

Is there any update to this? I feel like this could be easily solved by not storing the specific volume ID (snapshot ID, in the case of AWS) that you want to restore from, but instead adding a custom tag with a randomly generated ID that Velero uses as a reference when restoring. That way, no matter which region or AZ you copy the storage backup to, Velero would still be able to restore from it as long as it finds the correct ID tag. Just a thought.
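
In AWS CLI terms, the suggestion would look something like this (the tag key and ID are hypothetical; Velero does not read such a tag today):

# Tag the snapshot with a stable, region-independent ID
aws ec2 create-tags --resources snap-0123456789abcdef0 \
  --tags Key=velero.io/restore-id,Value=5f9c2a7e

# At restore time, find the snapshot by tag in whichever region it was copied to
aws ec2 describe-snapshots \
  --filters "Name=tag:velero.io/restore-id,Values=5f9c2a7e" \
  --query 'Snapshots[0].SnapshotId' --output text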

joostvdg commented 2 years ago

Any update to this?

We are looking into helping customers replicate volume backups across cloud regions (e.g., AWS us-east-1 to us-west-1) with Velero. We did some AWS-specific investigation, but it was closed because you have something else lined up. Is this ticket the place where we can track this?

johnroach commented 2 years ago

Hi, are there any updates regarding this? Is there any way someone can help with this?

jglick commented 2 years ago

My very limited understanding, from comments by @dsu-igeek at the community meeting of 2021-11-02, is that this sort of feature is on hold pending https://github.com/vmware-tanzu/velero/pull/4077 and a rewrite of the volume snapshotters to a new architecture based on Astrolabe. While it is not particularly hard to implement replication in a particular plugin without a general framework, subtle timing issues (https://github.com/vmware-tanzu/velero/issues/2888) could lead to anomalous behavior in applications that do not tolerate a simple copy of volumes.

iamsamwood commented 2 years ago

Hello, also wondering if there are any updates on this, and how I can help.

jcockroft64 commented 1 year ago

I too am wondering about an update. Was this accepted into 1.10?

antonmatsiuk commented 1 year ago

any updates on the topic?

veerendra2 commented 4 months ago

Hello, any updates on this? We are hoping to get this feature soon.

Right now we are trying to implement this by copying the Azure disk snapshots to another region with shell/Python scripts and updating the Velero output files (to make restores go smoothly, just in case).
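
For the snapshot-copy part, recent Azure CLI versions can start a cross-region copy of an incremental snapshot directly, which may remove some of the custom scripting (a sketch with placeholder names; as far as I know this only works for incremental snapshots):

# Start a cross-region copy of an incremental snapshot
az snapshot create --resource-group dst-rg --name my-snap-copy \
  --location westeurope --incremental true --copy-start true \
  --source /subscriptions/xxx/resourceGroups/src-rg/providers/Microsoft.Compute/snapshots/my-snap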

I was also wondering: has anyone tried using CSI Snapshot Data Movement to make backups available across regions?
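
For reference, recent Velero releases (1.12+) can move snapshot data into the backup storage location at backup time, rather than leaving it as a provider-local snapshot; it is requested per backup, e.g. (the location name is a placeholder):

# Back up with CSI snapshot data movement enabled
velero backup create my-backup --snapshot-move-data --storage-location secondary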

UPDATE 16.05.2024