What steps did you take and what happened:
We created a backup for the cluster and observed that the VolumeSnapshot resource was being deleted by Argo CD, which caused the backup to fail.
Argo CD uses a label to maintain the inventory of the resources it created. If any other resource carries that label, Argo CD will prune/delete it.
While taking a backup with Velero, the Velero CSI plugin copies the PVC labels to the VolumeSnapshot. Because of this, Argo CD assumes ownership of the newly created VolumeSnapshot and prunes it, since it does not match any manifest in the Argo CD application.
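To illustrate, here is a rough sketch of the label flow, assuming Argo CD's default tracking label `app.kubernetes.io/instance` (the application and resource names below are hypothetical):

```yaml
# PVC managed by Argo CD; the tracking label is how Argo CD claims
# ownership of the resource ("my-app" is a hypothetical application name).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  labels:
    app.kubernetes.io/instance: my-app   # Argo CD tracking label
---
# VolumeSnapshot created by the Velero CSI plugin during backup.
# The PVC labels are copied over, so the tracking label comes along:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: velero-data-pvc-abc12           # generated name (illustrative)
  labels:
    app.kubernetes.io/instance: my-app  # inherited label -> Argo CD prunes it
spec:
  source:
    persistentVolumeClaimName: data-pvc
```

Since the VolumeSnapshot is not part of any Argo CD application manifest but carries the tracking label, Argo CD treats it as an orphaned resource and deletes it mid-backup.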
What did you expect to happen:
The VolumeSnapshot resource should not have been deleted, and the backup should have succeeded.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
Ideally, the PVC labels should not be copied to the VolumeSnapshot, since Velero cannot know what those labels are used for. If copying is still required, there should be a flag to enable this functionality.
For linking the VolumeSnapshot to its PVC, the VolumeSnapshot already contains the PVC information in its spec.
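For reference, the link to the source PVC already lives in the VolumeSnapshot spec itself, so the copied labels are not needed for that purpose. A minimal VolumeSnapshot (resource and class names here are illustrative) looks like:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: data-pvc    # the PVC link lives here
```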
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
Kubernetes installer & version:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
:+1: for "I would like to see this bug fixed as soon as possible"
:-1: for "There are more important bugs to focus on right now"