mzimry opened this issue 1 month ago
❯ git checkout v1.13.1
HEAD is now at ea5a89f83 Merge pull request #7500 from ywk253100/240307_1.13.1
~/git/velero tags/v1.13.1
❯ go run ./cmd/velero --include-namespaces="default" dbackup
^C
~/git/velero tags/v1.13.1 11s
❯ go run ./cmd/velero create backup --include-namespaces="default" dbackup
An error occurred: backups.velero.io "dbackup" already exists
exit status 1
~/git/velero tags/v1.13.1 14s
❯ go run ./cmd/velero backup --include-namespaces="default" dbackup
~/git/velero tags/v1.13.1
❯ oc delete backup dbackup -n openshift-adp
backup.velero.io "dbackup" deleted
~/git/velero tags/v1.13.1
❯ go run ./cmd/velero create backup --include-namespaces="default" dbackup
Backup request "dbackup" submitted successfully.
Run `velero backup describe dbackup` or `velero backup logs dbackup` for more details.
~/git/velero tags/v1.13.1
❯ go run ./cmd/velero create restore --from-backup=dbackup --existing-resource-policy=update
Restore request "dbackup-20240811181933" submitted successfully.
Run `velero restore describe dbackup-20240811181933` or `velero restore logs dbackup-20240811181933` for more details.
~/git/velero tags/v1.13.1 6s
❯ kubectl get restore -n openshift-adp dbackup-20240811181933 -oyaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  creationTimestamp: "2024-08-11T22:19:34Z"
  generation: 1
  name: dbackup-20240811181933
  namespace: openshift-adp
  resourceVersion: "34565341"
  uid: b55c8723-144f-49b4-9e66-423a55ad18ca
spec:
  backupName: dbackup
  existingResourcePolicy: update # <-- works for me
  hooks: {}
  includedNamespaces:
  - '*'
  itemOperationTimeout: 0s
  uploaderConfig: {}
status: {}
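For reference, the same spec can also be applied declaratively when the installed CLI doesn't accept the flag; a minimal sketch, assuming the `openshift-adp` namespace from the transcript above (the restore name `dbackup-manual` is illustrative):

```sh
# Create the Restore object directly; the server-side controller picks it up
# just like one submitted via the CLI. The name "dbackup-manual" is hypothetical.
kubectl apply -n openshift-adp -f - <<'EOF'
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: dbackup-manual
  namespace: openshift-adp
spec:
  backupName: dbackup
  existingResourcePolicy: update
EOF
```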
The v1.13.1 CLI works for me, so this is not a bug; you must not have used the right CLI version. If that's the case, the title should scope this issue to the CLI.
@kaovilai, I'm using v1.13.0, which should support this flag.
Run `velero version`
@kaovilai:
velero version
Client:
  Version: v1.13.0
  Git commit: -
Server:
  Version: v1.8.1
Well, that explains the problem.
You need to upgrade/reinstall Velero in your cluster; updating the CLI on your local terminal isn't enough.
Once resolved, please close this issue.
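For what it's worth, a minimal upgrade sketch, assuming a plain Velero Deployment named `velero` in the `openshift-adp` namespace (if the OADP operator manages the install, upgrade through the operator instead, since it will reconcile manual image changes away):

```sh
# Refresh the CRDs from the newer CLI, then bump the server image.
velero install --crds-only --dry-run -o yaml | kubectl apply -f -
kubectl set image deployment/velero velero=velero/velero:v1.13.1 -n openshift-adp
```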
What steps did you take and what happened:
When creating a restore with `--existing-resource-policy update`, the restore doesn't update existing resources. I get this in the restore description: could not restore, Secret "xxx" already exists. Warning: the in-cluster version is different than the backed-up version.
When checking the restore details, I can see that this configuration is not set:
Existing Resource Policy: <none>
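A quick way to confirm whether the flag ever reached the object, assuming the restore lives in the `openshift-adp` namespace (substitute your restore name):

```sh
# Empty output means the policy was never set on the Restore spec.
kubectl get restore <restorename> -n openshift-adp \
  -o jsonpath='{.spec.existingResourcePolicy}'
```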
What did you expect to happen:
Update existing resources.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle and attach it to this issue; for more options, refer to `velero debug --help`.
If you are using earlier versions:
Please provide the output of the following commands (pasting long output into a GitHub gist or other pastebin is fine):
- kubectl logs deployment/velero -n velero
- velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
- velero backup logs <backupname>
- velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
- velero restore logs <restorename>
Anything else you would like to add:
Environment:
- Velero version (use `velero version`): v1.13.1
- Velero features (use `velero client config get features`): features:
- Kubernetes version (use `kubectl version`): Client Version: v1.29.2, Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3, Server Version: v1.29.7
- Kubernetes installer & version:
- Cloud provider or hardware configuration: GCP
- OS (e.g. from `/etc/os-release`): macOS

Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" at the top right of this comment to vote.