What steps did you take and what happened:
When performing a PV backup, I specified --default-volumes-to-fs-backup in the installation CLI. In this case, by Velero's design, no volume snapshots should be taken, but in fact snapshots were taken.
Velero Install Command:
I specified --default-volumes-to-fs-backup at install time rather than on the backup command:
velero install --namespace velero --image gcr.io/velero-gcp/nightly/velero:velero-beed887e --use-node-agent --default-volumes-to-fs-backup --provider azure --backup-location-config xxxxxx --bucket xxxxxx --secret-file xxxxxx --plugins velero/velero-plugin-for-microsoft-azure:main --dry-run --output json
Backup Command:
velero --namespace velero create backup backup-2cb849d5-c052-49fd-9b78-954771ae6445 --wait --include-namespaces kibishii-workload2cb849d5-c052-49fd-9b78-954771ae6445
Backup request "backup-2cb849d5-c052-49fd-9b78-954771ae6445" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
.....................
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-2cb849d5-c052-49fd-9b78-954771ae6445` and `velero backup logs backup-2cb849d5-c052-49fd-9b78-954771ae6445`.
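To double-check that the install-time default actually propagated to this backup, one way (assuming the spec.defaultVolumesToFsBackup field on the Backup CR, which the server default should populate) is to read it back with kubectl; it should print true:
# Should print "true" if the install-time default was inherited by this backup
kubectl -n velero get backup backup-2cb849d5-c052-49fd-9b78-954771ae6445 -o jsonpath='{.spec.defaultVolumesToFsBackup}'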
Verify PVB and snapshots:
As you can see in the logs below, PVBs were created, but volume snapshots were also taken, which is not expected.
get backup cmd =/velero/workspace/velero-e2e-test/velero/test/e2e/../../_output/bin/linux/amd64/velero --namespace velero backup get -o json backup-2cb849d5-c052-49fd-9b78-954771ae6445
/usr/local/bin/kubectl get podvolumebackup -n velero
/bin/grep kibishii-workload2cb849d5-c052-49fd-9b78-954771ae6445
/usr/bin/awk {print $1}
line: backup-2cb849d5-c052-49fd-9b78-954771ae6445-b6js8
line: backup-2cb849d5-c052-49fd-9b78-954771ae6445-zzjlg
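For reference, the same PodVolumeBackups can also be listed directly by label (assuming the standard velero.io/backup-name label that Velero sets on PVBs), which also shows their phase:
kubectl -n velero get podvolumebackup -n velero -l velero.io/backup-name=backup-2cb849d5-c052-49fd-9b78-954771ae6445 -o wide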
|| VERIFICATION || - Snapshots should not exist in cloud, backup backup-2cb849d5-c052-49fd-9b78-954771ae6445
map[k8s-azure-created-by:0xc000a68180 kubernetes.io-created-for-pv-name:0xc000a681b0 kubernetes.io-created-for-pvc-name:0xc000a68200 kubernetes.io-created-for-pvc-namespace:0xc000a682e0 velero.io-backup:0xc000a68300 velero.io-pv:0xc000a68320 velero.io-storage-location:0xc000a68340]
backup-2cb849d5-c052-49fd-9b78-954771ae6445
map[k8s-azure-created-by:0xc000a68590 kubernetes.io-created-for-pv-name:0xc000a685b0 kubernetes.io-created-for-pvc-name:0xc000a685d0 kubernetes.io-created-for-pvc-namespace:0xc000a685f0 velero.io-backup:0xc000a68610 velero.io-pv:0xc000a68690 velero.io-storage-location:0xc000a68720]
backup-1-5098ae1b-afb1-461c-a43b-3e290b26a7ec
map[k8s-azure-created-by:0xc000a687c0 kubernetes.io-created-for-pv-name:0xc000a687e0 kubernetes.io-created-for-pvc-name:0xc000a68860 kubernetes.io-created-for-pvc-namespace:0xc000a68880 velero.io-backup:0xc000a688a0 velero.io-pv:0xc000a688c0 velero.io-storage-location:0xc000a688e0]
backup-1-88aa2a2c-57ae-4467-af6f-5ae1e8045104
map[k8s-azure-created-by:0xc000a68a70 kubernetes.io-created-for-pv-name:0xc000a68a90 kubernetes.io-created-for-pvc-name:0xc000a68ab0 kubernetes.io-created-for-pvc-namespace:0xc000a68ad0 velero.io-backup:0xc000a68af0 velero.io-pv:0xc000a68b10 velero.io-storage-location:0xc000a68b30]
backup-1-88aa2a2c-57ae-4467-af6f-5ae1e8045104
map[k8s-azure-created-by:0xc000a68ef0 kubernetes.io-created-for-pv-name:0xc000a68f40 kubernetes.io-created-for-pvc-name:0xc000a68f90 kubernetes.io-created-for-pvc-namespace:0xc000a68fb0 velero.io-backup:0xc000a68fd0 velero.io-pv:0xc000a68ff0 velero.io-storage-location:0xc000a69010]
backup-1-5098ae1b-afb1-461c-a43b-3e290b26a7ec
map[k8s-azure-created-by:0xc000a691b0 kubernetes.io-created-for-pv-name:0xc000a691d0 kubernetes.io-created-for-pvc-name:0xc000a691f0 kubernetes.io-created-for-pvc-namespace:0xc000a69210 velero.io-backup:0xc000a69230 velero.io-pv:0xc000a69250 velero.io-storage-location:0xc000a69270]
backup-2cb849d5-c052-49fd-9b78-954771ae6445
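The snapshots above were matched by their tags; a rough manual re-check (assuming the Azure plugin tags managed-disk snapshots with a velero.io-backup key, as the tag keys above suggest) would be:
# List Azure disk snapshots tagged with this backup's name; an empty result is what was expected here
az snapshot list --query "[?tags.\"velero.io-backup\" == 'backup-2cb849d5-c052-49fd-9b78-954771ae6445'].name" -o tsv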
What did you expect to happen:
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue; for more options, refer to velero debug --help
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
Kubernetes installer & version:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
:+1: for "I would like to see this bug fixed as soon as possible"
:-1: for "There are more important bugs to focus on right now"