soumyapattnaik opened this issue 1 month ago
@kaovilai Please help to check if design #8063 could cover the operations at Finalizing phase.
@Lyndon-Li I don't think the design will be able to address this issue. Velero doesn't patch the VSC during the finalizing phase, and that is still not covered as part of design #8063.
@soumyapattnaik After reading the description I'm confused:
> In the finalizing phase today, we do a get on the volumesnapshot; if it fails due to some transient failure like a TLS handshake timeout, the velero csi plugin deletes the volumesnapshot and volumesnapshotcontent.
I don't think this matches the line you pasted: https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L104
Generally, velero removes the volumesnapshot to make sure it doesn't impact the actual snapshot in the storage provider when the resource is removed. I don't think it deliberately removes it when a GET fails.
Could you double check?
The backup phase checking code in the VolumeSnapshot BIA was introduced by this comment: https://github.com/vmware-tanzu/velero-plugin-for-csi/pull/177#discussion_r1205712853
The VolumeSnapshot deletion code is used to purge unneeded VolumeSnapshots after the PVC data backup. It's not used for error handling.
```go
if backup.Status.Phase == velerov1api.BackupPhaseFinalizing ||
	backup.Status.Phase == velerov1api.BackupPhaseFinalizingPartiallyFailed {
	p.Log.WithField("Backup", fmt.Sprintf("%s/%s", backup.Namespace, backup.Name)).
		WithField("BackupPhase", backup.Status.Phase).Debugf("Clean VolumeSnapshots.")
	util.DeleteVolumeSnapshot(vs, *vsc, backup, snapshotClient.SnapshotV1(), p.Log)
	return item, nil, "", nil, nil
}
```
> @kaovilai Please help to check if design #8063 could cover the operations at Finalizing phase.
Per https://github.com/vmware-tanzu/velero/pull/8063#discussion_r1711883054
I think we have agreement that it can cover finalizing phase.
@reasonerjt - on any transient failure at https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L98C2-L98C157, execution goes into this code: https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L100
For my customer, the get failed with a TLS handshake error, and then the logs below got printed:

```
Deleting Volumesnapshot XX/XXXX :: {"cmd":"/plugins/velero-plugin-for-csi"}
Deleted volumesnapshot with volumesnapshotContent XX/XXXX :: {"cmd":"/plugins/velero-plugin-for-csi"}
```

Also, from our ARM traces I could see that our disk snapshot gets cleaned up for this VS. For the other VSs where the get calls succeeded, execution went into line https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L104 as pointed out by you above.
I see. The error happened while waiting for VolumeSnapshot.Status.ReadyToUse to turn True.
What is the error reported?
Usually, this only happens when the client-go cannot find the VS or VSC.
The error was a transient one: the call was not reaching the API server because of a TLS handshake timeout. The VS and VSC were present for the duration of the get call.
By saying a transient error of TLS handshake timeout, do you mean the Velero pod lost connection with kube-apiserver?
It could cause Velero's client to fail to read VS and VSC.
If so, this issue is also related to the request for a retry mechanism with kube-apiserver.
Yes, correct. Building retry logic here will help: in my customer's case this failure was observed for only one VS and VSC, and for the other two VS and VSC pairs there were no issues. Can you please share the issue # where the retry mechanism with the kube-apiserver is being discussed?
https://github.com/vmware-tanzu/velero/pull/8063#discussion_r1711883054
A retry mechanism was discussed there, although it may not cover your case.
Could you give more information about why the kube-apiserver didn't work temporarily?
The setup belongs to one of our customers. I am not sure why the kube-apiserver didn't work temporarily.
For one of our customers it's due to API server SSL certificate rotation, which means TLS wouldn't work temporarily.
@soumyapattnaik After checking the code, this is a valid issue. The root cause is that the `Execute` of the CSI BIAv2 is executed again in `FinalizeBackup`, and the code in `Execute` does not differentiate the Finalize phase from the first round of backup.
What steps did you take and what happened: In the finalizing phase today, we do a get on the volumesnapshot; if it fails due to some transient failure like a TLS handshake timeout, the velero csi plugin deletes the volumesnapshot and volumesnapshotcontent.
https://github.com/vmware-tanzu/velero-plugin-for-csi/blob/e8f7af4b65f0ed6c69d340aefe2257dc25cd013f/internal/backup/volumesnapshot_action.go#L104
After the delete, the backup controller re-uploads the backup tarball:
https://github.com/vmware-tanzu/velero/blob/1ec52beca80975f74f9ed28d6f9c5f7afe67edee/pkg/backup/backup.go#L756
But it does not update the CSI-related artifacts in the object store, because of which there is a mismatch between what is in the object store and what is actually backed up. This has led to another issue in Velero: https://github.com/vmware-tanzu/velero/issues/7979
What did you expect to happen:
The expectation is that if the snapshot is cleaned up, then the corresponding entry should also be removed from the object store. Also, for transient errors we should have a retry mechanism in Velero to at least retry the get operation and not fail the operation upfront.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use
velero debug --backup <backupname> --restore <restorename>
to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.