What steps did you take and what happened:
Triggered a filesystem backup with restic and removed the velero pod during the InProgress phase. The PodVolumeBackup failed as expected with the error below. Not a blocking issue for 1.14.
Old string:
"get a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\""
New string:
": get a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\""
What did you expect to happen:
I noticed a colon has been added to the message; also, the word "get" could be replaced with "found", similar to the restore.failureReason field:
"found a restore with status \"InProgress\" during the server starting, mark it as \"Failed\""
The podvolumebackup CR is attached below.
$ oc get podvolumebackup -o yaml backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
apiVersion: velero.io/v1
kind: PodVolumeBackup
metadata:
  annotations:
    velero.io/pvc-name: postgresql
  creationTimestamp: "2024-06-04T11:57:15Z"
  generateName: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-
  generation: 3
  labels:
    velero.io/backup-name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
    velero.io/backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
    velero.io/pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
  name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a-h6sf7
  namespace: openshift-adp
  ownerReferences:
  - apiVersion: velero.io/v1
    controller: true
    kind: Backup
    name: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
    uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
  resourceVersion: "163355"
  uid: f721d4b5-1ab4-44f0-8775-5cddbc0bf685
spec:
  backupStorageLocation: ts-dpa-1
  node: oadp-82541-zqmld-worker-a-2zbhr
  pod:
    kind: Pod
    name: postgresql-1-msnw8
    namespace: test-oadp-231
    uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
  repoIdentifier: gs:oadp82541zqmld:/velero-e2e-7789e47c-225f-11ef-b036-845cf3eff33a/restic/test-oadp-231
  tags:
    backup: backup1-5294a049-2269-11ef-bd2e-845cf3eff33a
    backup-uid: e6c54302-f6ca-4a8c-9406-f83e8c9b3337
    ns: test-oadp-231
    pod: postgresql-1-msnw8
    pod-uid: fac6b2dc-f59a-4c2c-8040-204fdf65acfb
    pvc-uid: b1ad14da-223b-42b3-aee6-f906c5d8e5c8
    volume: postgresql-data
  uploaderType: restic
  volume: postgresql-data
status:
  completionTimestamp: "2024-06-04T11:57:20Z"
  message: ': get a podvolumebackup with status "InProgress" during the server starting,
    mark it as "Failed"'
  phase: Failed
  progress: {}
  startTimestamp: "2024-06-04T11:57:15Z"
The following information will help us better understand what's going on:
Anything else you would like to add:
Environment:
Velero version (use velero version): velero 1.14
Velero features (use velero client config get features):
Kubernetes version (use kubectl version): OCP 4.16
Kubernetes installer & version:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
:+1: for "I would like to see this bug fixed as soon as possible"
:-1: for "There are more important bugs to focus on right now"
This piece of code is why the message now looks different from before.
We could pass meaningful information instead of the empty string when calling the PodVolumeBackup status-update function.
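A minimal sketch of how the stray colon can appear, assuming the failure message is built by joining an error string and a fixed description (the `buildFailureMessage` helper here is hypothetical, not Velero's actual function): when the caller passes an empty error string, the separator is left dangling at the front.

```go
package main

import "fmt"

// buildFailureMessage is a hypothetical reconstruction of how the status
// message could be assembled: "<error string>: <description>".
func buildFailureMessage(errString string) string {
	return fmt.Sprintf("%s: %s", errString,
		`get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed"`)
}

func main() {
	// With an empty error string, the message starts with ": " --
	// matching the leading colon seen in the CR above.
	fmt.Println(buildFailureMessage(""))

	// Passing meaningful context instead avoids the stray colon.
	fmt.Println(buildFailureMessage("found a podvolumebackup on server restart"))
}
```

Either trimming the separator when the error string is empty, or always supplying a non-empty context string at the call site, would restore a clean message.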