vmware-tanzu / velero

Backup and migrate Kubernetes applications and their persistent volumes
https://velero.io
Apache License 2.0

Backups Not Being Deleted on Request or TTL #6928

Closed: pseymournutanix closed this issue 10 months ago

pseymournutanix commented 1 year ago

What steps did you take and what happened: A backup deletion was requested.

What did you expect to happen: The backup would be removed rather than remain stuck in Deleting.

Debug bundle attached

Anything else you would like to add:

time="2023-10-09T06:32:50Z" level=info msg="deletion request 'dre-services-beta-daily-fs-20230930130357-7rpjx' removed." backup=dre-services-beta-daily-fs-20230930130357 controller=backup-deletion deletebackuprequest=velero/dre-services-beta-daily-fs-20230930130357-lrsx6 logSource="pkg/controller/backup_deletion_controller.go:500"
time="2023-10-09T06:32:50Z" level=error msg="Unable to download tarball for backup dre-services-beta-daily-fs-20230930130357, skipping associated DeleteItemAction plugins" backup=dre-services-beta-daily-fs-20230930130357 controller=backup-deletion deletebackuprequest=velero/dre-services-beta-daily-fs-20230930130357-lrsx6 error="error copying Backup to temp file: rpc error: code = Unknown desc = error getting object dre-services-beta/backups/dre-services-beta-daily-fs-20230930130357/dre-services-beta-daily-fs-20230930130357.tar.gz: NoSuchKey: The specified key does not exist.\n\tstatus code: 404, request id: 210000+3559+93026522+4972629, host id: 210000+3559+93026522+4972629" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/restore_controller.go:774" error.function=github.com/vmware-tanzu/velero/pkg/controller.downloadToTempFile logSource="pkg/controller/backup_deletion_controller.go:269"
time="2023-10-09T06:32:50Z" level=info msg="Removing PV snapshots" backup=dre-services-beta-daily-fs-20230930130357 controller=backup-deletion deletebackuprequest=velero/dre-services-beta-daily-fs-20230930130357-lrsx6 logSource="pkg/controller/backup_deletion_controller.go:293"
time="2023-10-09T06:32:50Z" level=info msg="Removing pod volume snapshots" backup=dre-services-beta-daily-fs-20230930130357 controller=backup-deletion deletebackuprequest=velero/dre-services-beta-daily-fs-20230930130357-lrsx6 logSource="pkg/controller/backup_deletion_controller.go:318"
time="2023-10-09T06:32:50Z" level=info msg="Founding existing repo" backupLocation=default logSource="pkg/repository/ensurer.go:85" repositoryType=kopia volumeNamespace=depsdb
time="2023-10-09T06:32:52Z" level=warning msg="active indexes [xs0_2efea80920463c895fa90bd121105d00-s779091e80502d4c8-c1 xs1_de2ccbfe718a2945e4c374f7b92de12c-s8bd40c898fc40152-c1 xs2_b95b4938b89bd10d7cc647da11ee661f-sdbcbee9ec3e3956d-c1 xs3_24b8597af0200c1e5b58abcfd7be9252-scb86b8dc77fa6d53-c1 xs4_e376fa4bce775cc4838537acbf9e2bd2-sfc9e83b9a6194d36-c1 xs5_1ec5e20307cc57e015c73dd5e4611d0b-s98fd747af26ad087-c1 xs6_2060d9294b4f2b84adec833831797bf9-s56a25c6f3e4024b7-c1 xn7_07ea36f814a51d4eb8422b7e6e929f43-s5cb433dc3b0da0e3121-c1 xn7_0839862359eb42a7b5420c7333d36e5b-s960fceba5b004fbd121-c1 xn7_0cf773c3d5fab29240288312a6d19c53-se2b7ef5bf2331ddf121-c1 xn7_0fc66b7646d3cb654fcde1150f5e2dbc-s5c838959d945e011121-c1 xn7_1345a35c8cc24e422d7d39967ce8b2b3-sa7bf147a9b61aaa6121-c1 xn7_15159143c974da15df512e45204f1af5-s6a4a64528a9b7094121-c1 xn7_23ba1624d9313fd56d1daac07288d1ae-s89fc20bd89b54972121-c1 xn7_2c9ad2d43c9378a9b904e9edd441b3b5-sfd492350541aab7a121-c1 xn7_32f4348d3db2f4fb38b42be7b47e3612-sf23532d3e758494b121-c1 xn7_332cf0048869618d3dd7b643e4c6b4ec-s4048eee862123186121-c1 xn7_35e7908adf15a2c78a1324672976eb44-sa31390de0b5f3744121-c1 xn7_3c1ee8c85d65bcd3a30ce17cfc56d08d-sae502d084b9c0786121-c1 xn7_3e1fa5371283c2ccec4bf667c21f4c3f-s84ac8ab41cb34e2e121-c1 xn7_4641a4401813b12fa7e288193896e33c-s51d98f40805d2ec8121-c1 xn7_49559032edb5d392c64b63b9fdd89739-s836f35fa8cd56abf121-c1 xn7_49783f9ebbac655872099d586475cc99-sf25de4627baf9b48121-c1 xn7_4a6b97315c288bf64032f322019e7718-s5815a301a50aa65c121-c1 xn7_535518d5a224818d5955fd54a748c715-s80ef2c8168294850121-c1 xn7_5364d8f5f7f64a9e1ff13d36f15da839-s0bd39be509c1615d121-c1 xn7_5adf67ebce787020f52f0af44f636ffe-sed3be410014d2430121-c1 xn7_5d125cadfda8abe21ce250773ec485ec-s090461d1647133dc121-c1 xn7_63c9f4fe05685cf38778ad844f4d912c-sf9b3abd5e815762e121-c1 xn7_65249b127a35a4d4c9261167a21ee971-se2eff919f451355b121-c1 xn7_65740d240776b884b9f399e45dead980-s3bc9c3755f7f78c4121-c1 xn7_6c25fd8d6ef52b00a832950a6c486020-s4cd5d910bf7445ee121-c1 xn7_6c554247acdf0673e5e3437454fb2a97-s5203b4e849764896121-c1 xn7_6e22ea92ee81e49473c6db6adcf3243d-se6bcd8bee6d20f02121-c1 xn7_6e2de2fc9ac9988ce42b81704df8c1a5-s05c27d27d2d31cfe121-c1 xn7_7e32598c88d998ad6bdd0a677c142292-sb4d0d8d8f5ebff34121-c1 xn7_8001f16fb2fa844cbee48ce8df0597ae-s158d9eb83c81b71e121-c1 xn7_84aae8cd118a2df5a30a1ec411fba91b-s96c07ca6bdb0603d121-c1 xn7_8abc9cb976de677d69f4db046ba85d07-sa38fb7da20b22dc5121-c1 xn7_8c5e390a84c80dade1772e816e89caa3-s290fc0172085f92f121-c1 xn7_9332efc0d36ae9aa415ae0b342e86c77-s722b2203d833c9b3121-c1 xn7_9435d46c244dd2f09994947a8fb7e21b-s3cfa67ea3f48f3e3121-c1 xn7_9c528199650b15d72df7544fa4351897-s66ad2b74e957de5f121-c1 xn7_af914859d2c51c5091e3c4b55d60c22e-sa3855dc62dbb188f121-c1 xn7_b71215d1d1cc3b6da7866a07ec6f9cd4-sd4728898ac4d0196121-c1 xn7_c856dbdc3abd4791f3733ed83c516cac-s4f688a11c3bd2801121-c1 xn7_cb39d19817c50ab973d5705b576eadf6-s17312c66a59a451f121-c1 xn7_cf6f024d031aae6e35cf5e6ff326debe-sa8241b6556338254121-c1 xn7_d5488229f3a7e0a5891e234acecd707a-scb522254da72d7f8121-c1 xn7_dbe091ed5a450108b443d938da7312ee-s84b6683d1e2e0613121-c1 xn7_e157c8ee3e229d5039ea4de6105e48db-s89debd8508a72960121-c1 xn7_e158d7ab417daae87c2f4b73765ec477-s020f01580794647a121-c1 xn7_e2b758a57aa1e8a6fc05726d7bdf40e7-s22187138af630e24121-c1 xn7_ea64ffedc1d475d4415949cf791d0383-s39b0228ab8572761121-c1 xn7_f44368c6aaa8efc665a6a427efa69e4d-s0f76f5e244421a38121-c1 xn7_f6fb14a1c70e2291be792ce5c024f90b-s0cd67fd6285a69da121-c1 xn7_f9541d2920a89e0618ef81a61d905ebc-s03ad704fd66e383f121-c1 
xn7_fc35108f2e3b1d25b8a3b531212d824d-s8643151ed310a4eb121-c1 xn7_ff5c59f6c98d953315d239dc45cd5eec-s02c202021222e27b121-c1 xn8_0916b082aee15faec070c919e5062b3d-s3893245394335a9d121-c1 xn8_154aa2bf50f89eadc7f755a34e7b089c-se21ecccf31b6ebf0121-c1 xn8_25f649fb1ed4387990b5cbdd3badffb5-sbfb71613a87d9aa1121-c1 xn8_2da504c5c7c86ee8bcfbd10e6d1029c0-s8d5ecad22bb2f432121-c1 xn8_368db89d8c7c5b7808f8cd7a28f628ff-s840a24cdccc70f7e121-c1 xn8_37178c84d502c29171aaee3ad993d8dd-s888f883d4c860096121-c1 xn8_47f2dc97858b940de34a3ee819afb658-sefa5237d14b8445d121-c1 xn8_4cca9a040e25125399b604dca901d62f-s48393345217d1da6121-c1 xn8_6347b1d7808b1c29be9f09f6c8d21988-sfc4518962ca9c050121-c1 xn8_6b29a6b8a62a2145d20f64a832f44afb-s792b2dfb7c82a51c121-c1 xn8_7db0b3dd00dd52635521b5687f8a380d-s5264f3a35ebd0618121-c1 xn8_84030e5127ec336316891c3ce3cf745f-sddc8c5dba1ed2182121-c1 xn8_8ed7994518670cbc035f2ea76d76ee29-s24c939e78e9b48b4121-c1 xn8_9bb55a996c04071540f636c450dee8d2-sfa47b7380812741f121-c1 xn8_a11c860b46e8c6991698da67b0564ea8-s982e483ae483809e121-c1 xn8_a510c708fbe3e5beaf2e32ac0992adba-sbd75c842b9709e38121-c1 xn8_a7f1fea760c2c513f2e49597562a8147-s73427edbf4491efa121-c1 xn8_b9fbabb1880fe99b9c8d2a94512cd550-s5e53e9ddfa809ef8121-c1 xn8_bc8e71562de8b637a18da7a626b20cab-s6e54d3d1f73c29ba121-c1 xn8_cb46917ceafbf68116c5aaf77bbfc528-s1077b0c56503439f121-c1 xn8_d120d33b1255412cbd478f274aa1904e-s95f585dc01bc77fe121-c1 xn8_d242e111aa49338468daa8e7784c68c8-s029e6bf52d07564c121-c1] deletion watermark 2023-10-07 16:14:18 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
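
The NoSuchKey error above indicates the backup tarball is already missing from object storage. A rough way to confirm that from the storage side, assuming the aws CLI is pointed at the S3-compatible endpoint (the bucket, endpoint, and the exact bucket/prefix split are placeholders that depend on the BackupStorageLocation config):

    aws s3api head-object \
      --endpoint-url <s3-endpoint> \
      --bucket <bucket> \
      --key dre-services-beta/backups/dre-services-beta-daily-fs-20230930130357/dre-services-beta-daily-fs-20230930130357.tar.gz

A 404/NotFound response here matches the NoSuchKey error in the deletion log.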

Environment:

bundle-2023-10-09-07-43-06.tar.gz


pseymournutanix commented 1 year ago

For the record, this is using Nutanix Objects, which is S3-compatible, as the store.

allenxu404 commented 1 year ago

Taking a quick look at the provided bundle, I noticed that the log content was not properly captured in the log file. Can you help regenerate a new one that includes the full logs for further troubleshooting of this issue?

Based on the current bundle, I found the following error messages recorded in the describe file:

Errors:
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition

Upon further examination of these errors, it appears that Velero kept waiting for the backup repository to become ready until it hit the timeout.
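
To check whether the repository ever becomes ready, looking at the BackupRepository CRs directly should help; a minimal sketch, assuming Velero is installed in the velero namespace:

    kubectl -n velero get backuprepositories.velero.io
    kubectl -n velero describe backuprepositories.velero.io <repo-name>

If the Phase stays NotReady, the status Message usually shows why the repository connection is failing.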

pseymournutanix commented 1 year ago

Thank you. The backups are currently sitting like this across multiple clusters and two different object stores.

NAME                                          STATUS            ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
dre-services-beta-daily-fs-20231012020026     PartiallyFailed   5        1          2023-10-12 03:00:26 +0100 BST   2d        default            <none>
dre-services-beta-daily-fs-20231011020025     PartiallyFailed   5        1          2023-10-11 03:00:25 +0100 BST   1d        default            <none>
dre-services-beta-daily-fs-20231010020023     PartiallyFailed   5        1          2023-10-10 03:00:24 +0100 BST   11h       default            <none>
dre-services-beta-daily-fs-20231009020022     Deleting          4        1          2023-10-09 03:00:23 +0100 BST   12h ago   default            <none>
dre-services-beta-daily-fs-20231008020021     Deleting          4        1          2023-10-08 03:00:21 +0100 BST   1d ago    default            <none>
dre-services-beta-daily-fs-20231007020020     Deleting          4        1          2023-10-07 03:00:20 +0100 BST   2d ago    default            <none>
dre-services-beta-daily-fs-20231006020019     Deleting          4        1          2023-10-06 03:00:19 +0100 BST   3d ago    default            <none>
dre-services-beta-daily-fs-20231005020018     Deleting          4        1          2023-10-05 03:00:18 +0100 BST   4d ago    default            <none>
dre-services-beta-daily-fs-20231004020017     Deleting          4        1          2023-10-04 03:00:17 +0100 BST   5d ago    default            <none>
dre-services-beta-daily-fs-20231003020016     Deleting          4        1          2023-10-03 03:00:16 +0100 BST   6d ago    default            <none>
dre-services-beta-daily-fs-20231002020011     Deleting          4        1          2023-10-02 03:00:11 +0100 BST   5d ago    default            <none>
dre-services-beta-daily-fs-20231001020010     Deleting          4        1          2023-10-01 03:00:10 +0100 BST   6d ago    default            <none>
dre-services-beta-daily-fs-20230930130357     Deleting          4        1          2023-09-30 14:03:58 +0100 BST   7d ago    default            <none>

bundle-2023-10-12-15-22-46.tar.gz

pseymournutanix commented 1 year ago

This is a bundle from another cluster, writing to a different object store, with the same versions of everything; the only difference is that this one is production. bundle-2023-10-12-15-25-23.tar.gz

MrOffline77 commented 1 year ago

Same issue here with different S3 providers and Velero 1.12.0. I have multiple backups stuck in the Deleting phase. Downgrading to Velero 1.11.1 works for me as a workaround until this is fixed.

Deletion Attempts (1 failed):
  2023-10-16 09:10:23 +0200 CEST: Processed
  Errors:
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition

Michaelpalacce commented 1 year ago

Have the same issue:

NAME                     STATUS      ERRORS   WARNINGS   CREATED                          EXPIRES   STORAGE LOCATION   SELECTOR
general-20231025030040   Completed   0        6          2023-10-25 06:00:40 +0300 EEST   3d        backblaze          <none>
general-20231024030039   Completed   0        6          2023-10-24 06:00:39 +0300 EEST   2d        backblaze          <none>
general-20231023030038   Completed   0        6          2023-10-23 06:00:38 +0300 EEST   1d        backblaze          <none>
general-20231022030037   Completed   0        6          2023-10-22 06:00:37 +0300 EEST   20h       backblaze          <none>
general-20231021030036   Deleting    0        6          2023-10-21 06:00:36 +0300 EEST   3h ago    backblaze          <none>
general-20231020222327   Deleting    0        6          2023-10-21 01:23:27 +0300 EEST   8h ago    backblaze          <none>

And the describe for the backup:

Name:         general-20231021030036
Namespace:    velero
Labels:       kustomize.toolkit.fluxcd.io/name=configs
              kustomize.toolkit.fluxcd.io/namespace=flux-system
              velero.io/schedule-name=general
              velero.io/storage-location=backblaze
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.28.2+k3s1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=28

Phase:  Deleting

Errors:    0
Warnings:  6

Namespaces:
  Included:  simplesecrets, vaultwarden, postgresql, nodered, changedetection, mealie, media, freshrss
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Or label selector:  <none>

Storage Location:  backblaze

Velero-Native Snapshot PVs:  true
Snapshot Move Data:          auto
Data Mover:                  velero

TTL:  96h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2023-10-21 06:00:36 +0300 EEST
Completed:  2023-10-21 06:07:03 +0300 EEST

Expiration:  2023-10-25 06:00:36 +0300 EEST

Total items to be backed up:  475
Items backed up:              475

Velero-Native Snapshots: <none included>

Deletion Attempts (1 failed):
  2023-10-25 09:22:36 +0300 EEST: Processed
  Errors:
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition
    failed to wait BackupRepository: timed out waiting for the condition

restic Backups (specify --details for more information):
  Completed:  19

And the backup locations:

NAME        PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                   ACCESS MODE   DEFAULT
backblaze   aws        sgenov          Available   2023-10-25 09:56:50 +0300 EEST   ReadWrite     true

lukasertl commented 11 months ago

I'm having similar problems.

Seems to affect only backups that had errors - those without errors can be deleted (or expire) without problems.

khanhngobackend commented 11 months ago

FYI, I also have the same problems with v1.12.1

vikingtoby commented 11 months ago

I also have the same issue with v1.12.2. It also affects backups without any errors.

aarononeal commented 11 months ago

Note that this bug affects backups with any errors or warnings; backups with 0 for both do get deleted. With daily backups, the stuck ones are really starting to accumulate.
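
As a rough one-liner to see how many backups have piled up in the Deleting phase (assuming the velero CLI is configured against the affected cluster):

    velero backup get | awk '$2 == "Deleting"' | wc -l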

lupa95 commented 11 months ago

Same issue here on 1.12.2 and Kubernetes 1.25.

AndrzejOlender commented 10 months ago

Same issue on 1.12.2, Kubernetes 1.28

Lyndon-Li commented 10 months ago

When deleting a PVB (pod volume backup) backup, Velero connects to all the snapshots in the repository and deletes them one by one. There may be many snapshots in one backup, and the timeout (1 minute) set for PVB snapshot deletion may not be enough when there are that many.
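
To gauge how many pod volume snapshots a single backup holds (and therefore how likely its deletion is to exceed that timeout), counting its PodVolumeBackup CRs gives a rough idea; a sketch assuming the usual velero.io/backup-name label and the velero namespace:

    kubectl -n velero get podvolumebackups.velero.io \
      -l velero.io/backup-name=<backup-name> --no-headers | wc -l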

pseymournutanix commented 10 months ago

Thanks. Is there an ETA for when this will be available in a 1.13 RC or any other release? I would be willing to try it.

Lyndon-Li commented 10 months ago

Please verify it with 1.12.3, which will have an RC around the end of this week.

pseymournutanix commented 10 months ago

Will do. Thanks.

pseymournutanix commented 10 months ago

I have installed the RC on a couple of clusters. Backups already in the "Deleting" state didn't get cleaned up (I'll have to see how things go from here), but should they have been? I'm also seeing a lot of these messages in the logs:

time="2024-01-09T07:27:28Z" level=warning msg="active indexes [xr0_7_f58694685a31661731b5e54144a8c1e0-sf94a07eadb3d5ae4-c1 xr8_15_ae98b6e17639561695d7db9a89c15258-s59d3383102526521-c1 xr16_23_7fe4cd6663b93498045ebe56479db669-sddad97e6f129beeb-c1 xr24_31_da9f6c2355c075bdc751c6e9088f425e-s0828790222560474-c1 xr32_39_9303f6d188abb3665ee9e101ac052a00-s4f15164766763dba-c1 xr40_47_010c1f9b3bfec23f3365a71c650e22f5-s2bba78d09f2ba438-c1 xr48_55_4114a1ca986c92896f6024b7a8445b1f-sb0215d2bb2bd8e4d-c1 xr56_63_c6ac451bc9078ef0c22c891a1ff0671d-sf5f7210c87c5a00d-c1 xr64_71_aa5afb973f9e12c6e89b0ec00a11f777-sedb4ac9460e2c94e-c1 xr72_79_0f92610ce4b4b7ef1ed8714d51fa1a11-sb9154b88928f31f4-c1 xr80_87_dda5447c941c1fb24de2ac9c268a8193-sd2049078435926b5-c1 xs88_ad43e52eaaa54363e71e8d3f0e3a236e-s23fe1cae15abe46b-c1 xs89_d3d08dc2e6e1a10c626d438eaa1a6b42-s73461b537e170957-c1 xs90_e9d50b75b5d8a0bf89c721137bb5d1a9-s452b207a74df985e-c1 xn91_00c77141661b5c1eaffa51eaf658a677-s1562902140532ee0124-c1 xn91_088953383db6537eccac7c92b6cdc4f6-sb8be07cf297d5e73124-c1 xn91_095e73a976e919093d2e79c1cdf3f94a-s9d2dec1147c9854e124-c1 xn91_0d5983e2791767ffe19023c2e93b4df5-sdb62b1e88e1b3271124-c1 xn91_0e8bb021551097e1cc32f8808963d36c-s09bccd1dd1af59e2124-c1 xn91_0f26b6e69c57df0546c28dc2e3800b94-s006271c54393509e124-c1 xn91_1abb81122b06361162872063ccddbdfb-s316740c5deff7411124-c1 xn91_1b2e42654aa3d81b2c10fca1592c3483-s238d621d56c63994124-c1 xn91_2939a034e51e13f8b6206fb7ce84fa49-sa2a331232355b96e124-c1 xn91_3f4866bad56548befbae8777f22fd465-s7f02f2e9041859ca124-c1 xn91_40ea20e07def9a80b6eefb355c3a5fe8-s205088979b02b9f9124-c1 xn91_4d7b729a21bb56ec075db811d2904525-s8981e9e80d473879124-c1 xn91_5798249a8691bbf32bdb46b722e117fa-sb26ecb214e102113124-c1 xn91_5d9eec63719d1fd74d6fe397cb1614b9-s8c720ece069aa613124-c1 xn91_5ddd418c2cf2bbb5724890532c7d1d2b-s29d07bc67d8582e5124-c1 xn91_616a28319b1cb7ff57be6e2888d708a4-sa99ad7e299bbf998124-c1 xn91_6302cf8915d815e2fcd28fffb069af94-s2edb878eec182cf7124-c1 xn91_64033fd03dda34f7f98abb7123e6114f-se40b81a7172421ff124-c1 xn91_64b1a832c954b699e3e8ec72a0007c66-s934b7a4838b2b560124-c1 xn91_6559456c530a680b7498be06e5970363-s66dd6902fd0c76ae124-c1 xn91_8d4e00b45e01a2258f7e629210b04653-s47589f38a6957d99124-c1 xn91_901a0f394fa3c2d59a51410f109e08b8-s400ab40896d09a08124-c1 xn91_979ef7671bdd931da410faa9172a45f3-sb6f52ce2bf4a1c25124-c1 xn91_9bca7d8256222742e03a025184805b78-s10c29e84ef895315124-c1 xn91_9c897bc865c122a565d5770af27fe4c4-sd0d4a03bfbe48631124-c1 xn91_9f4885e3ab6a016878a27eefb53d5234-s61c9d13c43a838d1124-c1 xn91_a5e4cddcb298292113ecc4696d39a950-s4aaf9a8bf1ecc1bc124-c1 xn91_a9f779a59256ad1442b553616b33978f-s57fa8881a227a77a124-c1 xn91_b6916f61dbb9c9e87f284a4da1033266-s6697eca8995c81db124-c1 xn91_b96dbfb17b57f0dccc1090c654cd9c36-sf83c180aecc26b4f124-c1 xn91_bb0366d94b6bb0b2087c5658744c909a-s5a6be684d42aed52124-c1 xn91_c1790529462f24201a0f623b786acb1d-s2ff7a4409e44396c124-c1 xn91_c3b60febf927adecb5acf3832528d9c9-s4226dab371782a6a124-c1 xn91_c99fa43468685a276f257930442ef4f5-s007c65fe29a53021124-c1 xn91_cdab1fede3f1e720e8b0e88b9db5680e-s6ce7479c3b0a399b124-c1 xn91_ce6f0d524ba85ebe5b90fa460aaee742-sd01da58a9a89aa40124-c1 xn91_d093b8ac9d36407147a641f470ea11c7-sfe462d3d3135aa9c124-c1 xn91_d61eef47aa15f2113aa5715cdd6d5d7d-sf23fd6f5e0f83b79124-c1 xn91_d8f4020366311b1e70ee03a6bf0b9235-s2d2ffff5f47e23b6124-c1 xn91_dd0cbba8bab6573beb2c6b9f0290de01-s19f7182c8ca9376b124-c1 xn91_e117e94bcce6a9f4dfb26c70f5621d06-sa1b20d2cbf41343f124-c1 xn91_e8cad52a77a640d9b19f6afca7f8b4c9-s6b092814f8f3941e124-c1 
xn91_ef0488a86a64bdd4be1038f9a8509fbc-se4cbd109121f5c3b124-c1 xn91_f951c0c3b2bd4f340981139c738f4718-s02f8a124453b079e124-c1 xn91_fa62317f4694c65632234a6d4f347ce4-s527d0af1f07213a3124-c1 xn91_fd5af76af5b4a8757f3b0667fbf09829-s1166e9acd19fb132124-c1 xn92_01f6f5b5b13c91e90ea3d7a6434d7169-s3051a7947aa26b99124-c1 xn92_05919e27f1aacab0d4c7e962d705c6ff-s0c3dced090c15324124-c1 xn92_076962cc7cc162b006d59daf9845fc00-sb80181e3ecad026c124-c1 xn92_09144b140e94bd911531e33b20ff46c5-sb98542235c23d05f124-c1 xn92_0a4d0685ef9b5539adb9d267ca176f78-s6587788d50e859e1124-c1 xn92_0aaa6e56a5f331d5ae49f6fe36630fae-s2f00fb620c7c76f3124-c1 xn92_0da2a5511d947ca2a58857eb7f0e23eb-sced50090a04dddfe124-c1 xn92_10a3ff88b4ffc83fce09c51fc3818199-scd3e63952c8e716a124-c1 xn92_131a26d4264fbd4a967f481a101178ff-s57f87740c8ac787a124-c1 xn92_1392aa70890caf9292be692e7c731f98-s2203951c95e4721f124-c1 xn92_14f11e641f92a07694a9be58b158403b-s9499ac688f236cb7124-c1 xn92_16499cc0774608f79dfb3dfff12c9a35-s7277fdd14d282808124-c1 xn92_16d0e1b57f8a156fc29372d08d960c33-s33fd1ecca1a31db1124-c1 xn92_1852c937828b6eed4d89dc0739382578-s287a077cb32a55e2124-c1 xn92_22fd1d2495a38f4a9478e70bb02be0b4-s1e0b76a111ca0758124-c1 xn92_2321976171693fd888ba97222165ce7e-s07f15e851fda4c3f124-c1 xn92_28eb1aecde79ab524c56703bf546f7b4-s5de9bf1120be4d6f124-c1 xn92_2cae00b7528d75707826ce0935d4e166-sbf98ebc4b52de286124-c1 xn92_2fdb906cb422b842bc8b73dbdfa462d5-se8b968fb3636c384124-c1 xn92_33d7be494e8fcc82f3733e49bd078467-s318b6d28464f46cb124-c1 xn92_3c30d2f829543dbb3b0911ea3e41244f-sae9509085d8208b7124-c1 xn92_3d4f9894edb4f1c9bf494e761f228a61-s76671735a2038a0a124-c1 xn92_3db22461fc9109258cc5a692c31e52e8-s80197b52e2438065124-c1 xn92_4b7df11436a8cfa491422f4effcab924-s0800519dfad19cd0124-c1 xn92_4ec5d0589fa724022a81531c2921d71f-s776d08d785d78b55124-c1 xn92_531afb392aba110d84e1e227b353b926-sb59f3f4aa16e84a3124-c1 xn92_5323eea411fdf1f2e5623034360d5319-sef16a98b632b9346124-c1 xn92_57f1e206cc363bd04c416e75f14de2c3-sd51b625cd54e8d60124-c1 xn92_5e8cb6f3b5c86b00051ac606cf6252a7-sf74ed435d701cd46124-c1 xn92_5ee188e7f4989e9e76a327958e090b17-se2f6119baf5c5781124-c1 xn92_621a98623ae8d30340efbfd8c2da0e96-s7a0b3378a0ecbdf5124-c1 xn92_635fbf9ec23908edba2484fda70a699f-sbbd87fff23555d90124-c1 xn92_67f0afe39746445b1c4faa3598bb9cf9-s89cff27221dfa696124-c1 xn92_694ca465dc4e9a4d80f3349cddde9b55-s74d02317d563ca1e124-c1 xn92_6d74c850faa980f622d07145107f6e96-s3e8cba70a361e95e124-c1 xn92_760939199c4fcab6e3b5915748148d2d-s2fe63ed8d2de6fc7124-c1 xn92_8012f97b0b0ecad5e6288a10c5b5287f-s5ebfa232471a8610124-c1 xn92_812badfb43566296d6e5a412567bdefa-sdd0dd9d3688fecb1124-c1 xn92_82e706eba6394f5ca6ba501a357566b9-sdb0f5f06418bcc74124-c1 xn92_83a6eab3b0e0d87250aa6279194e2510-se6bc509f84fcdb88124-c1 xn92_85def16fbc6808a42c87ad535d8d9c72-sbdaf8cbc4095aa9c124-c1 xn92_8b4a9b872a08b2baf5a196b3f3417517-sfac726bb7bf84dd9124-c1 xn92_8dc71b5858a417921fae6f2e43ef3281-s4fce9355e2003802124-c1 xn92_90d5e37646e93f889ae93f1e20639de8-sfd1702e20c7e078d124-c1 xn92_9108a7ee2d4331f19a88adef7627d3b3-s4ebca2cc824bf636124-c1 xn92_92b43227970b1d0b4a1cb5f019ed6e27-s49729682f937d645124-c1 xn92_9f24bb9befc70565b92c298da165b80f-s7bf2ab2c65969cce124-c1 xn92_a6fcc89acc77c93c265a05a2b764cbef-s3a9c7ef3b150cc27124-c1 xn92_a88d34c53f29f259beaf6cd9593d90f6-s0c9689ad98e03fec124-c1 xn92_b1053d80e2bec89ab0fc804d07975080-sc849c3b5f78bd307124-c1 xn92_b4b21506ef78020a980f6eb1caeec516-s9891973c5fabc6cc124-c1 xn92_b75334bd2e932385b9569293be652d40-s657da6033422aa24124-c1 xn92_b9d46124b8688de2d7a1e549144b5ebd-sfebb592335979d92124-c1 
xn92_ba937eb7610538aa2a63094413a14a82-s277e372ee78fbacd124-c1 xn92_bc129c3701ab89cdbc0310e6b29fccbe-s6e97455c8020bdf9124-c1 xn92_bdefbc1d035ca2b33956417bba974a02-s0d1ec48c4db49949124-c1 xn92_c45c8680f2b4a2c67116041733158726-sc7c00bec54435be3124-c1 xn92_d0d1cc98abaf3025460427c74aeae2ae-s0e620d1475a0ca1a124-c1 xn92_d630a6b706fc881524cd6ac3710e5d6c-sc74046dbbd3f5445124-c1 xn92_d7ab8bff47dc54f2199fe19e1bb8837a-s7c75284e91979cf3124-c1 xn92_d9545100a4bf4bae86320a9e4bb46cc9-s00954214c9878c02124-c1 xn92_da52e8fff7a37116e199abaea444af37-sb8095b626415c586124-c1 xn92_e636c4193db816fc0533389522f6e2e6-s584543d35567da88124-c1 xn92_e8af9e9772ebe34c591be7be22dcb5fd-s552cd05c4875d289124-c1 xn92_e8e6eb2e7eabbb28754dab3babc936f6-s5d60e85d32e705ac124-c1 xn92_eb924e396e8bf55f2a5681d2262846d0-s48a8be89ac5a6fc9124-c1 xn92_f57bd3e4ca7b3ebfcb358d012c57927a-se4b59447b0288c83124-c1 xn92_f7089a5d55129ad567bfa2fe606c1b7f-sc7faac670846df31124-c1 xn92_f8b13ec2b0f914fcc600af7321c888c5-s4947abf5b2d5b4a4124-c1 xn92_fee39eae12f2d68d0195fbbf76e22422-s69ba44b35dcfd52a124-c1 xn92_ffe4f70bd6ce0e470fa7506c7d20150d-s4c541330ce76b827124-c1] deletion watermark 2024-01-07 16:39:49 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
time="2024-01-09T07:27:29Z" level=warning msg="active indexes [xr0_7_245e1b4348912db61b5abbd46315a3d5-se5f712cf4a5fc6e4-c1 xr8_15_d5fe1d7199c37daa55ad119183a8eba5-s92998149d2dafae5-c1 xs16_804bf65ed82aa6e8422cff2a162050fe-s23dfefa75412e185-c1 xs17_d81ec2467f67997fcff1474a80b0df37-s7d359246f51c95f6-c1 xs18_6c61202ab708649e78269ec604855f2b-s0084a5edcb4a7466-c1 xn19_013652c5ec2bc6e1b7175da639b523db-sdef68bc98d5fecb1124-c1 xn19_0239ed810cdf74f6a44d2a515df39e7e-s485dd74e41a9890d124-c1 xn19_04e723c93e3dfaa099d77e001f6dfa03-s8bdd69d8fae5b853124-c1 xn19_0510a3fd41040c3c005feab545ae9130-s1846ae54e2da92cb124-c1 xn19_0fbd96e9424cbd8b9ac09d4cf7fae1df-sbe0a8ea7577e96d7124-c1 xn19_1e0ad81d16c580698232dd586dfa2fb6-s787a00cdc6c58e40124-c1 xn19_38a11adebfef880149b22a28b90c0023-sa26e1126ef712813124-c1 xn19_38a77965c4752223a403de738a75022a-s47453901de009bb7124-c1 xn19_766c8ffc189938f3bc2bb5a9d50b0c0c-s5c752e1e120abdc0124-c1 xn19_93ad42ff0ff8a90d10c502f8400410c8-sbdec7d94d369499d124-c1 xn19_a5654a72a146e1f9d7d65f7ed07a1e3b-sce7d9cf63b62ad16124-c1 xn19_add063e1ed96b51776553ad10abdb763-se0e8b4cfbf74d99a124-c1 xn19_bbb6bf9865e7e817c745dde6ff698dde-sf378fea1c0027450124-c1 xn19_c1f9bc57e998f60695706782ede529dc-sa16921e77d2906b5124-c1 xn19_da318a55992604c1282f1290eac93ef9-s41a0f695ab4e0fec124-c1 xn19_ee4a5fc00de8ea4fc3b23fff5058022e-s1e066700abf0df2e124-c1 xn19_f298baee25788acaa85688984e5a92cd-s962daf92f0a94cac124-c1 xn19_f3348d2bac955ab92db8aa5e4d8fa837-sf0bd11b9f2bf35f5124-c1 xn19_f340a6bee2b14ca6a94b2e9bb3286ef7-sf4fb3d489c329c6a124-c1 xn19_f91e3450fd77a2468c9c3bfa3e04b2a9-s22945fba6984fb2a124-c1 xn19_fc530363e86f1664973d2b1fd4d281e8-seda517a02202dedc124-c1 xn20_2c0f3f978d966222961eef3e04f830fe-s9d9f3792628bbcff124-c1 xn20_39aad30e896770f1e12a079a142c7a5c-s3793385ca67a5e59124-c1 xn20_5739da59be3095d9de7391ad8c601024-s7c05ec172f1ba724124-c1 xn20_609d03ab74bf6cd2ddf81dfce55a0498-sf40c4e417fdccc9d124-c1 xn20_6b9190f18e16098206225047b75ec43e-s0729f17b0cd2faee124-c1 xn20_835361219963c4943b406b1dd719faaa-s7a96a4d8e4d6e831124-c1 xn20_96d178486e833e8f54b795976f83e7c1-sf0c221afa5617585124-c1 xn20_c0abd769c01096bfe5a144396bac1c49-sa29871f7149cedc4124-c1 xn20_c931c1ae251005bf39598727e59ccca2-s1a0c4c697704aafd124-c1 xn20_f2c6a3c5063aec1c7fbd91e91b5c572a-sd32844c31196f1de124-c1] deletion watermark 2024-01-07 14:39:48 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error

Lyndon-Li commented 10 months ago

@pseymournutanix These logs are expected.

The fix won't work on backups already in the "Deleting" state; it only works for newly created backup deletion requests.

rnarenpujari commented 10 months ago

Thanks for the fix! I think it's worth noting that you can still hit this error if your BackupRepository CRs are not in the Ready phase, but I believe that is not a common scenario.

cooperspencer commented 9 months ago

@pseymournutanix These logs are expected.

The fix won't work on backups already in the "Deleting" state; it only works for newly created backup deletion requests.

How can I get rid of backups that are still in the "Deleting" state?

lukasertl commented 9 months ago

@pseymournutanix These logs are expected. The fix won't work on backups already in the "Deleting" state; it only works for newly created backup deletion requests.

How can I get rid of backups that are still in the "Deleting" state?


oc delete backup <backup-name> -n <namespace>

Lyndon-Li commented 9 months ago

To delete backups that are stuck in the "Deleting" state, you can still run another velero backup delete; a new DeleteBackupRequest will be created to retry the backup deletion.
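
For example, with <backup-name> as a placeholder, re-issuing the delete and then watching the resulting request looks roughly like this:

    velero backup delete <backup-name> --confirm
    kubectl -n velero get deletebackuprequests.velero.io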