pseymournutanix closed this issue 10 months ago.
For the record, this is using Nutanix Objects as the store, which is S3-compatible.
Taking a quick look at the provided bundle, I noticed that the log content was not properly captured in the log file. Can you help regenerate a new one that includes the full log so we can troubleshoot this issue further?
Based on the current bundle, I found the following error messages recorded in the describe file:
Errors:
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
Upon further examination of these errors, it appears that Velero kept waiting for the backup repository to become ready before hitting the timeout.
Thank you. It is currently sitting like this across multiple clusters and 2 different object stores.
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
dre-services-beta-daily-fs-20231012020026 PartiallyFailed 5 1 2023-10-12 03:00:26 +0100 BST 2d default <none>
dre-services-beta-daily-fs-20231011020025 PartiallyFailed 5 1 2023-10-11 03:00:25 +0100 BST 1d default <none>
dre-services-beta-daily-fs-20231010020023 PartiallyFailed 5 1 2023-10-10 03:00:24 +0100 BST 11h default <none>
dre-services-beta-daily-fs-20231009020022 Deleting 4 1 2023-10-09 03:00:23 +0100 BST 12h ago default <none>
dre-services-beta-daily-fs-20231008020021 Deleting 4 1 2023-10-08 03:00:21 +0100 BST 1d ago default <none>
dre-services-beta-daily-fs-20231007020020 Deleting 4 1 2023-10-07 03:00:20 +0100 BST 2d ago default <none>
dre-services-beta-daily-fs-20231006020019 Deleting 4 1 2023-10-06 03:00:19 +0100 BST 3d ago default <none>
dre-services-beta-daily-fs-20231005020018 Deleting 4 1 2023-10-05 03:00:18 +0100 BST 4d ago default <none>
dre-services-beta-daily-fs-20231004020017 Deleting 4 1 2023-10-04 03:00:17 +0100 BST 5d ago default <none>
dre-services-beta-daily-fs-20231003020016 Deleting 4 1 2023-10-03 03:00:16 +0100 BST 6d ago default <none>
dre-services-beta-daily-fs-20231002020011 Deleting 4 1 2023-10-02 03:00:11 +0100 BST 5d ago default <none>
dre-services-beta-daily-fs-20231001020010 Deleting 4 1 2023-10-01 03:00:10 +0100 BST 6d ago default <none>
dre-services-beta-daily-fs-20230930130357 Deleting 4 1 2023-09-30 14:03:58 +0100 BST 7d ago default <none>
This is a bundle from another cluster using a different object store, with the same versions of everything; the only difference is that this one is production. bundle-2023-10-12-15-25-23.tar.gz
Same issue here with different S3 providers and Velero 1.12.0. I have multiple backups stuck in the Deleting phase. Downgrading to Velero 1.11.1 worked for me until this is fixed.
Deletion Attempts (1 failed):
2023-10-16 09:10:23 +0200 CEST: Processed
Errors:
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
Have the same issue:
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
general-20231025030040 Completed 0 6 2023-10-25 06:00:40 +0300 EEST 3d backblaze <none>
general-20231024030039 Completed 0 6 2023-10-24 06:00:39 +0300 EEST 2d backblaze <none>
general-20231023030038 Completed 0 6 2023-10-23 06:00:38 +0300 EEST 1d backblaze <none>
general-20231022030037 Completed 0 6 2023-10-22 06:00:37 +0300 EEST 20h backblaze <none>
general-20231021030036 Deleting 0 6 2023-10-21 06:00:36 +0300 EEST 3h ago backblaze <none>
general-20231020222327 Deleting 0 6 2023-10-21 01:23:27 +0300 EEST 8h ago backblaze <none>
And the describe for the backup:
Name: general-20231021030036
Namespace: velero
Labels: kustomize.toolkit.fluxcd.io/name=configs
kustomize.toolkit.fluxcd.io/namespace=flux-system
velero.io/schedule-name=general
velero.io/storage-location=backblaze
Annotations: velero.io/resource-timeout=10m0s
velero.io/source-cluster-k8s-gitversion=v1.28.2+k3s1
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=28
Phase: Deleting
Errors: 0
Warnings: 6
Namespaces:
Included: simplesecrets, vaultwarden, postgresql, nodered, changedetection, mealie, media, freshrss
Excluded: <none>
Resources:
Included: *
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Or label selector: <none>
Storage Location: backblaze
Velero-Native Snapshot PVs: true
Snapshot Move Data: auto
Data Mover: velero
TTL: 96h0m0s
CSISnapshotTimeout: 10m0s
ItemOperationTimeout: 4h0m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2023-10-21 06:00:36 +0300 EEST
Completed: 2023-10-21 06:07:03 +0300 EEST
Expiration: 2023-10-25 06:00:36 +0300 EEST
Total items to be backed up: 475
Items backed up: 475
Velero-Native Snapshots: <none included>
Deletion Attempts (1 failed):
2023-10-25 09:22:36 +0300 EEST: Processed
Errors:
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
failed to wait BackupRepository: timed out waiting for the condition
restic Backups (specify --details for more information):
Completed: 19
And the backup locations:
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
backblaze aws sgenov Available 2023-10-25 09:56:50 +0300 EEST ReadWrite true
I'm having similar problems.
Seems to affect only backups that had errors - those without errors can be deleted (or expire) without problems.
FYI, I also have the same problems with v1.12.1
I also have the same issue with v1.12.2. It also affects backups without any errors.
Note: this bug affects backups that have any errors or warnings. Backups with 0 for both do get deleted. Given daily backups, these are really starting to accumulate.
Same issue here on 1.12.2 and Kubernetes 1.25.
Same issue on 1.12.2, Kubernetes 1.28
When deleting a PVB backup, Velero collects all of that backup's snapshots in the repo and deletes them one by one. There may be many snapshots in one backup. There is a timeout (1 minute) set for PVB snapshot deletion, and this timeout may not be enough when there are many snapshots.
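If you want a rough idea of whether you are hitting this, you can count how many pod volume backups (i.e. per-volume snapshots) a single backup produced. This is only a sketch; it assumes the default velero namespace and the standard velero.io/backup-name label on PodVolumeBackup resources:

kubectl -n velero get podvolumebackups -l velero.io/backup-name=<backup-name> --no-headers | wc -l

A large count makes it much more likely that the snapshot deletion work cannot finish within the 1-minute timeout described above.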
Thanks. Is there an ETA for when this will be available in a 1.13 RC or any other release? I would be willing to try it.
Please verify it with 1.12.3, which will have an RC around the end of this week.
Will do. Thanks.
I have installed the RC on a couple of clusters. Backups already in the "Deleting" state didn't get cleaned up (I'll have to see going forward), but should they have been? I'm also seeing a lot of these messages in the logs:
time="2024-01-09T07:27:28Z" level=warning msg="active indexes [xr0_7_f58694685a31661731b5e54144a8c1e0-sf94a07eadb3d5ae4-c1 xr8_15_ae98b6e17639561695d7db9a89c15258-s59d3383102526521-c1 xr16_23_7fe4cd6663b93498045ebe56479db669-sddad97e6f129beeb-c1 xr24_31_da9f6c2355c075bdc751c6e9088f425e-s0828790222560474-c1 xr32_39_9303f6d188abb3665ee9e101ac052a00-s4f15164766763dba-c1 xr40_47_010c1f9b3bfec23f3365a71c650e22f5-s2bba78d09f2ba438-c1 xr48_55_4114a1ca986c92896f6024b7a8445b1f-sb0215d2bb2bd8e4d-c1 xr56_63_c6ac451bc9078ef0c22c891a1ff0671d-sf5f7210c87c5a00d-c1 xr64_71_aa5afb973f9e12c6e89b0ec00a11f777-sedb4ac9460e2c94e-c1 xr72_79_0f92610ce4b4b7ef1ed8714d51fa1a11-sb9154b88928f31f4-c1 xr80_87_dda5447c941c1fb24de2ac9c268a8193-sd2049078435926b5-c1 xs88_ad43e52eaaa54363e71e8d3f0e3a236e-s23fe1cae15abe46b-c1 xs89_d3d08dc2e6e1a10c626d438eaa1a6b42-s73461b537e170957-c1 xs90_e9d50b75b5d8a0bf89c721137bb5d1a9-s452b207a74df985e-c1 xn91_00c77141661b5c1eaffa51eaf658a677-s1562902140532ee0124-c1 xn91_088953383db6537eccac7c92b6cdc4f6-sb8be07cf297d5e73124-c1 xn91_095e73a976e919093d2e79c1cdf3f94a-s9d2dec1147c9854e124-c1 xn91_0d5983e2791767ffe19023c2e93b4df5-sdb62b1e88e1b3271124-c1 xn91_0e8bb021551097e1cc32f8808963d36c-s09bccd1dd1af59e2124-c1 xn91_0f26b6e69c57df0546c28dc2e3800b94-s006271c54393509e124-c1 xn91_1abb81122b06361162872063ccddbdfb-s316740c5deff7411124-c1 xn91_1b2e42654aa3d81b2c10fca1592c3483-s238d621d56c63994124-c1 xn91_2939a034e51e13f8b6206fb7ce84fa49-sa2a331232355b96e124-c1 xn91_3f4866bad56548befbae8777f22fd465-s7f02f2e9041859ca124-c1 xn91_40ea20e07def9a80b6eefb355c3a5fe8-s205088979b02b9f9124-c1 xn91_4d7b729a21bb56ec075db811d2904525-s8981e9e80d473879124-c1 xn91_5798249a8691bbf32bdb46b722e117fa-sb26ecb214e102113124-c1 xn91_5d9eec63719d1fd74d6fe397cb1614b9-s8c720ece069aa613124-c1 xn91_5ddd418c2cf2bbb5724890532c7d1d2b-s29d07bc67d8582e5124-c1 xn91_616a28319b1cb7ff57be6e2888d708a4-sa99ad7e299bbf998124-c1 xn91_6302cf8915d815e2fcd28fffb069af94-s2edb878eec182cf7124-c1 xn91_64033fd03dda34f7f98abb7123e6114f-se40b81a7172421ff124-c1 xn91_64b1a832c954b699e3e8ec72a0007c66-s934b7a4838b2b560124-c1 xn91_6559456c530a680b7498be06e5970363-s66dd6902fd0c76ae124-c1 xn91_8d4e00b45e01a2258f7e629210b04653-s47589f38a6957d99124-c1 xn91_901a0f394fa3c2d59a51410f109e08b8-s400ab40896d09a08124-c1 xn91_979ef7671bdd931da410faa9172a45f3-sb6f52ce2bf4a1c25124-c1 xn91_9bca7d8256222742e03a025184805b78-s10c29e84ef895315124-c1 xn91_9c897bc865c122a565d5770af27fe4c4-sd0d4a03bfbe48631124-c1 xn91_9f4885e3ab6a016878a27eefb53d5234-s61c9d13c43a838d1124-c1 xn91_a5e4cddcb298292113ecc4696d39a950-s4aaf9a8bf1ecc1bc124-c1 xn91_a9f779a59256ad1442b553616b33978f-s57fa8881a227a77a124-c1 xn91_b6916f61dbb9c9e87f284a4da1033266-s6697eca8995c81db124-c1 xn91_b96dbfb17b57f0dccc1090c654cd9c36-sf83c180aecc26b4f124-c1 xn91_bb0366d94b6bb0b2087c5658744c909a-s5a6be684d42aed52124-c1 xn91_c1790529462f24201a0f623b786acb1d-s2ff7a4409e44396c124-c1 xn91_c3b60febf927adecb5acf3832528d9c9-s4226dab371782a6a124-c1 xn91_c99fa43468685a276f257930442ef4f5-s007c65fe29a53021124-c1 xn91_cdab1fede3f1e720e8b0e88b9db5680e-s6ce7479c3b0a399b124-c1 xn91_ce6f0d524ba85ebe5b90fa460aaee742-sd01da58a9a89aa40124-c1 xn91_d093b8ac9d36407147a641f470ea11c7-sfe462d3d3135aa9c124-c1 xn91_d61eef47aa15f2113aa5715cdd6d5d7d-sf23fd6f5e0f83b79124-c1 xn91_d8f4020366311b1e70ee03a6bf0b9235-s2d2ffff5f47e23b6124-c1 xn91_dd0cbba8bab6573beb2c6b9f0290de01-s19f7182c8ca9376b124-c1 xn91_e117e94bcce6a9f4dfb26c70f5621d06-sa1b20d2cbf41343f124-c1 xn91_e8cad52a77a640d9b19f6afca7f8b4c9-s6b092814f8f3941e124-c1 
xn91_ef0488a86a64bdd4be1038f9a8509fbc-se4cbd109121f5c3b124-c1 xn91_f951c0c3b2bd4f340981139c738f4718-s02f8a124453b079e124-c1 xn91_fa62317f4694c65632234a6d4f347ce4-s527d0af1f07213a3124-c1 xn91_fd5af76af5b4a8757f3b0667fbf09829-s1166e9acd19fb132124-c1 xn92_01f6f5b5b13c91e90ea3d7a6434d7169-s3051a7947aa26b99124-c1 xn92_05919e27f1aacab0d4c7e962d705c6ff-s0c3dced090c15324124-c1 xn92_076962cc7cc162b006d59daf9845fc00-sb80181e3ecad026c124-c1 xn92_09144b140e94bd911531e33b20ff46c5-sb98542235c23d05f124-c1 xn92_0a4d0685ef9b5539adb9d267ca176f78-s6587788d50e859e1124-c1 xn92_0aaa6e56a5f331d5ae49f6fe36630fae-s2f00fb620c7c76f3124-c1 xn92_0da2a5511d947ca2a58857eb7f0e23eb-sced50090a04dddfe124-c1 xn92_10a3ff88b4ffc83fce09c51fc3818199-scd3e63952c8e716a124-c1 xn92_131a26d4264fbd4a967f481a101178ff-s57f87740c8ac787a124-c1 xn92_1392aa70890caf9292be692e7c731f98-s2203951c95e4721f124-c1 xn92_14f11e641f92a07694a9be58b158403b-s9499ac688f236cb7124-c1 xn92_16499cc0774608f79dfb3dfff12c9a35-s7277fdd14d282808124-c1 xn92_16d0e1b57f8a156fc29372d08d960c33-s33fd1ecca1a31db1124-c1 xn92_1852c937828b6eed4d89dc0739382578-s287a077cb32a55e2124-c1 xn92_22fd1d2495a38f4a9478e70bb02be0b4-s1e0b76a111ca0758124-c1 xn92_2321976171693fd888ba97222165ce7e-s07f15e851fda4c3f124-c1 xn92_28eb1aecde79ab524c56703bf546f7b4-s5de9bf1120be4d6f124-c1 xn92_2cae00b7528d75707826ce0935d4e166-sbf98ebc4b52de286124-c1 xn92_2fdb906cb422b842bc8b73dbdfa462d5-se8b968fb3636c384124-c1 xn92_33d7be494e8fcc82f3733e49bd078467-s318b6d28464f46cb124-c1 xn92_3c30d2f829543dbb3b0911ea3e41244f-sae9509085d8208b7124-c1 xn92_3d4f9894edb4f1c9bf494e761f228a61-s76671735a2038a0a124-c1 xn92_3db22461fc9109258cc5a692c31e52e8-s80197b52e2438065124-c1 xn92_4b7df11436a8cfa491422f4effcab924-s0800519dfad19cd0124-c1 xn92_4ec5d0589fa724022a81531c2921d71f-s776d08d785d78b55124-c1 xn92_531afb392aba110d84e1e227b353b926-sb59f3f4aa16e84a3124-c1 xn92_5323eea411fdf1f2e5623034360d5319-sef16a98b632b9346124-c1 xn92_57f1e206cc363bd04c416e75f14de2c3-sd51b625cd54e8d60124-c1 xn92_5e8cb6f3b5c86b00051ac606cf6252a7-sf74ed435d701cd46124-c1 xn92_5ee188e7f4989e9e76a327958e090b17-se2f6119baf5c5781124-c1 xn92_621a98623ae8d30340efbfd8c2da0e96-s7a0b3378a0ecbdf5124-c1 xn92_635fbf9ec23908edba2484fda70a699f-sbbd87fff23555d90124-c1 xn92_67f0afe39746445b1c4faa3598bb9cf9-s89cff27221dfa696124-c1 xn92_694ca465dc4e9a4d80f3349cddde9b55-s74d02317d563ca1e124-c1 xn92_6d74c850faa980f622d07145107f6e96-s3e8cba70a361e95e124-c1 xn92_760939199c4fcab6e3b5915748148d2d-s2fe63ed8d2de6fc7124-c1 xn92_8012f97b0b0ecad5e6288a10c5b5287f-s5ebfa232471a8610124-c1 xn92_812badfb43566296d6e5a412567bdefa-sdd0dd9d3688fecb1124-c1 xn92_82e706eba6394f5ca6ba501a357566b9-sdb0f5f06418bcc74124-c1 xn92_83a6eab3b0e0d87250aa6279194e2510-se6bc509f84fcdb88124-c1 xn92_85def16fbc6808a42c87ad535d8d9c72-sbdaf8cbc4095aa9c124-c1 xn92_8b4a9b872a08b2baf5a196b3f3417517-sfac726bb7bf84dd9124-c1 xn92_8dc71b5858a417921fae6f2e43ef3281-s4fce9355e2003802124-c1 xn92_90d5e37646e93f889ae93f1e20639de8-sfd1702e20c7e078d124-c1 xn92_9108a7ee2d4331f19a88adef7627d3b3-s4ebca2cc824bf636124-c1 xn92_92b43227970b1d0b4a1cb5f019ed6e27-s49729682f937d645124-c1 xn92_9f24bb9befc70565b92c298da165b80f-s7bf2ab2c65969cce124-c1 xn92_a6fcc89acc77c93c265a05a2b764cbef-s3a9c7ef3b150cc27124-c1 xn92_a88d34c53f29f259beaf6cd9593d90f6-s0c9689ad98e03fec124-c1 xn92_b1053d80e2bec89ab0fc804d07975080-sc849c3b5f78bd307124-c1 xn92_b4b21506ef78020a980f6eb1caeec516-s9891973c5fabc6cc124-c1 xn92_b75334bd2e932385b9569293be652d40-s657da6033422aa24124-c1 xn92_b9d46124b8688de2d7a1e549144b5ebd-sfebb592335979d92124-c1 
xn92_ba937eb7610538aa2a63094413a14a82-s277e372ee78fbacd124-c1 xn92_bc129c3701ab89cdbc0310e6b29fccbe-s6e97455c8020bdf9124-c1 xn92_bdefbc1d035ca2b33956417bba974a02-s0d1ec48c4db49949124-c1 xn92_c45c8680f2b4a2c67116041733158726-sc7c00bec54435be3124-c1 xn92_d0d1cc98abaf3025460427c74aeae2ae-s0e620d1475a0ca1a124-c1 xn92_d630a6b706fc881524cd6ac3710e5d6c-sc74046dbbd3f5445124-c1 xn92_d7ab8bff47dc54f2199fe19e1bb8837a-s7c75284e91979cf3124-c1 xn92_d9545100a4bf4bae86320a9e4bb46cc9-s00954214c9878c02124-c1 xn92_da52e8fff7a37116e199abaea444af37-sb8095b626415c586124-c1 xn92_e636c4193db816fc0533389522f6e2e6-s584543d35567da88124-c1 xn92_e8af9e9772ebe34c591be7be22dcb5fd-s552cd05c4875d289124-c1 xn92_e8e6eb2e7eabbb28754dab3babc936f6-s5d60e85d32e705ac124-c1 xn92_eb924e396e8bf55f2a5681d2262846d0-s48a8be89ac5a6fc9124-c1 xn92_f57bd3e4ca7b3ebfcb358d012c57927a-se4b59447b0288c83124-c1 xn92_f7089a5d55129ad567bfa2fe606c1b7f-sc7faac670846df31124-c1 xn92_f8b13ec2b0f914fcc600af7321c888c5-s4947abf5b2d5b4a4124-c1 xn92_fee39eae12f2d68d0195fbbf76e22422-s69ba44b35dcfd52a124-c1 xn92_ffe4f70bd6ce0e470fa7506c7d20150d-s4c541330ce76b827124-c1] deletion watermark 2024-01-07 16:39:49 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
time="2024-01-09T07:27:29Z" level=warning msg="active indexes [xr0_7_245e1b4348912db61b5abbd46315a3d5-se5f712cf4a5fc6e4-c1 xr8_15_d5fe1d7199c37daa55ad119183a8eba5-s92998149d2dafae5-c1 xs16_804bf65ed82aa6e8422cff2a162050fe-s23dfefa75412e185-c1 xs17_d81ec2467f67997fcff1474a80b0df37-s7d359246f51c95f6-c1 xs18_6c61202ab708649e78269ec604855f2b-s0084a5edcb4a7466-c1 xn19_013652c5ec2bc6e1b7175da639b523db-sdef68bc98d5fecb1124-c1 xn19_0239ed810cdf74f6a44d2a515df39e7e-s485dd74e41a9890d124-c1 xn19_04e723c93e3dfaa099d77e001f6dfa03-s8bdd69d8fae5b853124-c1 xn19_0510a3fd41040c3c005feab545ae9130-s1846ae54e2da92cb124-c1 xn19_0fbd96e9424cbd8b9ac09d4cf7fae1df-sbe0a8ea7577e96d7124-c1 xn19_1e0ad81d16c580698232dd586dfa2fb6-s787a00cdc6c58e40124-c1 xn19_38a11adebfef880149b22a28b90c0023-sa26e1126ef712813124-c1 xn19_38a77965c4752223a403de738a75022a-s47453901de009bb7124-c1 xn19_766c8ffc189938f3bc2bb5a9d50b0c0c-s5c752e1e120abdc0124-c1 xn19_93ad42ff0ff8a90d10c502f8400410c8-sbdec7d94d369499d124-c1 xn19_a5654a72a146e1f9d7d65f7ed07a1e3b-sce7d9cf63b62ad16124-c1 xn19_add063e1ed96b51776553ad10abdb763-se0e8b4cfbf74d99a124-c1 xn19_bbb6bf9865e7e817c745dde6ff698dde-sf378fea1c0027450124-c1 xn19_c1f9bc57e998f60695706782ede529dc-sa16921e77d2906b5124-c1 xn19_da318a55992604c1282f1290eac93ef9-s41a0f695ab4e0fec124-c1 xn19_ee4a5fc00de8ea4fc3b23fff5058022e-s1e066700abf0df2e124-c1 xn19_f298baee25788acaa85688984e5a92cd-s962daf92f0a94cac124-c1 xn19_f3348d2bac955ab92db8aa5e4d8fa837-sf0bd11b9f2bf35f5124-c1 xn19_f340a6bee2b14ca6a94b2e9bb3286ef7-sf4fb3d489c329c6a124-c1 xn19_f91e3450fd77a2468c9c3bfa3e04b2a9-s22945fba6984fb2a124-c1 xn19_fc530363e86f1664973d2b1fd4d281e8-seda517a02202dedc124-c1 xn20_2c0f3f978d966222961eef3e04f830fe-s9d9f3792628bbcff124-c1 xn20_39aad30e896770f1e12a079a142c7a5c-s3793385ca67a5e59124-c1 xn20_5739da59be3095d9de7391ad8c601024-s7c05ec172f1ba724124-c1 xn20_609d03ab74bf6cd2ddf81dfce55a0498-sf40c4e417fdccc9d124-c1 xn20_6b9190f18e16098206225047b75ec43e-s0729f17b0cd2faee124-c1 xn20_835361219963c4943b406b1dd719faaa-s7a96a4d8e4d6e831124-c1 xn20_96d178486e833e8f54b795976f83e7c1-sf0c221afa5617585124-c1 xn20_c0abd769c01096bfe5a144396bac1c49-sa29871f7149cedc4124-c1 xn20_c931c1ae251005bf39598727e59ccca2-s1a0c4c697704aafd124-c1 xn20_f2c6a3c5063aec1c7fbd91e91b5c572a-sd32844c31196f1de124-c1] deletion watermark 2024-01-07 14:39:48 +0000 UTC" logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" sublevel=error
@pseymournutanix These logs are expected.
The fix won't work on backups already in the "Deleting" state; it only works for newly created backup deletion requests.
Thanks for the fix! I think it's worth noting that you can still hit this error if your BackupRepository CRs are not in the Ready phase but that is not a common scenario I believe.
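A quick way to check that, assuming Velero is installed in the velero namespace (the phase is read from the BackupRepository status, so treat this as a sketch):

kubectl -n velero get backuprepositories -o custom-columns=NAME:.metadata.name,PHASE:.status.phase

Any repository that is not in the Ready phase can still produce the "failed to wait BackupRepository" error on the next deletion attempt.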
How can I get rid of backups that are still in the "Deleting" state?
oc delete backup <backup-name> -n <namespace>
To delete the backups stuck in the "Deleting" state, you can still run another velero backup delete; a new DeleteBackupRequest will be created to retry the backup deletion.
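For reference, the retry looks roughly like this (assuming the default velero namespace):

velero backup delete <backup-name> --confirm
kubectl -n velero get deletebackuprequests

The first command creates a new DeleteBackupRequest; the second lets you watch it being processed.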
What steps did you take and what happened: Backup delete requested.
What did you expect to happen: The backup was removed and not stuck in "Deleting".
Debug bundle attached.
Anything else you would like to add:
Environment:
- Velero version (velero version): 1.12.0
- Velero features (velero client config get features):
- Kubernetes version (kubectl version): 1.22
- OS (e.g. from /etc/os-release):
bundle-2023-10-09-07-43-06.tar.gz