Mertonidas opened 4 years ago
Hi, I have the same setup and the same problem. The `Backup` CRs are deleted from the cluster, but the files generated in the storage are not, so eventually they will hit the space limits or grow indefinitely.
Any plan to let the BackupSchedule delete the old backup files generated in the storage?
I am having the same problem. Any updates on this?
I think this is not properly explained in the docs. You must set `cleanPolicy` to `Delete`. After setting this value, the files are deleted.
To make a backup of the TiDB cluster in Kubernetes, you need to create a [Backup CR](https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-cr#backup-cr-fields) object to describe the backup or create **a BackupSchedule CR object** to describe a scheduled backup.
https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-cr#backupschedule-cr-fields
The `backupSchedule` configuration consists of two parts. One is `backupTemplate`, and the other is the unique configuration of `backupSchedule`.
`backupTemplate` specifies the configuration related to the cluster and remote storage, which is the same as the `spec` configuration of **[the Backup CR](https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-cr#backup-cr-fields)**.
https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-cr#backup-cr-fields
`.spec.cleanPolicy`: the cleaning policy for the backup data when the backup CR is deleted. You can choose one of the following three clean policies:

- `Retain`: under any circumstances, retain the backup data when deleting the backup CR.
- `Delete`: under any circumstances, delete the backup data when deleting the backup CR.
- `OnFailure`: if the backup fails, delete the backup data when deleting the backup CR.

If this field is not configured, or if you configure a value other than the three policies above, the backup data is retained.

Note that in v1.1.2 and earlier versions, this field does not exist and the backup data is deleted along with the CR by default. For v1.1.3 or later versions, if you want to keep this earlier behavior, set this field to `Delete`.
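To make this concrete, here is a minimal sketch of a `BackupSchedule` with the clean policy set; the cluster, namespace, secret, and bucket names are placeholders, not values from this thread:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo-backup-schedule   # placeholder name
  namespace: tidb-admin
spec:
  # Part 1: the unique backupSchedule configuration.
  schedule: "0 */12 * * *"     # cron syntax: at minute 0, every 12 hours
  maxReservedTime: "72h"       # prune scheduled backups older than 72 hours
  # Part 2: backupTemplate, the same fields as the spec of a Backup CR.
  backupTemplate:
    cleanPolicy: Delete        # spelled exactly as in the docs; any other value falls back to retaining the data
    backupType: full
    br:
      cluster: demo-cluster           # placeholder TidbCluster name
      clusterNamespace: demo-ns       # placeholder namespace
    s3:
      provider: aws            # placeholder provider
      secretName: s3-secret    # placeholder Secret holding the storage credentials
      bucket: demo-bucket      # placeholder bucket
```

With this in place, when the scheduler prunes an outdated `Backup` CR, the files behind it should be removed from the remote storage as well.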
@sergiomcalzada I am not sure if I have done it correctly. I added `cleanPolicy` and set it to `delete` under the BackupSchedule template. Here is the `kubectl describe` output for the BackupSchedule:
```
$ k describe bks/tidb-backup-schedule-s3 -n tidb-admin
Name:         tidb-backup-schedule-s3
Namespace:    tidb-admin
Labels:       <none>
Annotations:  <none>
API Version:  pingcap.com/v1alpha1
Kind:         BackupSchedule
Metadata:
  Creation Timestamp:  2022-04-29T10:11:33Z
  Generation:          7
  Managed Fields:
    API Version:  pingcap.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:backupTemplate:
          .:
          f:backupType:
          f:br:
            .:
            f:cluster:
            f:clusterNamespace:
          f:cleanPolicy:
          f:s3:
            .:
            f:bucket:
            f:endpoint:
            f:provider:
            f:secretName:
        f:maxReservedTime:
        f:schedule:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-04-29T10:11:33Z
    API Version:  pingcap.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:backupTemplate:
          f:resources:
      f:status:
        .:
        f:lastBackup:
        f:lastBackupTime:
    Manager:         tidb-controller-manager
    Operation:       Update
    Time:            2022-04-29T12:00:26Z
  Resource Version:  54575871
  UID:               8680029d-a8a5-487a-b2bb...
Spec:
  Backup Template:
    Backup Type:  full
    Br:
      Cluster:            advanced-tidb
      Cluster Namespace:  tidb-cluster
    Clean Policy:         delete
    Resources:
    s3:
      Bucket:       tidb...
      Endpoint:     https://nyc3.digitaloceanspaces.com
      Provider:     s3
      Secret Name:  s3-secret
  Max Reserved Time:  24h
  Schedule:           0 */12 * * *
Status:
  Last Backup:       tidb-backup-schedule-s3-2022-05-02t00-00-00
  Last Backup Time:  2022-05-02T00:00:00Z
Events:              <none>
```
My spec file looks like this:
```yaml
---
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: tidb-backup-schedule-s3
  namespace: tidb-admin
spec:
  #maxBackups: 5
  #pause: true
  maxReservedTime: "24h"
  schedule: "0 */12 * * *"
  backupTemplate:
    cleanPolicy: delete
    backupType: full
    br:
      cluster: advanced-tidb
      clusterNamespace: tidb-cluster
    s3:
      provider: s3
      secretName: s3-secret
      endpoint: https://nyc3.digitaloceanspaces.com
      bucket: tidb...
```
My `maxReservedTime` is `"24h"`, but I still see backups that are two days old, with files in them.
This is a tidb-operator issue, so you should open it in the https://github.com/pingcap/tidb-operator repo. And by the way, you can check whether the problem is gone with tidb-operator release v1.5.1.
We can close this now. We don't use TiDB any longer.
General Question
Hi,
I have installed a TiDB cluster using tidb-operator and the TidbCluster CRD. I'm using the BackupSchedule CRD with BR to take backups of the cluster. I use MinIO as remote S3-compatible storage, and the backups work well.
https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-to-aws-s3-using-br
The documentation describes the `.spec.maxBackups` parameter in the BackupSchedule CRD, which determines the maximum number of backup items to retain.
I set `maxBackups` to 5. Does that mean that only the last 5 backups will be stored in MinIO and older ones will be deleted?
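For reference, a minimal sketch of the fields I am asking about, with placeholder names throughout (this is not my actual manifest):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo-schedule-minio    # placeholder name
spec:
  maxBackups: 5                # my understanding: keep at most the 5 most recent backups
  schedule: "*/30 * * * *"     # cron: every 30 minutes, frequent on purpose for testing
  backupTemplate:
    backupType: full
    br:
      cluster: demo-cluster           # placeholder TidbCluster name
      clusterNamespace: demo-ns       # placeholder namespace
    s3:
      provider: minio                 # assuming "minio" is an accepted provider value
      endpoint: http://minio.demo-ns.svc:9000   # placeholder in-cluster MinIO endpoint
      secretName: minio-secret        # placeholder credentials Secret
      bucket: demo-bucket             # placeholder bucket
```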
This is my BackupSchedule definition for testing purposes:

This is the log of the `backup-tidb-schedule-backup-minio` pod:

There isn't any reference to retention operations in it.
No backups were deleted in the MinIO storage; currently, I have 20 backups in the MinIO bucket.
My TiDB version is v4.0.4.
Thank you
Regards