Is your feature request related to a problem? Please describe.
Background: Milvus version 2.2.12, restoring a backup from cluster A to cluster B. Cluster A is Milvus standalone; cluster B is a k8s cluster installed with Helm.
Problem: On cluster A, after running ./milvus_backup create -n my_backup, I copied my_backup from the local MinIO directory into the backup directory of the S3 bucket used by cluster B. After switching to cluster B's backup.yaml, running ./milvus_backup restore -n my_backup fails with "my_backup not found", and ./milvus_backup list does not show the backup either.
Describe the solution you'd like.
Steps taken:
1. Run ./milvus_backup create -n my_backup on cluster A and copy my_backup to the local machine.
2. Copy the my_backup directory into the backup directory of cluster B's S3 bucket.
3. On the VM, run ./milvus_backup restore -n my_backup; it fails with "failed to get backup, msg=not found".
4. On the VM, run ./milvus_backup list; it fails with "fail to read backup, path=backup/my_backup" and "read backup meta file not exist, path=backup/my_backup/meta_backup_metajson".
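For reference, the directory layout the tool appears to expect can be sketched and checked after the copy. The exact meta file path below is inferred from the error message in step 4 and should be treated as an assumption, not confirmed behavior:

```shell
# Recreate the layout milvus_backup appears to expect under backupRootPath
# ("backup" in the configs below) and check for the meta file the error
# message points at. The meta path is inferred from the log line.
demo_root="/tmp/backup_layout_demo"
mkdir -p "$demo_root/backup/my_backup/meta" "$demo_root/backup/my_backup/binlogs"
echo '{}' > "$demo_root/backup/my_backup/meta/backup_meta.json"

# After copying to cluster B's S3, list the same prefix there and confirm the
# meta file made it across; if it is missing, list/restore report "not found".
if [ -f "$demo_root/backup/my_backup/meta/backup_meta.json" ]; then
  echo "meta file present"
fi
```

Running the same existence check against the objects actually uploaded to cluster B's bucket would show whether the meta directory was copied along with the binlogs.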
Describe an alternate solution.
The yaml used by cluster A is as follows:

```yaml
# Configures the system log output.
log:
  level: info # Only supports debug, info, warn, error, panic, or fatal. Default 'info'.
  console: true # whether print log to console
  file:
    rootPath: "logs/backup.log"

http:
  simpleResponse: true

# milvus proxy address, compatible to milvus.yaml
milvus:
  address: localhost
  port: 19530
  authorizationEnabled: false
  # tls mode values [0, 1, 2]
  # 0 is close, 1 is one-way authentication, 2 is two-way authentication.
  tlsMode: 0
  user: "root"
  password: "Milvus"

# Related configuration of minio, which is responsible for data persistence for Milvus.
minio:
  cloudProvider: "minio" # remote cloud storage provider: s3, gcp, aliyun, azure
  address: localhost # Address of MinIO/S3
  port: 9000 # Port of MinIO/S3
  accessKeyID: minioadmin # accessKeyID of MinIO/S3
  secretAccessKey: minioadmin # MinIO/S3 encryption string
  useSSL: false # Access to MinIO/S3 with SSL
  useIAM: false
  iamEndpoint: ""

  bucketName: "a-bucket" # Milvus Bucket name in MinIO/S3, make it the same as your milvus instance
  rootPath: "files" # Milvus storage root path in MinIO/S3, make it the same as your milvus instance

  # only for azure
  backupAccessKeyID: minioadmin # accessKeyID of MinIO/S3
  backupSecretAccessKey: minioadmin # MinIO/S3 encryption string

  backupBucketName: "a-bucket" # Bucket name to store backup data. Backup data will store to backupBucketName/backupRootPath
  backupRootPath: "backup" # Rootpath to store backup data. Backup data will store to backupBucketName/backupRootPath

backup:
  maxSegmentGroupSize: 2G
  parallelism: 2 # collection level parallelism to backup
  copydata:
    # thread pool to copy data for each collection backup, default 100.
    # which means if you set backup.parallelism = 2 backup.copydata.parallelism = 100, there will be 200 copy executing at the same time.
    # reduce it if blocks your storage's network bandwidth
    parallelism: 128

  keepTempFiles: false

restore:
  # Collection level parallelism to restore
  # Only change it > 1 when you have more than one datanode.
  # Because the max parallelism of Milvus bulkinsert is equal to datanodes' number.
  parallelism: 2
```
The yaml used by cluster B is as follows:

```yaml
# Configures the system log output.
log:
  level: info # Only supports debug, info, warn, error, panic, or fatal. Default 'info'.
  console: true # whether print log to console
  file:
    rootPath: "logs/backup.log"

http:
  simpleResponse: true

# milvus proxy address, compatible to milvus.yaml
milvus:
  address: <cluster B address>
  port: 19530
  authorizationEnabled: false
  # tls mode values [0, 1, 2]
  # 0 is close, 1 is one-way authentication, 2 is two-way authentication.
  tlsMode: 0
  user: "root"
  password: "Milvus"

# Related configuration of minio, which is responsible for data persistence for Milvus.
minio:
  cloudProvider: "aws" # remote cloud storage provider: s3, gcp, aliyun, azure
  address: <cluster B S3 address> # Address of MinIO/S3
  port: 8060 # Port of MinIO/S3
  accessKeyID: ak # accessKeyID of MinIO/S3
  secretAccessKey: sk # MinIO/S3 encryption string
  useSSL: false # Access to MinIO/S3 with SSL
  useIAM: false
  iamEndpoint: ""

  bucketName: "bucketb" # Milvus Bucket name in MinIO/S3, make it the same as your milvus instance
  rootPath: # Milvus storage root path in MinIO/S3, make it the same as your milvus instance

  # only for azure
  backupAccessKeyID: ak # accessKeyID of MinIO/S3
  backupSecretAccessKey: sk # MinIO/S3 encryption string

  backupBucketName: "bucketb" # Bucket name to store backup data. Backup data will store to backupBucketName/backupRootPath
  backupRootPath: "backup" # Rootpath to store backup data. Backup data will store to backupBucketName/backupRootPath

backup:
  maxSegmentGroupSize: 2G
  parallelism: 2 # collection level parallelism to backup
  copydata:
    # thread pool to copy data for each collection backup, default 100.
    # which means if you set backup.parallelism = 2 backup.copydata.parallelism = 100, there will be 200 copy executing at the same time.
    # reduce it if blocks your storage's network bandwidth
    parallelism: 128

  keepTempFiles: false

restore:
  # Collection level parallelism to restore
  # Only change it > 1 when you have more than one datanode.
  # Because the max parallelism of Milvus bulkinsert is equal to datanodes' number.
  parallelism: 2
```
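Per the comments in both configs, backup data is stored under backupBucketName/backupRootPath/&lt;backup name&gt;. A small sketch of the prefixes each side uses (bucket and path names taken from the two configs above; the composition rule itself is inferred from the config comments):

```shell
# Compose the object prefix milvus_backup uses, per the config comments:
# backup data is stored under backupBucketName/backupRootPath/<backup name>.
compose_prefix() { printf '%s/%s/%s' "$1" "$2" "$3"; }

src_prefix=$(compose_prefix "a-bucket" "backup" "my_backup")  # cluster A (source)
dst_prefix=$(compose_prefix "bucketb"  "backup" "my_backup")  # cluster B (target)
echo "source: $src_prefix"
echo "target: $dst_prefix"
# The copied objects must land exactly under the target prefix, meta
# subdirectory included, or list/restore on cluster B cannot find the backup.
```

If the manual copy placed my_backup under any other prefix in bucketb (or dropped the meta files), the "not found" errors in the steps above would follow.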
Anything else? (Additional Context)
No response