Closed szaouam closed 6 years ago
I'd be curious to see the benchmarks on bzip2 taking "a long time" for various archive sizes
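For anyone wanting to produce those numbers, here is a minimal, purely hypothetical timing sketch (not part of SHIELD) that compares a plain tar archive with tar piped through bzip2 on the same directory. The directory path and the /tmp output locations are placeholders; it assumes `tar` and `bzip2` are on the PATH and that there is enough scratch space for the test archives.

```go
// Rough benchmark sketch: how much wall-clock time does bzip2 add on top of
// plain tar for a given directory? Hypothetical, not SHIELD code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timeCmd runs an external command and returns how long it took.
func timeCmd(name string, args ...string) time.Duration {
	start := time.Now()
	if err := exec.Command(name, args...).Run(); err != nil {
		fmt.Println("command failed:", err)
	}
	return time.Since(start)
}

func main() {
	dir := "/var/vcap/store/shared" // directory from the report below; try a smaller sample first
	plain := timeCmd("tar", "-cf", "/tmp/bench.tar", dir)
	bzipped := timeCmd("tar", "-cjf", "/tmp/bench.tar.bz2", dir)
	fmt.Printf("plain tar:   %v\ntar + bzip2: %v\n", plain, bzipped)
}
```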
Hello,
I used the fs plugin to back up the Cloud Foundry blobstore.
I backed up the /var/vcap/store/shared folder.
The size of the backed-up folder is 105 GB.
SHIELD took 7 hours to perform the backup.
The final size of the backup is 101.1 GB.
Best regards
Where was the store for the backup pointed? Could the slowness be related to network upload speed?
@geofffranks I am using an internal S3 storage service. I uploaded the same amount of data using an S3 client; it took about 1 hour.
Which S3 client?
Also, what is backing your s3 storage API? (assuming it's not on-prem AWS 😉)
@jhunt, I used s3cmd.
Right, I am not using on-prem AWS :). I am using an internal S3 storage service (20 MB/s download speed).
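For rough context (my own back-of-the-envelope arithmetic from the numbers above, not a measurement): 105 GB moved in 7 hours works out to roughly 4 MB/s end to end, while the same 105 GB uploaded in about an hour with s3cmd implies closer to 30 MB/s. If those figures hold, the network path can sustain far more than the backup achieved, which points at the compression step rather than upload speed as the likely bottleneck.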
Just to double-check, when you uploaded the same amount of data, were you standing on the same VM that was executing the s3 plugin in the slow-scenario?
@jhunt, yes.
Reopening against the SHIELD project proper.
Hello,
I am using the shield-boshrelease to back up the Cloud Foundry blobstore.
The release uses bzip2 to compress backups, so backups can take a long time (for large amounts of data).
My question is: what about adding a feature that lets the user choose the compression algorithm (plain tar, bzip2, ...), for example by exposing it as a parameter in the bosh-release manifest? (A rough sketch of what I mean follows below.)
Best regards,
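To make the ask concrete, here is a minimal, purely hypothetical sketch (not SHIELD's actual code) of how a compression choice could be mapped to the filter applied to the tar stream. The COMPRESSION environment variable and the property name mentioned afterwards are illustrative only; it assumes the bzip2 and gzip binaries are available on the host.

```go
// Hypothetical sketch: select the compression filter for a tar stream
// from a single configuration value.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

// compressCommand maps a configured compression setting to the external
// filter command used on the tar stream; nil means "no compression".
func compressCommand(kind string) []string {
	switch kind {
	case "bzip2":
		return []string{"bzip2", "-c"}
	case "gzip":
		return []string{"gzip", "-c"} // typically much faster than bzip2
	case "none":
		return nil
	default:
		return []string{"bzip2", "-c"} // keep today's behaviour as the fallback
	}
}

func main() {
	kind := os.Getenv("COMPRESSION") // e.g. set from a bosh manifest property
	args := compressCommand(kind)
	if args == nil {
		// no compression: pass the tar stream through untouched
		if _, err := io.Copy(os.Stdout, os.Stdin); err != nil {
			fmt.Fprintln(os.Stderr, "copy failed:", err)
			os.Exit(1)
		}
		return
	}
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdin = os.Stdin   // raw tar stream in
	cmd.Stdout = os.Stdout // compressed stream out, e.g. piped to the storage plugin
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "compression failed:", err)
		os.Exit(1)
	}
}
```

An operator could then flip, say, a hypothetical `compression: gzip` property in the bosh-release manifest without touching the plugins themselves; gzip usually trades a somewhat larger archive for a much shorter backup window.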