Closed: taromurao closed this issue 12 years ago
Great idea, I'll add an option to the rake task.
This feature should be in place in version 0.4.2. Please check it with the aws/s3 gem if you have some time.
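As a hypothetical sketch only (the thread does not state the actual option name; MAX_BACKUPS below is an assumed name, so check the gem's README for the real setting), the rake task could read the retention limit from an environment variable:

files_number_to_leave = (ENV['MAX_BACKUPS'] || '7').to_i # MAX_BACKUPS is an assumed variable name, not confirmed by the gem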
Cool. That was a quick update.
Hi alexkravets, I appreciate that you used my code. When I was viewing the updated code, I noticed that my snippet was cluttering your code quite a bit. Please kindly apply the changes below. My apologies, I am a newbie to GitHub. The code could be refactored more elegantly by an expert, but this is the best I can achieve at the moment.
----- code to be replaced -----
bucket = Bucket.find(bucket_name)
object_keys = []
bucket.objects.each { |o| object_keys << o.key }
object_keys = object_keys.sort
excess = object_keys.count - files_number_to_leave
if excess > 0
  (0..excess-1).each { |i| S3Object.find(object_keys[i], bucket_name).delete }
end
----- replacement code -----
object_keys = Bucket.find(bucket_name).objects.map { |o| o.key }.sort
excess = object_keys.count - files_number_to_leave
(0..excess-1).each { |i| S3Object.find(object_keys[i], bucket_name).delete } if excess > 0
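A further optional sketch, not part of the patch above: the index range can be avoided with Array#first, assuming as above that the backup keys sort chronologically so the oldest come first.

excess = object_keys.count - files_number_to_leave
object_keys.first(excess).each { |key| S3Object.find(key, bucket_name).delete } if excess > 0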
Hello Taro, thanks for testing this!
Here is the way you should go with this: fork the repository, commit your changes to your fork, and open a pull request against this repo. After that I can easily merge your changes with a single click. This is the way we usually go on GitHub.
Thanks!
Thanks. I will follow your advice, but it may take a few days or more. I am trying to balance my work, open source activities, and home duties.
When backing up, I guess users often prefer to keep only a limited number of backups. Sure, it is not such a heavy task to write another script to clean up the backup bucket, but isn't it more convenient to simply integrate the cleanup script with the backup script? Let me know your thoughts, too.
I made a simple script and run it on Heroku as another scheduler task at the moment:
require 'aws/s3'
include AWS::S3
MAX_BACKUPS = 7
BUCKET_NAME = 'BACKUP_BUCKET_NAME' # placeholder: your backup bucket name
Base.establish_connection!( :access_key_id => 'AWS_ACCESS_KEY_ID', :secret_access_key => 'AWS_SECRET_ACCESS_KEY' )
bucket = Bucket.find(BUCKET_NAME)
object_keys = []
bucket.objects.each{|o| object_keys << o.key}
object_keys = object_keys.sort
excess = object_keys.count - MAX_BACKUPS
if excess > 0
  (0..excess-1).each { |i| S3Object.find(object_keys[i], BUCKET_NAME).delete }
end
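For reference, a minimal sketch of wrapping the script above in a rake task so Heroku Scheduler can run it alongside the backup task; the file name, task name, and environment variable names below are assumptions for illustration, not part of the gem:

# lib/tasks/s3_trim_backups.rake (hypothetical file and task names)
require 'aws/s3'

namespace :s3 do
  desc 'Delete the oldest S3 backups, keeping the newest MAX_BACKUPS files'
  task :trim_backups do
    max_backups = (ENV['MAX_BACKUPS'] || '7').to_i   # assumed env var name
    bucket_name = ENV['S3_BUCKET']                   # assumed env var name

    AWS::S3::Base.establish_connection!(
      :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
    )

    # Sort keys, then delete everything except the newest max_backups files.
    object_keys = AWS::S3::Bucket.find(bucket_name).objects.map { |o| o.key }.sort
    excess = object_keys.count - max_backups
    object_keys.first(excess).each { |key| AWS::S3::S3Object.find(key, bucket_name).delete } if excess > 0
  end
end

Heroku Scheduler could then invoke rake s3:trim_backups on whatever cadence you prefer.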