Yes:
duplicati-b98b6b09770134370b226d39a83ce40dc.dblock.zip.aes
duplicati-20140130T180451Z.dlist.zip.aes
duplicati-i78d47cef3cdb4cd69a67a99814dce4b9.dindex.zip.aes
Original comment by kenneth@hexad.dk
on 31 Jan 2014 at 1:09
Sorry, hit send too early. And I'm obviously confused, sorry.
So, the AWS S3 Lifecycle Rule that pushes the big S3 objects to Glacier has
this filter:
duplicati-d* (d as in dog, not b as in boy)
So duplicati objects named
duplicati-d*
should be moved to Glacier. The larger data files are dblocks, which are
duplicati-b*
so shouldn't the Lifecycle Rule filter be the following?
duplicati-b* (b as in boy)
Original comment by pjala...@gigalock.com
on 31 Jan 2014 at 1:15
Yes, "b" for "blocks", I had missed that there were instructions for "d", I
have updated the page on duplicati.com.
Original comment by kenneth@hexad.dk
on 31 Jan 2014 at 1:20
Phew, awesome, great, thanks. They should be moving to Glacier later today...
Original comment by pjala...@gigalock.com
on 31 Jan 2014 at 1:22
Hi again,
Just following up on this. It took me a while to figure out that the S3
Bucket Lifecycle Rule prefix:
a. includes the entire "path" down to the "folder" with the duplicati-* files,
with no leading slashes, and it
b. includes no wildcards.
This Lifecycle Rule prefix is now working for me:
computer-user/duplicati-b
where "computer-user" is the name of a top-level "folder" within my bucket, and
the "duplicati-b" files are directly in that folder.
In my S3 AWS console, all of my "duplicati-b" S3 files, and only those files,
now indicate:
Storage Class: Glacier
Note also that those Glacier objects are not visible at all in the Glacier AWS
console (which I believe is by design).
Not sure, but I think
http://www.duplicati.com/news/howtouseglaciertostorebackups
could be updated with some of these nuances.
Thanks,
Pete
Original comment by pjala...@gigalock.com
on 4 Feb 2014 at 12:26
Hi again,
So the backup to AWS S3 and the S3 Lifecycle move to Glacier seem to be
working well. Now I'd like to do some "tests" and "restores".
I ran:
Duplicati.CommandLine.exe test s3://<bucket>/<folder> --s3-server-name=s3.amazonaws.com --use-ssl --aws_access_key_id=<accesskey> --aws_secret_access_key=<secretaccesskey> --passphrase=<passphrase>
Listing remote folder ...
Downloading file (11.12 KB) ...
Downloading file (863 bytes) ...
Downloading file (1.01 KB) ...
Operation Get with file duplicati-b<random>.dblock.zip.aes attempt 1 of 5 failed with message: The operation is not valid for the object's storage class => The operation is not valid for the object's storage class
I went to my AWS S3 Management Console, looked in my bucket and folder, found
the dblock file, right-clicked it, and clicked "Initiate Restore". In the
pop-up, I entered "1" day. A few hours later the dblock file's Details showed:
Storage Class: Glacier
Restored until Tue, 11 Feb 2014 00:00:00 GMT [Modify]
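For reference, I believe the same per-object restore can also be requested from
the AWS CLI instead of the console; the bucket and key below are placeholders,
and the exact option syntax is an assumption:
aws s3api restore-object --bucket <bucket> --key "computer-user/duplicati-b<random>.dblock.zip.aes" --restore-request '{"Days": 1}'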
Problem is, when I run the "test" again, it looks for different file(s) to test
because of the random nature of the test function.
Is there a way to specify the dblock file to test?
While I have that dblock file restored from Glacier to S3, I'd like to test
some restores from it--is there a way to list the files that are in that dblock
file?
Thanks,
Pete
Original comment by pjala...@gigalock.com
on 9 Feb 2014 at 2:40
Original issue reported on code.google.com by
d...@simplycharlottemason.com
on 21 Aug 2012 at 1:29