stefangweichinger opened this issue 1 year ago

Is there any documentation or howto for using a Google Cloud Storage bucket? I was once told that this is possible but never figured out the actual config. Does anyone use that?
hey,
i believe you can back up to google cloud storage, i did a quick check using a minimal configuration like below.
org "MyConfig"
infofile "/amanda/state/curinfo"
logdir "/amanda/state/log"
indexdir "/amanda/state/index"
dumpuser "amandabackup"
amrecover_changer "changer"
define dumptype simple-gnutar-local {
    auth "local"
    compress none
    program "GNUTAR"
}
device_property "S3_HOST" "commondatastorage.googleapis.com"
device_property "S3_ACCESS_KEY" "<access_key>" # Your S3 Access Key
device_property "S3_SECRET_KEY" "<secret_key>" # Your S3 Secret Key
device_property "S3_SSL" "NO" # you can enable this if you have CA certs.
tpchanger "chg-multi:s3:<bucket_name>/<folder_name>/<slot-1" # Number of tapes(volumes)
changerfile "s3-statefile"
tapetype S3
define tapetype S3 {
    comment "S3 Bucket"
    length 10240 gigabytes # Bucket size 10TB
}
Manually label a volume:
amlabel MyConfig MyConfig-1 slot 1
and amdump should go through.
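after that, a quick way to exercise the whole path would be something like this (a sketch, using the standard amanda commands that also show up later in this thread):

# check that amanda can reach the changer and volumes
amcheck MyConfig
# run the backup
amdump MyConfig
# and inspect the result
amreport MyConfig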
@prajwaltr93 thanks a lot, sounds good, and I will test asap. A quick look lets me ask:
I think I don't have access key and secret key.
My service account key file looks like:
{
    "type": "service_account",
    "project_id": "myproject",
    "private_key_id": "7c82cxxxx",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADAgFxxxxxxx\nln-----END PRIVATE KEY-----\n",
    "client_email": "some@my.tld",
    "client_id": "someid",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/bla.iam.gserviceaccount.com"
}
I don't see how to set the device_property values using that. Can you help here?
Maybe I need a different kind of key(s) from upstream?
I think I need something like this: https://docs.simplebackups.com/storage-providers/ghsg5GE15AMwMo1qFjUCXn/google-cloud-storage-s3-compatible-credentials/8ZKUSSJRJxA4mU4VdxvRfo
Asked the responsible person to generate me those keys.
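(For the record: with sufficient IAM permissions such S3-compatible HMAC keys can apparently be created per service account; a sketch, assuming gsutil or a recent gcloud and the service account email from the JSON key file:

gsutil hmac create <client_email>
# or, with newer tooling:
gcloud storage hmac create <client_email>

)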
Using access_key and secret_key should be the most straightforward approach. but a quick look at the code in s3.h reveals that amanda does support another authentication method, which uses refresh_token, client_id and client_secret to fetch an access_token that is then used to perform the actual request, if STORAGE_API is OAUTH2, i.e. it can be specified like

device_property "STORAGE_API" "OAUTH2"

reference here.
we should be able to set these as device_property, referring here.
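so presumably the full set would look something like this (untested sketch, property names as they later turned out to be used in this thread):

device_property "STORAGE_API" "OAUTH2"
device_property "CLIENT_ID" "<client_id>"
device_property "CLIENT_SECRET" "<client_secret>"
device_property "REFRESH_TOKEN" "<refresh_token>"
device_property "PROJECT_ID" "<project_id>"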
i don't have these kinds of keys, so can't really test this hypothesis, but i hope this helps.
@prajwaltr93 thanks for investigating. Interesting, but not yet 100% matching, I will see if I can figure out something.
I think I was able to create the keys via CLI by doing "gcloud auth application-default login", I now have a json like:
/root/.config/gcloud/application_default_credentials.json
{
    "client_id": "xxxx",
    "client_secret": "yyyyy",
    "refresh_token": "zzzzz",
    "type": "authorized_user"
}
Now I try to configure a third "storage" in my amanda.conf
using a changer like:
define changer cloud {
    tpchanger "chg-multi:s3:gs://someprefix_backup-daily/demotapes/"
    device_property "CLIENT_ID" "xxxx"
    device_property "CLIENT_SECRET" "yyyyy"
    device_property "PROJECT_ID" "ssssss"
    device_property "REFRESH_TOKEN" "yyyyyy"
    device_property "S3_HOST" "commondatastorage.googleapis.com"
    device_property "STORAGE_API" "OAUTH2"
    #device_property "VERBOSE" "true"
    changerfile "s3-statefile"
}
My bucket is named like this: "gs://someprefix_backup-daily/". And I wonder how to configure that, currently I get replies like:
slot 1: While creating new S3 bucket: The specified bucket is not valid.: Invalid bucket name: 'gs:' (InvalidBucketName) (HTTP 400)
all slots have been loaded
If I remove the "gs://" amanda creates a local subdir ... not useful ...
So it seems I am close ... thanks @prajwaltr93
my latest try is
tpchanger "chg-multi:s3:project_id-sb/amanda_vtapes/slot-{1..10}"
Still no success, amcheck simply times out now.
I start with the buckets returned by:
# gcloud storage ls
gs://project_id-sb/
gs://project_id-sb_backup-daily/
I don't know where that "-sb" comes from ... might come from the admin creating it for us.
More tomorrow.
Current error msg with amcheck: "While creating new S3 bucket: Unknown S3 error (None) (HTTP 400)"
from what i know bucket names and '-' don't go well together. '_' shouldn't be a problem
the configuration you are trying seems right i think, not sure what exactly is causing this issue. but i will be getting my hands on different types of auth credentials apart from access_key and secret_key, like client_id, client_secret etc. will be adding any findings here if i make a breakthrough. thank you for posting your findings here.
I checked the debug logs right now and find:
Wed Mar 22 12:52:20.787223120 2023: pid 2660724: thd-0x558bc8e79e00: amcheck-device: Connection #0 to host (nil) left intact
Wed Mar 22 12:52:20.787237979 2023: pid 2660724: thd-0x558bc8e79e00: amcheck-device: data in 91: {
"error": "invalid_grant",
"error_description": "Token has been expired or revoked."
}
Maybe I have to use new credentials, maybe the permissions on the buckets aren't enough (very likely, I already filed a ticket).
i think a quick test to see if the creds work would be to perform the following curl request:
curl -d "client_id=x&client_secret=y&refresh_token=z&grant_type=refresh_token" -X POST https://accounts.google.com/o/oauth2/token
tried your command, got "invalid grant". Reran my stuff with "gcloud auth application-default login", which led me to some "allow Google Auth library" stuff in the browser and some magic connection to my personal google account. I don't understand this fully, that's why I had disabled that again 2 days ago.
Now the test command succeeds, at least I think so.
Used the new credentials in amanda.conf.
amcheck-device.debug looks different now, amcheck never succeeds, though.
I think it tries to create bucket(s) and fails ... I will see what I can quote here without publishing secrets.
No success.
I have:
define changer cloud {
    tpchanger "chg-multi:s3:mybackup-prod-sb/vtapes/slot-{1..9}"
    device_property "CLIENT_ID" "xxxx"
    device_property "CLIENT_SECRET" "yyyy"
    #device_property "CREATE_BUCKET" "YES"
    device_property "MAX_RECV_SPEED" "1000000" # bytes per second
    device_property "MAX_SEND_SPEED" "1000000" # bytes per second
    #device_property "NB_THREADS_BACKUP" "4"   # threads
    device_property "PROJECT_ID" "mybackup-prod"
    device_property "REFRESH_TOKEN" "zzzzzz"
    device_property "S3_HOST" "commondatastorage.googleapis.com"
    #device_property "S3_MULTI_PART_UPLOAD" "YES"
    device_property "S3_SSL" "YES"
    device_property "STORAGE_API" "OAUTH2"
    device_property "VERBOSE" "true"
    changerfile "s3-statefile"
}
define storage cloud {
    tpchanger "cloud"
    LABELSTR "cloud-[0-9][0-9]*"
    autolabel "cloud-%" any
    TAPEPOOL "$r"
    RUNTAPES 1
    TAPETYPE "S3"
    #DUMP-SELECTION ALL FULL
}
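(Side note: an inventory run scoped to just this storage seems to be a quick way to test the changer on its own, as used further below in this thread; <config> is the config name:

amtape <config> inventory -o storage=cloud

)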
I am not able to label a tape, amcheck simply never finishes. Tried different paths etc, no success.
With the original service account I can sync directories and files to the bucket.
One thought is that my account might lack the permission to create new buckets inside the one "parent bucket". I don't know enough about S3 storage to tell that.
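(If someone wants to check: the bucket-level permissions should be visible with something like this — a sketch, assuming a recent gcloud:

gcloud storage buckets get-iam-policy gs://<bucket_name>

)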
hey,
i noticed an issue with the code handling the fetching of access_token, currently testing changes. will let you know if it fixes this issue.
Thanks.
Sounds promising, looking forward to any news here.
@prajwaltr93 Seen your commit. I would have to recompile amanda to apply that. Does it already make sense to try that or do you have other changes planned as well?
preparing my patched gentoo-ebuild already
Did a test, no changed behavior so far. Waiting for the upstream admin to check my S3-credentials etc
hey sorry for the delayed response, yeah as specified in the MR description amanda had trouble reading the access_token. that got fixed, but further requests had issues; i thought it was something to do with the latest curl library installed on my machine, so was ruling that out. let me see if it fixes that.
No problem with the delay, glad you work on that issue. Yes, somewhere I also read that a downgrade of curl helped with accessing S3 (but I can't quote the exact link now).
turns out google cloud storage does not support HTTP/2 here, while curl uses HTTP/2 unless configured otherwise. so added code to use HTTP/1.1. now the request goes through but returns 400 Bad Request: found that the content-length header was not accurate, so the request was failing with
Tue Mar 28 08:44:28.471761992 2023: pid 31620: thd-0x55ee22d60400: amlabel: data in 1555: <!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 400 (Bad Request)!!1</title>
[... inline CSS and Google logo markup of the standard 400 error page elided ...]
<p><b>400.</b> <ins>That’s an error.</ins>
<p>Your client has issued a malformed or illegal request. <ins>That’s all we know.</ins>
Tue Mar 28 08:44:28.471794354 2023: pid 31620: thd-0x55ee22d60400: amlabel: PUT https://commondatastorage.googleapis.com/gbackupsprajwal1/DailySet1%2Fslot-1special-tapestart failed with 400/None
need to see what is causing this; looks to me like this is not a straightforward fix. need to investigate further.
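for reference, the HTTP/1.1 workaround can be reproduced from the curl CLI as well, e.g. (sketch; --http1.1 forces the protocol downgrade, bucket/object/file are placeholders):

curl --http1.1 -X PUT "https://commondatastorage.googleapis.com/<bucket_name>/<object>" --data-binary @<file>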
@prajwaltr93 thanks for investigating further. Sounds like a difficult task. I was pointed at "gcs-fuse" instead: mount bucket via FUSE, use it like a normal storage for vtapes. I might try that also. Still waiting for more feedback of the responsible admin (assigning me the creds etc).
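(The gcs-fuse idea would look roughly like this — a sketch, assuming gcsfuse is installed; the key file path and mount point are hypothetical:

# mount the bucket and point amanda's vtapes at the mount point
gcsfuse --key-file /path/to/service-account.json <bucket_name> /mnt/gcs-vtapes

)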
sure, also access_key and secret_key seems to work from my testing, so if you get hands on those, it should do. will be looking into making OAUTH2 work meanwhile.
I only have the service account key file as mentioned. I don't know if I can generate access_key and secret_key from that. I will research if I find the time.
here the issue with the curl downgrade: https://github.com/zmanda/amanda/issues/213#issuecomment-1491537723
hi,
we will be actively working on this in coming weeks as we have allocated time for this fix, will merge and notify when that happens.
hope this helps, Thanks.
Great to hear, looking forward to a fix and a working setup.
@prajwaltr93 any progress already?
@stefangweichinger hey, sorry for the delay. i was occupied with some other priority tasks. picked it up today again, built a package from the draft PR branch, and with the following configuration,
bash-4.2$ cat amanda.conf
org "MyConfig"
infofile "/amanda/state/curinfo"
logdir "/amanda/state/log"
indexdir "/amanda/state/index"
dumpuser "amandabackup"

device_property "S3_HOST" "commondatastorage.googleapis.com"
device_property "CLIENT_ID" "<client-id>"
device_property "CLIENT_SECRET" "<client-secret>"
device_property "REFRESH_TOKEN" "<refresh_token>"
device_property "S3_SSL" "yes"   # Curl needs to have the S3 Certification Authority (Verisign today)
                                 # in its CA list. If the connection fails, try setting this to NO.
device_property "verbose" "yes"

tpchanger "chg-multi:s3:<bucket_name>/DailySet1/slot-{01,02,03,04,05,06,07,08,09,10}" # Number of tapes in your "tapecycle"
device_property "STORAGE_API" "OAUTH2"
device_property "PROJECT_ID" "zmanda"
changerfile "s3-statefile"   # Amanda will create this file
tapetype S3

define tapetype S3 {
    comment "S3 Bucket"
    length 10240 gigabytes   # Bucket size 10TB
}

tapetype "TEST-TAPE"
define tapetype TEST-TAPE {
    length 100 mbytes
    filemark 4 kbytes
}

define dumptype simple-gnutar-local {
    auth "local"
    compress none
    program "GNUTAR"
}
i was able to take a backup!
bash-4.2$ amreport MyConfig
'/etc/amanda/MyConfig/amanda.conf', line 19: warning: Global changerfile is deprecated, it must be set in the changer section
Hostname: centos7
Org : MyConfig
Config : MyConfig
Date : May 15, 2023
These dumps were to tape MyConfig-001.
The next tape Amanda expects to use is: 1 new tape.
FAILURE DUMP SUMMARY:
amreport: ERROR '/etc/amanda/MyConfig/amanda.conf', line 19: warning: Global changerfile is deprecated, it must be set in the changer section
amtrmidx: ERROR '/etc/amanda/MyConfig/amanda.conf', line 19: warning: Global changerfile is deprecated, it must be set in the changer section
amtrmlog: ERROR '/etc/amanda/MyConfig/amanda.conf', line 19: warning: Global changerfile is deprecated, it must be set in the changer section
STATISTICS:
Total Full Incr. Level:#
-------- -------- -------- --------
Estimate Time (hrs:min) 0:00
Run Time (hrs:min) 0:00
Dump Time (hrs:min) 0:00 0:00 0:00
Output Size (meg) 9.5 9.5 0.0
Original Size (meg) 9.5 9.5 0.0
Avg Compressed Size (%) 100.0 100.0 --
DLEs Dumped 1 1 0
Avg Dump Rate (k/s) 3405.4 3405.4 --
Tape Time (hrs:min) 0:00 0:00 0:00
Tape Size (meg) 9.5 9.5 0.0
Tape Used (%) 9.5 9.5 0.0
DLEs Taped 1 1 0
Parts Taped 1 1 0
Avg Tp Write Rate (k/s) 488.5 488.5 --
USAGE BY TAPE:
Label Time Size % DLEs Parts
MyConfig-001 0:00 9770K 9.5 1 1
NOTES:
planner: Adding new disk localhost:/home/prajwal/source.
taper: Slot 1 with label MyConfig-001 is usable
taper: tape MyConfig-001 Barcode kb 9770 fm 1 [OK]
DUMP SUMMARY:
DUMPER STATS TAPER STATS
HOSTNAME DISK L ORIG-KB OUT-KB COMP% MMM:SS KB/s MMM:SS KB/s
----------------------------------- ---------------------- -------------- -------------
localhost /home/prajwal/source 0 9770 9770 -- 0:03 3404.3 0:20 488.5
(brought to you by Amanda version 3.5.2.git.44ec52f8)
logs suggest that amanda was able to fetch a new token from the temporary credentials.
Mon May 15 21:06:53.538870545 2023: pid 77684: thd-0x15a8c00: taper: Hdr Out: POST /o/oauth2/token HTTP/1.1^M
Mon May 15 21:06:54.325409170 2023: pid 77684: thd-0x15a8c00: taper: data in 1453: {
"access_token": "<access_token>",
"expires_in": 3599,
"scope": "openid https://www.googleapis.com/auth/sqlservice.login https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/accounts.reauth https://www.googleapis.com/auth/cloud-platform",
"token_type": "Bearer",
"id_token": "<id_token>"
}
can you build and patch your test system, and try this out? meanwhile i can look into preparing the PR.
Thanks.
PS: Operating System: CentOS 7
@prajwaltr93 oh, wow. Interesting. I'd have to build amanda on gentoo, that might be difficult. Are there some basic instructions how to build the latest code (order of autoconf etc ...)? A second possibility might be on debian, I have 2-3 servers there and just prepare a fresh debian installation on one of them.
@prajwaltr93 trying to build on gentoo ... as far as I understand I'd have to somehow merge https://github.com/zmanda/amanda/pull/216/commits and build from there? Currently I'd have to add your repo as remote, correct?
yes, after addition you can simply checkout the branch and start build from there.
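i.e. something like this (sketch; the fork URL is assumed from the username, take the actual branch name from the PR page):

# fork URL assumed; <branch-name-from-PR> is a placeholder
git remote add prajwaltr93 https://github.com/prajwaltr93/amanda.git
git fetch prajwaltr93
git checkout <branch-name-from-PR>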
not sure about Gentoo, but for debian there should be a folder called debian under packaging; you should be able to install dependencies using the control file, and get a .deb installer by running ./packaging/deb/buildpkg, or ./packaging/deb/buildpkg server if only server binaries are required.
you will have to run ./autogen first as a pre-requisite before you can start building. the above steps apply if you want to skip the ./configure script and the hassle of selecting all the options.
if that's not the case, the following should suffice:
./autogen
./configure # with options
make
make install
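for the configure step, amanda's build wants at least the user and group options, e.g. (a sketch; the exact option set varies per distro):

./configure --prefix=/usr --with-user=amandabackup --with-group=disk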
thanks, on my way.
I take the configure-options from the gentoo-ebuild .. (I once was proxy maintainer for that).
I now have:
amadmin abt version
build: VERSION="Amanda-3.5.3.git.47fe7d60"
yay!
But:
$ amtape abt inventory -o storage=cloud
slot 1: blank (current) [While creating new S3 bucket: Access denied.: stefangweichinger@gmail.com does not have storage.objects.list access to the Google Cloud Storage bucket. Permission 'storage.objects.list' denied on resource (or it may not exist). (AccessDenied) (HTTP 403)]
Checking things ...
Maybe I misunderstand the concepts of cloud storage, maybe I don't have sufficient permissions.
maybe I don't have sufficient permissions.
from the error message that looks to be the case.
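if the admin agrees, granting a role that carries storage.objects.list on the bucket should fix it, e.g. (a sketch, assuming a recent gcloud; roles/storage.objectAdmin includes storage.objects.list):

gcloud storage buckets add-iam-policy-binding gs://<bucket_name> \
  --member="serviceAccount:<client_email>" \
  --role="roles/storage.objectAdmin"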
yes, mailed to one of the responsible guys to check that. Aside from that I will test 3.5.3 with my other setup on that server.
btw, I don't have "S3_ACCESS_KEY" and "S3_SECRET_KEY" in my conf.
my bad, when pasting the comment here, i copied it from a template and forgot to remove it.
made sure that that is the case; re-ran the backup with only the credentials in the updated comment above, and it is working fine.
thanks for re-testing, yes, I assumed it was just a copy-paste-error. So I have to wait for feedback from that admin. Thanks so far.
It seems that I somehow mixed up the service key and my google-account (remember that "gcloud auth application-default login" ?). The responsible admin mentioned some "hmac key", I am waiting for his response now.
I have that HMAC Key now (associated with the Service Account). It looks like:
Access ID: GOOG1xxx Secret: u9F94xxx
Tried to set this as
device-property "S3_ACCESS_KEY" "xxx"
device-property "S3_SECRET_KEY" "xxx"
That doesn't work. Do I have to use it as CLIENT_ID and CLIENT_SECRET? That gives me errors trying to create buckets ("slot 4: While creating new S3 bucket: Unknown S3 error (None) (HTTP 401)").
I think I am close. Pls advise.
It seems I have the authentication right now, amanda scans the inventory etc. amlabel doesn't succeed ... or at least it takes a long time.
While it waits(?) I see in the logs:
Di Mai 30 08:53:29.256918212 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In: HTTP/2 400
Di Mai 30 08:53:29.256951337 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In: content-type: text/html; charset=UTF-8
Di Mai 30 08:53:29.257014430 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In: referrer-policy: no-referrer
Di Mai 30 08:53:29.257037505 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In: content-length: 1555
Di Mai 30 08:53:29.257058430 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In: date: Tue, 30 May 2023 06:53:29 GMT
Di Mai 30 08:53:29.257104150 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Time Offset (remote - local) :0
Di Mai 30 08:53:29.257121976 2023: pid 1317907: thd-0x561deec3cc00: amlabel: Hdr In:
Di Mai 30 08:53:29.257143872 2023: pid 1317907: thd-0x561deec3cc00: amlabel: HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
You mentioned something around http2, maybe I'm missing some additional patch(es)?
Additional question: with
device_property "STORAGE_API" "OAUTH2"
I get "oauth client not found" errors. I assume I am missing some package/binary. This is a gentoo system, do I need perl libraries for that or ...
sure, also access_key and secret_key seems to work from my testing, so if you get hands on those, it should do. will be looking into making OAUTH2 work meanwhile.
Ah, so this might still be an issue (in my current patched version).
I have so many things I don't know that this is HARD to fix.
sorry for flooding this, I just want to get on with this.
I tried to use my HMAC-Key for "S3_ACCESS_KEY" and "S3_SECRET_KEY", with
# device_property "STORAGE_API" "OAUTH2"
# disabled by commenting out
Gives me:
amlabel: Device s3:mybuck/vtapes/slot_1 error = 'While writing amanda header: CURL error: HTTP/2 stream 7 was not closed cleanly: PROTOCOL_ERROR (err 1) (None) (CURLcode 92)'
Mi Mai 31 09:39:38.123570744 2023: pid 1343389: thd-0x564db430fc00: amlabel: Device s3:mybuck/vtapes/slot_1 setting status flag(s): DEVICE_STATUS_DEVICE_ERROR, and DEVICE_STATUS_VOLUME_ERROR
Mi Mai 31 09:39:38.123641235 2023: pid 1343389: thd-0x564db430fc00: amlabel: /usr/lib64/perl5/vendor_perl/5.36/Amanda/Label.pm:796:error:1000028 Error writing label: While writing amanda header: CURL error: HTTP/2 stream 7 was not closed cleanly: PROTOCOL_ERROR (err 1) (None) (CURLcode 92).
Again: maybe my amanda does not yet have all your patches?
VERSION="Amanda-3.5.3.git.47fe7d60"
hey,
so since you have access and secret keys, you can use a configuration like below:
device_property visible "S3_ACCESS_KEY" "<access_key>"
device_property visible "S3_SECRET_KEY" "<secret_key>"
device_property visible "STORAGE_API" "AWS4" # please make sure to set this.
hope this works for you,
ps: you don't need the patched version from this branch for the above configuration to work.
this document is for AWS but it should work for you as well https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3
you will have to specify the following property as well:
device_property "S3_HOST" "commondatastorage.googleapis.com"
just a sample configuration i used to test is specified below
device_property "S3_ACCESS_KEY" "GOOG<rest_of_the_access_key>" # Your S3 Access Key
device_property visible "S3_BUCKET_LOCATION" "us-east-1" # defaults to us-east-1
device_property "CREATE-BUCKET" "on" # unless you haven't created manually via console,
device_property "S3_SECRET_KEY" "<secret_key>" # Your S3 Secret Key
device_property "S3_SSL" "YES" # Curl needs to have S3 Certification Authority (Verisign today)
device_property "S3_HOST" "commondatastorage.googleapis.com" # mandatory
device_property "STORAGE_API" "AWS4"
tpchanger "chg-multi:s3:<bucket_name>/slot-{01,02,03,04,05,06,07,08,09,10}" # Number of tapes in your "tapecycle"
changerfile "s3-statefile" # Amanda will create this file
tapetype S3
define tapetype S3 {
comment "S3 Bucket"
length 10240 gigabytes # Bucket size 10TB
}
on running amlabel
bash-4.2$ amlabel test_gcloud test_gcloud-1 slot 1
'/etc/amanda/test_gcloud/amanda.conf', line 9: warning: Global changerfile is deprecated, it must be set in the changer section
Reading label...
Found an empty tape.
Writing label 'test_gcloud-1'...
Checking label...
Success!
bash-4.2$
hey,
so since you have access and secret keys, you can use a configuration like below:
device_property visible "S3_ACCESS_KEY" "<access_key>" device_property visible "S3_SECRET_KEY" "<secret_key>" device_property visible "STORAGE_API" "AWS4" # please make sure to set this.
Why "AWS4", when it's Google Cloud Storage in my case?
Well, I have
Access ID: GOOG1... Secret: ...
and not exactly Access Key and Secret Key.
hope this works for you,
Unfortunately not yet. I contacted their admin to recheck all my keys/secrets/ids.
ps: you don't need the patched version from this branch for the above configuration to work.
OK
this document is for AWS but it should work for you as well https://wiki.zmanda.com/index.php/How_To:Backup_to_Amazon_S3
you will have to specify following property as well
device_property "S3_HOST" "commondatastorage.googleapis.com"
I have that one.
Current error:
slot 7: Can't read label: While trying to read tapestart header: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method. (SignatureDoesNotMatch) (HTTP 403)
It all boils down to having the right credentials, I assume. Thanks.
Why "AWS4", when it's Google Cloud Storage in my case?
AWS4 here stands for AWS Signature Version 4, which is a standard for authentication; it's common across major cloud storage providers.
Can't read label: While trying to read tapestart header:
could this be due to the use of the latest curl library? reverting might help.