google-code-export / s3ql

Automatically exported from code.google.com/p/s3ql

Add support for swift keystone authentication to support multiple data centers #398

Closed by GoogleCodeExporter 9 years ago

GoogleCodeExporter commented 9 years ago
The bucket, s3qlmonitoring, exists and I uploaded a test file to it.

mkfs.s3ql --cachedir /S3ql-cache swift://auth.api.rackspacecloud.com/s3qlmonitoring --debug all
_do_request(): start with parameters ('GET', '/', None, {'limit': 1}, None, None)
_do_request(): no active connection, calling _get_conn()
_get_conn(): start
Connecting to auth.api.rackspacecloud.com...
_get_conn(): GET /v1.0
Connecting to storage101.ord1.clouddrive.com...
_do_request(): GET /v1/MossoCloudFS_64e7a87d-65c6-456c-bbe1-2351260ccc48/s3qlmonitoring/?limit=1
_do_request(): Reading response..
's3qlmonitoring' does not exist

Original issue reported on code.google.com by franci...@natserv.net on 6 May 2013 at 4:07

GoogleCodeExporter commented 9 years ago
When I ran the above I noticed:
>>storage101.ord1.clouddrive.com

I was creating the bucket in DFW. I tried creating the bucket in ORD and it worked.

The VM is in DFW, so having the bucket in a different data center may be slower, and I would likely pay bandwidth charges on both ends (the VM and Cloud Files).

Original comment by franci...@natserv.net on 6 May 2013 at 4:14

GoogleCodeExporter commented 9 years ago
That's a problem on Rackspace's end then. The storage101.ord1.clouddrive.com hostname comes from Rackspace's auth server. Can you get in touch with them?

Original comment by Nikolaus@rath.org on 6 May 2013 at 4:32

GoogleCodeExporter commented 9 years ago
Done. Will let you know what they answer.

Original comment by franci...@natserv.net on 6 May 2013 at 4:34

GoogleCodeExporter commented 9 years ago
Got a reply from Rackspace.
-------------
Looking at your account, ORD is your account's default data center, so if the s3ql author is using the old v1 of our API, it will only show one data center for those calls. We can make DFW your default provisioning point if you like, but we do see a lot of programs not using the newer API version, or not designed to offer an option to select the data center.

Original comment by franci...@natserv.net on 6 May 2013 at 4:47
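For context, the "old v1" the support reply refers to is the single-endpoint auth exchange visible in the debug log above (GET /v1.0): the response headers name exactly one storage URL, which is why only the account's default data center is ever exposed to the client. A minimal sketch of that exchange (the function names and parsing helper are illustrative, not s3ql's actual code):

```python
import urllib.request

def parse_v1_headers(headers):
    """Extract (token, storage_url) from a v1.0 auth response's headers.
    Only one X-Storage-Url is returned, pointing at the default data center."""
    return headers["X-Auth-Token"], headers["X-Storage-Url"]

def v1_auth(auth_url, username, api_key):
    """Perform the legacy v1.0 auth exchange: a single authenticated GET."""
    req = urllib.request.Request(auth_url, headers={
        "X-Auth-User": username,
        "X-Auth-Key": api_key,
    })
    with urllib.request.urlopen(req) as resp:
        return parse_v1_headers(resp.headers)
```

Since the storage URL is dictated entirely by the server, there is nothing the client can do in v1 to reach a non-default region.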

GoogleCodeExporter commented 9 years ago

Original comment by Nikolaus@rath.org on 6 May 2013 at 5:26

GoogleCodeExporter commented 9 years ago
I was looking into this, but the Rackspace API documentation at http://docs.rackspace.com/files/api/v1/cf-devguide/content/Authentication-d1e639.html doesn't say anything about a newer API. Do you have a link to documentation for the newer API?

Original comment by Nikolaus@rath.org on 2 Jun 2013 at 3:40

GoogleCodeExporter commented 9 years ago
How about this:
http://docs.rackspace.com/auth/api/v2.0/auth-client-devguide/content/Sample_Request_Response-d1e64.html

Entry point for the v2.0 API:
http://docs.rackspace.com/auth/api/v2.0/auth-client-devguide/content/Overview-d1e65.html

Original comment by franci...@natserv.net on 2 Jun 2013 at 3:46

GoogleCodeExporter commented 9 years ago
Issue 380 has been merged into this issue.

Original comment by Nikolaus@rath.org on 2 Jun 2013 at 4:29

GoogleCodeExporter commented 9 years ago
I see, thanks! It's weird that the CloudFiles documentation doesn't even 
mention that a different kind of authentication is possible.

Doing keystone auth itself seems rather easy. However, I'm not yet sure how 
this is going to help with automatic datacenter selection. With keystone auth, 
S3QL has the proper server name for every datacenter, but I do not yet see a 
way to determine which datacenter to use for a given container. Having to 
specify that manually would be rather annoying...

Original comment by Nikolaus@rath.org on 2 Jun 2013 at 4:33
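To illustrate the selection problem: in the v2.0 flow, the token response carries a service catalog listing one endpoint per region, so it is the client, not the server, that picks the data center — which is why the region still has to be specified somehow. A hedged sketch, assuming the keystone v2.0 tokens API with the Rackspace-specific RAX-KSKEY apiKey credential extension; the function names are my own, not s3ql's:

```python
import json
import urllib.request

AUTH_URL = "https://identity.api.rackspacecloud.com/v2.0/tokens"

def build_auth_body(username, api_key):
    # Rackspace-specific credential extension for v2.0 token requests
    return json.dumps({"auth": {"RAX-KSKEY:apiKeyCredentials": {
        "username": username, "apiKey": api_key}}}).encode()

def pick_endpoint(service_catalog, service_type, region):
    """Scan the service catalog for the publicURL matching type and region."""
    for service in service_catalog:
        if service.get("type") != service_type:
            continue
        for ep in service.get("endpoints", []):
            if ep.get("region") == region:
                return ep["publicURL"]
    return None

def authenticate(username, api_key, region):
    """Return (token, storage_url) for the caller-chosen region."""
    req = urllib.request.Request(
        AUTH_URL, data=build_auth_body(username, api_key),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        access = json.load(resp)["access"]
    return (access["token"]["id"],
            pick_endpoint(access["serviceCatalog"], "object-store", region))
```

Since the catalog does not say which region holds a given container, a client would either have to take the region from the user (e.g. in the storage URL) or probe each region's endpoint for the container.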

GoogleCodeExporter commented 9 years ago
This issue was closed by revision 19e730e10cf1.

Original comment by Nikolaus@rath.org on 7 Jul 2013 at 4:59