andrinux / opendedup

Open Deduplication File System

Does not work with Singapore Bucket #29

Open · GoogleCodeExporter opened this issue 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Create a new bucket in the AWS Management Console in the Asia Pacific (Singapore) region.
2. Run mkfs.sdfs against the new bucket and mount the file system.
3. Copy some test files into the mounted folder.

What is the expected output? What do you see instead?
Expect to see no errors. Instead, warnings and fatal IOExceptions appear, indicating that the HTTP response is a 307 Temporary Redirect to Amazon's Asia Pacific domain name and URL.

What version of the product are you using? On what operating system?
Version 1.00 on Ubuntu 10.04 LTS.

Please provide any additional information below.
Please assist with this. The error appears to be caused by outdated third-party libraries and a hardcoded S3 endpoint.
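
For reference, the failure originates in JetS3t, which sends requests to the global s3.amazonaws.com endpoint unless configured otherwise. A minimal workaround sketch, assuming JetS3t 0.7.x and its s3service.s3-endpoint configuration property (the class and method names below are illustrative, not existing sdfs code):

import org.jets3t.service.Constants;
import org.jets3t.service.Jets3tProperties;
import org.jets3t.service.S3ServiceException;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

public class RegionalEndpointWorkaround {
    public static RestS3Service connect(String accessKey, String secretKey)
            throws S3ServiceException {
        // Point JetS3t at the Singapore endpoint instead of the global
        // default, so PUTs are not answered with 307 redirects.
        Jets3tProperties props =
                Jets3tProperties.getInstance(Constants.JETS3T_PROPERTIES_FILENAME);
        props.setProperty("s3service.s3-endpoint", "s3-ap-southeast-1.amazonaws.com");
        return new RestS3Service(
                new AWSCredentials(accessKey, secretKey), null, null, props);
    }
}

The same property can also be set in a jets3t.properties file on the classpath, which avoids code changes.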

Original issue reported on code.google.com by her...@gmail.com on 20 Nov 2010 at 4:02

GoogleCodeExporter commented 9 years ago
Similar errors occur in the US-West (us-west-1) region:

19:13:30.589 Thread-10 WARN [org.jets3t.service.impl.rest.httpclient.RestS3Service]: Error Response: PUT '/a188963f3307be196d24f7bc95ff0541dda5b6e61815f9a7' -- ResponseCode: 307, ResponseStatus: Temporary Redirect,
Request Headers: [x-amz-meta-compress: true, Content-Length: 613, Content-Type: binary/octet-stream, x-amz-meta-encrypt: false, User-Agent: JetS3t/0.7.4-dev (Linux/2.6.24-11-pve; amd64; en; JVM 1.7.0-ea), Host: dedup03.s3.amazonaws.com, Expect: 100-continue, Date: Sat, 20 Nov 2010 16:13:30 GMT, Authorization: AWS AKIAICMKTVKXOBXRCOOQ:ymQAkKOjqPzV2I2sFvCeRtJwp1c=],
Response Headers: [x-amz-request-id: 1970D465DA91F674, x-amz-id-2: NLUmgBdYCZ27VFAWDKa2UgQOar6rYZzBAMnQ6wNxza1pdDoN7Fc0AiQ0ipOC3auL, Location: https://dedup03.s3-us-west-1.amazonaws.com/a188963f3307be196d24f7bc95ff0541dda5b6e61815f9a7, Content-Type: application/xml, Transfer-Encoding: chunked, Date: Sat, 20 Nov 2010 16:13:30 GMT, nnCoection: close, Server: AmazonS3]

Original comment by her...@gmail.com on 20 Nov 2010 at 4:14

GoogleCodeExporter commented 9 years ago
I will work on this for the upcoming release.

Original comment by sam.silv...@gmail.com on 6 Dec 2010 at 11:15

GoogleCodeExporter commented 9 years ago
S3 defaults to us-east-1 (US_STANDARD) unless a region is specified. This prevents sdfs from working with any region other than us-east-1.

An aws-region command line option needs to be added and passed through to the S3 provider.

Also, if sdfs is running on an AWS instance, it can (and should) get credentials from an IAM role (instance profile) when credentials are not provided on the command line.

Something like the following:

....

if (credentials == null) {
    // Fall back to the default provider chain, which includes the EC2
    // instance profile (IAM role), when no credentials were supplied.
    credentials = new DefaultAWSCredentialsProviderChain().getCredentials();
}

AmazonS3Client s3 = new AmazonS3Client(credentials);
// Bind the client to the requested region instead of the us-east-1 default.
Region awsRegion = Region.getRegion(Regions.fromName(awsRegionName));
s3.setRegion(awsRegion);

...
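
For illustration, a self-contained rendering of that fragment might look like the sketch below. It assumes the AWS SDK for Java v1; the class name, the buildClient helper, and the --aws-region flag parsing are hypothetical, not existing sdfs code.

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3ProviderBootstrap {

    // Builds a region-aware S3 client. Falls back to the default provider
    // chain (environment, system properties, profile files, and the EC2
    // instance profile / IAM role) when no credentials were supplied.
    public static AmazonS3Client buildClient(AWSCredentials credentials,
                                             String awsRegionName) {
        if (credentials == null) {
            credentials = new DefaultAWSCredentialsProviderChain().getCredentials();
        }
        AmazonS3Client s3 = new AmazonS3Client(credentials);
        // Bind the client to the regional endpoint so requests are not
        // answered with 307 redirects, as in the log above.
        s3.setRegion(Region.getRegion(Regions.fromName(awsRegionName)));
        return s3;
    }

    public static void main(String[] args) {
        String regionName = "us-east-1"; // S3's default (US_STANDARD)
        for (int i = 0; i + 1 < args.length; i++) {
            if ("--aws-region".equals(args[i])) {
                regionName = args[i + 1];
            }
        }
        AmazonS3Client s3 = buildClient(null, regionName);
        System.out.println("S3 client bound to region " + regionName);
    }
}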

Original comment by kmcgrath...@gmail.com on 11 Jan 2015 at 6:38