JackYeh / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

response 403 and input output error #43

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. "s3fs -o default_acl=public-read bucketName /mnt/s3 -o use_cache=/tmp"
and I have triple-checked the access info in /etc/passwd-s3fs and connected
successfully with another S3 app

2. no error when I run it, but "tail -f /var/log/messages" shows...
s3fs: ###response=403
s3fs: init $Rev: 177 $

3. if I try "ls /mnt/s3/" I get...
ls: reading directory /mnt/s3: Input/output error

What is the expected output? What do you see instead?
I have a couple of files in the bucket which I can view using Cockpit online,
but I can't get past the input/output error with s3fs.

What version of the product are you using? On what operating system?
s3fs v177, Fedora Core 3, curl-7.19.0

Please provide any additional information below.

Original issue reported on code.google.com by huso...@gmail.com on 8 Oct 2008 at 10:00

GoogleCodeExporter commented 8 years ago
Hi- has this been resolved? If not, try using Ethereal to capture the Amazon S3
"error document" returned in the 403 response; that should shed some light on
exactly why it's returning 403.
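Once the raw 403 response body has been captured and saved to a file, the S3 error document is plain XML, and the interesting fields can be pulled out with standard tools rather than read off a packet trace. A minimal sketch, assuming a saved response body; the XML contents below are illustrative placeholders, not from this report:

```shell
#!/bin/sh
# Sample S3 error document, as returned in the body of a 403 response.
# These values are illustrative placeholders.
cat > /tmp/s3-error.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided.</Message>
  <RequestId>EXAMPLE-REQUEST-ID</RequestId>
</Error>
EOF

# Pull the <Code> and <Message> elements out of the saved response.
sed -n 's|.*<Code>\(.*\)</Code>.*|Code: \1|p' /tmp/s3-error.xml
sed -n 's|.*<Message>\(.*\)</Message>.*|Message: \1|p' /tmp/s3-error.xml
```

The <Code> element is the key piece: it distinguishes a bad key (SignatureDoesNotMatch, InvalidAccessKeyId) from a clock problem (RequestTimeTooSkewed) or a missing bucket.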

Original comment by rri...@gmail.com on 15 Oct 2008 at 6:07

GoogleCodeExporter commented 8 years ago
I had a 403 error as a result of an incorrect access key. Double-check that
your keys are correct!

Original comment by bradley....@gmail.com on 25 Oct 2008 at 6:02

GoogleCodeExporter commented 8 years ago
Yup- also, be sure the local machine time is accurate to within 15 minutes of
Amazon's servers! (TimeTooSkewed)
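The 15-minute tolerance can be checked by comparing the local clock against a server timestamp, for example the Date header of any S3 response. A minimal sketch using fixed example timestamps (GNU date assumed; in practice the server value would come from something like `curl -sI https://s3.amazonaws.com`):

```shell
#!/bin/sh
# Check whether two clocks are within S3's 15-minute (900 s) tolerance.
# Both timestamps below are fixed examples: the "local" clock is 20
# minutes ahead of the server, which would trigger the skew error.
server_time="Sat, 25 Oct 2008 18:19:00 GMT"   # e.g. from the Date response header
local_time="Sat, 25 Oct 2008 18:39:00 GMT"    # e.g. from `date -u`

server_epoch=$(date -d "$server_time" +%s)    # GNU date parses RFC-style dates
local_epoch=$(date -d "$local_time" +%s)

skew=$((local_epoch - server_epoch))
[ "$skew" -lt 0 ] && skew=$((-skew))

if [ "$skew" -gt 900 ]; then
    echo "clock skew ${skew}s exceeds 900s: expect a RequestTimeTooSkewed 403"
else
    echo "clock skew ${skew}s is within tolerance"
fi
```

The error code Amazon actually returns for this case is RequestTimeTooSkewed; running ntpdate or similar to sync the clock resolves it.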

Original comment by rri...@gmail.com on 25 Oct 2008 at 6:19

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
Finally, back to work on this issue. I used Ethereal to view the error from
Amazon and it is giving me "SignatureDoesNotMatch", but I double- and
triple-checked and it is correct. I created a passwd-s3fs file with
AccessID:SigID with no spaces or line breaks. The same info works fine with
other methods to connect but doesn't seem to work with s3fs. Any more ideas?
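The credentials file format described above (a single ACCESSKEY:SECRET line, no spaces, no extra line breaks) is easy to get subtly wrong, e.g. with a trailing space or a blank line added by an editor. A minimal sketch that writes and sanity-checks such a file; the key values are obviously fake placeholders, and note that newer s3fs versions also refuse credential files readable by group or others:

```shell
#!/bin/sh
# Write a passwd-s3fs-style credentials file and sanity-check its format.
# The key values here are fake placeholders.
f=/tmp/passwd-s3fs
printf '%s:%s\n' "AKIAEXAMPLEACCESSKEY" "exampleSecretKey1234567890" > "$f"
chmod 600 "$f"   # keep credentials private; newer s3fs enforces this

# The file must contain an ACCESSKEY:SECRET line with exactly one colon
# and no embedded whitespace.
if grep -Eq '^[^:[:space:]]+:[^:[:space:]]+$' "$f"; then
    echo "format looks OK"
else
    echo "format problem: check for spaces, blank lines, or a missing colon"
fi
```

If the format checks out and the same keys work elsewhere, the remaining suspects are clock skew and the bucket/request URL being signed, which is where the next comment comes in.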

Original comment by huso...@gmail.com on 2 Dec 2008 at 1:15

GoogleCodeExporter commented 8 years ago
Be sure the bucket you're specifying via s3fs already exists (s3fs does not
create buckets).

Original comment by rri...@gmail.com on 2 Dec 2008 at 2:49

GoogleCodeExporter commented 8 years ago
Great suggestion on TimeTooSkewed... that fixed mine!

Original comment by webmona...@gmail.com on 25 Aug 2010 at 5:51

GoogleCodeExporter commented 8 years ago
Closing this old issue. This either had something to do with time misalignment
or an invalid bucket name.

Original comment by dmoore4...@gmail.com on 29 Oct 2010 at 4:42