Russell-IO / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

ls: reading directory .: Input/output error #47

Closed. GoogleCodeExporter closed this issue 9 years ago.

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. I put my credentials in /etc/passwd-s3fs (format shown below).
2. mkdir /mnt/mybucket
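
For reference, /etc/passwd-s3fs takes one accessKeyId:secretAccessKey pair per
line and should be readable only by its owner; the keys below are the
placeholder credentials from AWS's own documentation:

echo 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs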

What is the expected output? What do you see instead?

{Make}

[root@vmx1 s3fs]# make install
g++ -ggdb -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse  -pthread -lfuse
-lrt -ldl    -L/usr/kerberos/lib -lcurl -lgssapi_krb5 -lkrb5 -lk5crypto
-lcom_err -lresolv -ldl -lidn -lssl -lcrypto -lz   -I/usr/include/libxml2
-L/usr/lib -lxml2 -lz -lm -lcrypto s3fs.cpp -o s3fs
s3fs.cpp:440: warning: 'size_t readCallback(void*, size_t, size_t, void*)'
defined but not used
ok!
cp -f s3fs /usr/bin

{Mount}

[root@vmx1]# /usr/bin/s3fs mybucket /mnt/mybucket [OK]

{LS}

[root@vmx1]# cd /mnt/mybucket
[root@vmx1]# ls
ls: reading directory .: Input/output error

What version of the product are you using? On what operating system?
OS: CentOS 5.2, FUSE 2.7, s3fs r177

Please provide any additional information below.
I have rechecked my access ID and secret key three times. S3Fox is able to
see the bucket, and the bucket EXISTS.

Original issue reported on code.google.com by mohd%kha...@gtempaccount.com on 20 Jan 2009 at 8:07

GoogleCodeExporter commented 9 years ago
try using a unique bucketName rather than "mybucket", e.g., "khalemi-mybucket"

bucket names are shared in a global namespace across all AWS users, so
"mybucket" is most likely already taken

Original comment by rri...@gmail.com on 20 Jan 2009 at 10:37
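
A quick way to check whether a name is free, with no credentials needed (the
bucket name below is a placeholder): an anonymous request to the bucket URL
returns a NoSuchBucket error for a free name, and AccessDenied or a listing
for a taken one.

curl -s http://khalemi-mybucket.s3.amazonaws.com/ | head -3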

GoogleCodeExporter commented 9 years ago
mybucket was just an example.

I put hwcpmove as the bucket, and the bucket exists:

http://hwcpmove.s3.amazonaws.com/

I have uploaded some images via S3Fox. I cannot use the mounted bucket; ls and
dir give I/O errors.

Original comment by mohd%kha...@gtempaccount.com on 21 Jan 2009 at 12:00

GoogleCodeExporter commented 9 years ago
Is your computer clock accurate?

Original comment by rri...@gmail.com on 21 Jan 2009 at 1:43

GoogleCodeExporter commented 9 years ago
Tue Jan 20 21:38:41 EST 2009

That is my VPS's current date/time.

Original comment by mohd%kha...@gtempaccount.com on 21 Jan 2009 at 1:47

GoogleCodeExporter commented 9 years ago
looks to be ahead by approx 50 min?!?

needs to be within 15 min of amazon's s3 servers

Original comment by rri...@gmail.com on 21 Jan 2009 at 1:51

GoogleCodeExporter commented 9 years ago
OK, how do I sync with Amazon S3's time?
What is Amazon S3's current time?

I'm in Malaysia, which is +8 hours, while my VPS is in the US.
Why am I able to put something into the bucket via S3Fox from Malaysia?

Thanks

Original comment by mohd%kha...@gtempaccount.com on 21 Jan 2009 at 1:55
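
S3 rejects requests whose timestamp is more than about 15 minutes off its own
clock, which runs on UTC, so syncing against any standard NTP server is
enough; S3Fox presumably keeps working because it runs on the local desktop,
whose clock may be accurate even when the VPS clock is not. A minimal sketch
for CentOS 5, assuming the ntp package is installed:

ntpdate pool.ntp.org    # one-shot sync against public NTP servers
chkconfig ntpd on       # keep the clock synced across reboots
service ntpd start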

GoogleCodeExporter commented 9 years ago
I was having this problem as well.  The issue seems to be with s3fs returning
0 as the st_mode when the file has no mode set via the amz-meta headers and
when the content-type is blank.  After applying the attached patch file to the
r177 release and recompiling, everything worked without error.

Original comment by mitchell...@gmail.com on 20 Jul 2009 at 3:45

Attachments:

GoogleCodeExporter commented 9 years ago
Hi,

I'm having the same problem. How do I apply the s3fs-mode.patch?

I've tried patch -p1 < s3fs-mode.patch, but it failed:

can't find file to patch at input line 1
Perhaps you used the wrong -p or --strip option?
File to patch:
Skip this patch? [y]
Skipping patch.
2 out of 2 hunks ignored

Please help, as I am stuck at this stage right now.

Original comment by han...@gmail.com on 16 Aug 2009 at 2:36
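
"can't find file to patch" usually means the -p level does not match the paths
recorded in the patch. A sketch of the usual routine (the directory and patch
file names are assumptions):

cd s3fs-r177                              # the source tree the patch targets
head s3fs-mode.patch                      # check the paths on the --- and +++ lines
patch -p0 --dry-run < s3fs-mode.patch     # test first; -p0 keeps paths as-is
patch -p0 < s3fs-mode.patch               # use -p1 if the paths carry a leading directory
make install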

GoogleCodeExporter commented 9 years ago
Any updates on the matter? Please.

Original comment by han...@gmail.com on 21 Aug 2009 at 4:03

GoogleCodeExporter commented 9 years ago
I am having similar issues.  When I try to apply the patch I get

[root@dtest ~]# patch -pl < s3fs-mode.patch 
patch: **** strip count l is not a number
[root@dtest ~]# 

When I try to use s3fs I get

[root@dtest ~]# /usr/bin/s3fs vhptest /s3
[root@dtest ~]# cd /s3
[root@dtest s3]# ls
ls: reading directory .: Input/output error
[root@dtest s3]# 

Original comment by edwardig...@gmail.com on 16 Nov 2009 at 3:45

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
Try this one.

Original comment by mitchell...@gmail.com on 19 Nov 2009 at 6:57

Attachments:

GoogleCodeExporter commented 9 years ago
Here is Mitchell's patch ported to r191. I am still getting the same error,
though. Perhaps mine has to do with issue #55; I get the error only with the
URL set to https.

Original comment by ratnawee...@gmail.com on 3 Jul 2010 at 1:48

Attachments:

GoogleCodeExporter commented 9 years ago
This issue is very annoying indeed.

I am willing to admit I am being a fool if that is the case (likely).

When I try to apply the patch I get:
[root@node1 ~]# patch -p1 < s3fs-mode.patch
patch: **** Only garbage was found in the patch input.

Also, it's worth mentioning for the guy a few posts up: it's -p1 (one), not
-pl (letter l).

My clock is set correctly and is within 15 minutes of Amazon's.

My bucket name is unique.

Yet I still get the input/output error.

Any advice?

Original comment by backtog...@gmail.com on 7 Jul 2010 at 6:33
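
"Only garbage was found" means patch saw no diff headers at all in the input,
which commonly happens when the attachment was saved as an HTML page instead
of the raw patch. A quick sanity check (the file name is an assumption):

file s3fs-mode.patch      # should report ASCII text, not HTML
head -5 s3fs-mode.patch   # should start with diff/---/+++ headers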

GoogleCodeExporter commented 9 years ago
This issue affects me as well (Ubuntu 9.10 on EC2).  The error only occurs if
"url=https://s3.amazonaws.com" is used in fstab.

Original comment by seanca...@gmail.com on 13 Jul 2010 at 4:22
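
For reference, the fstab entry in question has this shape (old s3fs#bucket
fstab syntax; bucket name and mount point are placeholders):

s3fs#mybucket /mnt/mybucket fuse url=https://s3.amazonaws.com 0 0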

GoogleCodeExporter commented 9 years ago
Ok, looks like a patch for this bug has been made by f3rrix in issue 85:
http://code.google.com/p/s3fs/issues/detail?id=85

You can download his patch in that thread.  If you don't know how to patch 
s3fs, you can manually fix this problem by:

download s3fs

untar the file

edit s3fs.cpp

go to line 289

erase this line:
url_str = url_str.substr(0,7) + bucket + "." + url_str.substr(7,bucket_pos - 7) + url_str.substr((bucket_pos + bucket_size));

add these lines:
int clipBy = 7;
if (!strncasecmp(url_str.c_str(), "https://", 8)) {
    clipBy = 8;
}
url_str = url_str.substr(0, clipBy) + bucket + "." + url_str.substr(clipBy, bucket_pos - clipBy) + url_str.substr(bucket_pos + bucket_size);

Save and exit the file.

make install

And presto!  Your S3 drive will now work when you mount it by https!

Major thanks to f3rrix for figuring this one out!

Original comment by seanca...@gmail.com on 13 Jul 2010 at 4:41
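
After rebuilding, the previously failing https mount can be retested from the
command line (bucket name and mount point are placeholders):

/usr/bin/s3fs mybucket /mnt/mybucket -o url=https://s3.amazonaws.com
ls /mnt/mybucket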

GoogleCodeExporter commented 9 years ago
[deleted comment]
GoogleCodeExporter commented 9 years ago
Can't get https to work at all on an updated CentOS 5.5 system.  On an updated 
F13 system, only r177 works with https.  r191 with or without the patch doesn't 
work with https.

This is on an x86_64 system.

Original comment by goo...@datadoit.com on 28 Sep 2010 at 7:37

GoogleCodeExporter commented 9 years ago
Output from makefile on updated CentOS 5.5 x86_64 system:

[root@localhost s3fs]# make -f Makefile
g++ -ggdb -Wall -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse  -pthread -L/lib64 
-lfuse -lrt -ldl    -L/usr/kerberos/lib64 -lcurl -ldl -lgssapi_krb5 -lkrb5 
-lk5crypto -lcom_err -lidn -lssl -lcrypto -lz   -I/usr/include/libxml2 -lxml2 
-lz -lm -lcrypto s3fs.cpp -o s3fs
s3fs.cpp: In function 'std::string calc_signature(std::string, std::string,
std::string, curl_slist*, std::string)':
s3fs.cpp:453: warning: value computed is not used
s3fs.cpp: In function 'int put_local_fd(const char*, headers_t, int)':
s3fs.cpp:794: warning: format '%llu' expects type 'long long unsigned int',
but argument 4 has type '__off_t'
s3fs.cpp: In function 'int s3fs_readlink(const char*, char*, size_t)':
s3fs.cpp:892: warning: comparison between signed and unsigned integer
expressions
s3fs.cpp: At global scope:
s3fs.cpp:467: warning: 'size_t readCallback(void*, size_t, size_t, void*)'
defined but not used
ok!

Note there is no /usr/kerberos/lib64 directory available.  Only:
/usr/share/doc/oddjob-0.27/sample/usr/lib64
/usr/lib64
/usr/local/lib64
/lib64

Created a symbolic link from /usr/kerberos/lib64 to the /lib64 directory, then
rebuilt s3fs, but no luck.
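
The workaround tried above is a one-liner (the link at /usr/kerberos/lib64
points to /lib64):

ln -s /lib64 /usr/kerberos/lib64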

Original comment by goo...@datadoit.com on 28 Sep 2010 at 7:51

GoogleCodeExporter commented 9 years ago
There have been lots of changes and improvements since this issue was first 
reported. Please try the latest code. If the issue is still present, please 
provide very detailed instructions on how to reproduce the issue.  Thank you.

Original comment by dmoore4...@gmail.com on 5 Feb 2011 at 1:52

GoogleCodeExporter commented 9 years ago
Looking through the source code, quite a lot of things can result in the 
Input/Output error.  In my case, I was using default_acl="public-read" in my 
fstab entry, and the quotes were causing problems when I tried to write a file 
(reading files worked fine).  Removing the quotes fixed my problem.
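
In other words, the working fstab entry carries the option value bare (a
sketch; bucket name and mount point are placeholders):

s3fs#mybucket /mnt/mybucket fuse default_acl=public-read 0 0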

For those out there trying to get this working, the general debugging technique 
is either tcpdump or strace:
tcpdump -s 1500 -A port 80
strace -vf -o s3fs.trace /usr/bin/s3fs ...rest-of-command...
less s3fs.trace

Ian Charnas

Original comment by ian.char...@gmail.com on 17 Feb 2011 at 1:32

GoogleCodeExporter commented 9 years ago
Sure, as with almost any code, there are potential issues lurking.  If you spot
a specific issue that gives the developers a good chance to recreate it, please
open a new issue and we'll look into it.  General, non-specific issues lacking
sufficient detail are difficult to address.

Since the original reporter of this issue has not responded, I'll assume that 
either the original issue has been resolved by newer revisions, or that there's 
no longer interest.

Closing this old issue as the original issue reported is assumed to be fixed.

Ian, please feel free to open a new issue regarding the quotes on options -- 
maybe you can help resolve it?

Original comment by dmoore4...@gmail.com on 26 Feb 2011 at 6:37

GoogleCodeExporter commented 9 years ago
This is not fixed; I have the latest code installed and still get input/output
errors!

Standard Ubuntu 64-bit Lucid, and it doesn't do what it says on the tin!

Original comment by philly%u...@gtempaccount.com on 21 Mar 2011 at 12:20

GoogleCodeExporter commented 9 years ago
Please open a new issue specific to the behavior that you are seeing.

Original comment by dmoore4...@gmail.com on 21 Mar 2011 at 4:59