guyson / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

folder showing up as a file instead of folder on unix system #381

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
Detailed description of support request:

I have an S3 bucket, let's say it is called hb-s3-bucket, that has a 
folder (filepicker) with images in it. In the S3 console and S3Fox 
Organizer the folder shows up as a folder, but when I mount the bucket, the 
folder shows up as a file. Other folders in the bucket show up as folders; I'm 
not sure why it is only this particular one.

I even tried mounting on a clean server, but it still shows up as a file, not a 
directory. This was working just fine until people started reporting 500 
errors on my site; that is how I found that the folder had somehow changed from 
a directory to a file.

I did a support chat with Amazon, and they said that the folder in the bucket 
was a key and they could see objects in there, so I figured it was just an s3fs 
issue.

===================================================================
The following information is very important in order to help us to help
you.  Omission of the following details may delay your support request or
cause it to receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version): 1.73

Version of fuse being used (pkg-config --modversion fuse): 2.9.2

System information (uname -a): Linux ip-xx-xxx-xx-xxx 3.4.57-48.42.amzn1.x86_64 
#1 SMP Mon Aug 12 21:43:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Distro (cat /etc/issue): Amazon Linux AMI release 2013.03

s3fs command line used (if applicable):
s3fs hb-s3-bucket /media/hb-s3-bucket -o rw,allow_other,uid=48,gid=48,umask=0002,use_cache=/s3_cache/hb-s3-bucket,default_acl=public-read,dev,suid

/etc/fstab entry (if applicable):

s3fs syslog messages (grep s3fs /var/log/syslog):

Oct 24 13:31:21 s3fs: file locked(/hb-s3-bucket - /s3_cache/.hb-s3-bucket.stat/filepicker)
Oct 24 13:31:21 s3fs: file unlocked(/filepicker)
Oct 24 13:31:21 s3fs: Body Text:
Oct 24 13:31:21 s3fs: could not download. start(0), size(16384), errno(-2)
Oct 24 13:31:21 s3fs: failed to read file(/filepicker). result=-5

Original issue reported on code.google.com by hbakke...@gmail.com on 24 Oct 2013 at 4:44

GoogleCodeExporter commented 9 years ago
When I ran with debug, the logs say this (I changed the bucket name in this 
ticket from the actual bucket name):

Oct 24 13:31:21 s3fs: file locked(/filepicker - /s3_cache/.hb-s3-bucket.stat/filepicker)
Oct 24 13:31:21 s3fs: file unlocked(/filepicker)
Oct 24 13:31:21 s3fs: Body Text:
Oct 24 13:31:21 s3fs: could not download. start(0), size(16384), errno(-2)
Oct 24 13:31:21 s3fs: failed to read file(/filepicker). result=-5

Original comment by hbakke...@gmail.com on 24 Oct 2013 at 4:47

GoogleCodeExporter commented 9 years ago
Hi,

Which console or application created the folder (filepicker)?
I think s3fs cannot recognize your folder as a directory.

If you can, please let me know about your folder status:
* s3cmd ls s3://hb-s3-bucket
  Does the filepicker folder show as a "file" object or a "DIR" object?
* s3cmd info s3://hb-s3-bucket/filepicker
  Are the stats shown?
* s3cmd ls s3://hb-s3-bucket/filepicker/
  Are the objects under the folder listed?

s3fs recognizes an object as a directory in the following cases:
* the directory object exists and its MIME type is "octet-stream" or 
"x-directory"
* the directory object does not exist, but objects exist under the object path
* etc.

So I want to know whether your directory object exists and what 
status (attributes) the object has.
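The cases above could be sketched roughly like this. This is only a simplified illustration of the rules described in this comment, not s3fs's actual code, and the listing format (a key-to-MIME-type mapping) is a made-up simplification:

```python
# Directory-style MIME types accepted on a marker object,
# per the cases listed above (simplified names).
DIR_MIME_TYPES = {"application/octet-stream", "application/x-directory"}

def looks_like_directory(path, objects):
    """Decide whether `path` should appear as a directory.

    `objects` is a simplified bucket listing: a dict mapping
    object key -> MIME type.
    """
    # Case 1: a marker object exists with a directory-style MIME type.
    marker = objects.get(path) or objects.get(path + "/")
    if marker in DIR_MIME_TYPES:
        return True
    # Case 2: no usable marker, but objects exist under the path.
    prefix = path + "/"
    return any(key.startswith(prefix) and key != prefix for key in objects)
```

Under these rules, a folder whose marker object has the wrong MIME type can still be treated as a directory if other objects exist beneath it, which is why the metadata on the marker matters.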

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 28 Oct 2013 at 9:50

GoogleCodeExporter commented 9 years ago
I am seeing the same thing. Here is an answer to your question from my POV.

Note that in this case, Amazon is creating CloudTrail logs in 
DIR/SUBDIR1/NNNNNNNNNNNN, where NNNNNNNNNNNN is our AWS account ID. I can view the 
directories in s3 browser and from the Amazon console, but s3fs just shows a 
zero-byte file (though it does allow me to descend into SUBDIR1):

# ls -l
total 1
---------- 1 root root 0 Feb 25  2014 NNNNNNNNNNNN

# s3cmd info s3://DIR/
s3://DIR/ (bucket):
   Location:  any
   ACL:       PPPP: READ
   ACL:       PPPP: WRITE
   ACL:       PPPP: READ_ACP
   ACL:       PPPP: WRITE_ACP

# s3cmd info s3://DIR/SUBDIR1/

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
  Please report the following lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Problem: ParseError: no element found: line 1, column 0
S3cmd:   1.0.0

Traceback (most recent call last):
  File "/usr/bin/s3cmd", line 2006, in <module>
    main()
  File "/usr/bin/s3cmd", line 1950, in main
    cmd_func(args)
  File "/usr/bin/s3cmd", line 631, in cmd_info
    info = s3.object_info(uri)
  File "/usr/share/s3cmd/S3/S3.py", line 324, in object_info
    response = self.send_request(request)
  File "/usr/share/s3cmd/S3/S3.py", line 511, in send_request
    raise S3Error(response)
  File "/usr/share/s3cmd/S3/Exceptions.py", line 48, in __init__
    tree = getTreeFromXml(response["data"])
  File "/usr/share/s3cmd/S3/Utils.py", line 66, in getTreeFromXml
    tree = ET.fromstring(xml)
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1302, in XML
    return parser.close()
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1655, in close
    self._raiseerror(v)
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1507, in _raiseerror
    raise err
ParseError: no element found: line 1, column 0

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
    Please report the above lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

# s3cmd info s3://DIR/SUBDIR1/NNNNNNNNNNNN/
s3://DIR/SUBDIR1/NNNNNNNNNNNN/ (object):
   File size: 0
   Last mod:  Tue, 25 Feb 2014 14:25:28 GMT
   MIME type: text/plain
   MD5 sum:   d41d8cd98f00b204e9800998ecf8427e
   ACL:       aws_cloudtrail_us-east-1: FULL_CONTROL
   ACL:       PPPP: FULL_CONTROL

# s3cmd ls s3://DIR/SUBDIR1/NNNNNNNNNNNN/
                       DIR   s3://DIR/SUBDIR1/NNNNNNNNNNNN/CloudTrail/
2014-02-25 14:25         0   s3://DIR/SUBDIR1/NNNNNNNNNNNN/

And I can continue to traverse subdirs with s3cmd. Just for completeness...

# s3fs --version
Amazon Simple Storage Service File System 1.74
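Since the marker object's MIME type here is text/plain rather than one of the types s3fs recognizes as a directory, one possible workaround would be to rewrite the marker in place with a recognized type via an S3 copy-to-self. A hypothetical sketch (the helper name is mine; it only builds the arguments, which you would pass to something like boto3's `copy_object` yourself):

```python
def dir_marker_fix_args(bucket, key):
    """Build arguments for an in-place S3 copy that replaces a directory
    marker's MIME type with one s3fs recognizes as a directory.

    Hypothetical usage: boto3.client("s3").copy_object(**dir_marker_fix_args(...))
    """
    # Directory markers are keys ending in "/"; normalize if needed.
    marker_key = key if key.endswith("/") else key + "/"
    return {
        "Bucket": bucket,
        "Key": marker_key,
        "CopySource": {"Bucket": bucket, "Key": marker_key},
        "ContentType": "application/x-directory",
        # REPLACE is needed so S3 applies the new ContentType instead of
        # copying the old metadata verbatim.
        "MetadataDirective": "REPLACE",
    }
```

Whether this helps depends on whether s3fs actually checks the marker's MIME type in this code path, so treat it as an experiment, not a fix.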

Original comment by psir...@gmail.com on 23 Jan 2015 at 7:22

GoogleCodeExporter commented 9 years ago
So it looks like the issue is that there is no s3fs metadata on the 
files/directories in question.

It looks very similar to this issue
https://code.google.com/p/s3fs/issues/detail?id=73
However, that is marked as fixed and this is still an issue.

It seems like the answer would be to have an option for a default mode when one 
is not obtained from the metadata, e.g. -o mode=0500. I would suggest that the 
default remain 0000.

As well as the permissions, I note that the directory was not detected as a 
directory. I'm not sure what the deal is there.
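The proposed fallback might look something like this. This is a sketch of the suggested -o mode= option, not actual s3fs code; the x-amz-meta-mode header name is what s3fs appears to use for storing the mode, but that is an assumption here:

```python
DEFAULT_MODE = 0o000  # current behavior: no metadata means mode 0000

def effective_mode(headers, fallback=DEFAULT_MODE):
    """Return the permission bits to report for an object.

    Uses the x-amz-meta-mode header when present; otherwise falls back
    to the mode given on the command line (e.g. -o mode=0500).
    """
    raw = headers.get("x-amz-meta-mode")
    if raw is not None:
        # The stored value is a decimal st_mode; keep only permission bits.
        return int(raw) & 0o7777
    return fallback
```

Keeping the fallback at 0000 by default, as suggested, would preserve today's behavior unless the user opts in.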

Original comment by psir...@gmail.com on 23 Jan 2015 at 10:08