LittleFlower2019 / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Not seeing directories / files unless created via s3fs #73

Closed - GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
We have a bucket (let's call it 'littlepig-backup') that shows up fine in
S3Fox, and has under it 'backup' and 'importlogs' directories created by
Amazon's AWS Import / Export service (we shipped them ~150GB of data).  I
can browse it just fine under S3Fox and all the expected data, directory
structure, etc., is there.

When I set up s3fs on two different machines, both of them can mount the bucket:

# s3fs littlepig-backup -o accessKeyId=##REDACTED## -o secretAccessKey=##REDACTED## /mnt/s3

I can change to /mnt/s3 and create files, delete files, etc.; all of that works
fine.  Files created from a shell on the Linux box under /mnt/s3 show up in
S3Fox on the Linux box or elsewhere, and are intact.

However, the data already in the bucket does *not* show up.  Until/unless I
create a file under the s3fs mountpoint, `ls -la` shows no files or
directories, even though they're there in S3Fox.  Creating a file under the
s3fs mountpoint causes it to show up in S3Fox alongside the preexisting
directories / files.

Just ... weird.

Software is (on one machine):

$ rpm -q fuse
fuse-2.7.4-1.el5.rf

$ uname -a
Linux serenity 2.6.18-128.1.16.el5PAE #1 SMP Tue Jun 30 06:45:32 EDT 2009
i686 i686 i386 GNU/Linux

# svn info
Path: .
URL: http://s3fs.googlecode.com/svn/trunk
Repository Root: http://s3fs.googlecode.com/svn
Repository UUID: df820570-a93a-0410-bd06-b72b767a4274
Revision: 185
Node Kind: directory
Schedule: normal
Last Changed Author: rrizun
Last Changed Rev: 177
Last Changed Date: 2008-08-10 15:51:27 -0700 (Sun, 10 Aug 2008)

(on the other):

fuse 2.8.1 (from source)

Linux localhost.localdomain 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT
2009 i686 i686 i386 GNU/Linux

s3fs revision 185

Original issue reported on code.google.com by harsh...@gmail.com on 24 Sep 2009 at 7:03

GoogleCodeExporter commented 8 years ago
Hi - S3 does not have a native concept of folders; it is up to each S3 tool to
come up with its own convention for storing folders (if it wants to). As such,
the various S3 tools' conventions are not compatible.

In this case, s3fs does not understand folders created with S3Fox.

The solution is to use a single tool exclusively against the contents of a bucket,
i.e., only ever use s3fs against a bucket (unless, of course, you know what you're
doing, in which case it is perfectly fine to use another S3 tool, e.g., jets3t
Cockpit, to manipulate the bucket contents, as long as it is done in such a way as
to remain compatible with s3fs, if you wish to continue to use s3fs against the
bucket).
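As a rough illustration (the folder name here is made up), a folder named backup
might be stored in a bucket in any of these ways:

backup              a zero-byte marker object (the convention s3fs uses)
backup/             a zero-byte marker object with a trailing slash (used by several other clients)
backup/file1.txt    no marker object at all; the folder is only implied by the key name

A client that only recognizes one of these conventions will not show the others
as directories.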

Hope that makes sense!

Original comment by rri...@gmail.com on 24 Sep 2009 at 8:16

GoogleCodeExporter commented 8 years ago
The folders weren't created by S3Fox; they were created by Amazon when they did
the import (http://aws.amazon.com/importexport/).  Is there seriously no way to
access that data using s3fs?  (There's no way we're able to upload ~150+ GB of
data using s3fs!)

Would seem like Amazon's mechanism would be a reasonable standard to adopt; I'm
sure we're not the only ones who are going to push a huge data store out via AWS
Import/Export and then want to use rsync or something similar to keep the backup
updated over s3fs...

Original comment by harsh...@gmail.com on 24 Sep 2009 at 11:00

GoogleCodeExporter commented 8 years ago
The problem: S3Fox and s3fs both create zero-length files to use as "directories",
but use different formats.  (S3 itself uses no directories; rather, key names can
contain slashes - I imagine this is the format Amazon Import/Export used, in which
there are no explicit directories and directories must simply be inferred from the
key names of the individual files.)  S3Fox can understand its own directories, and
also seems to understand implied directories present only in uploaded filenames.
s3fs can understand only its own directories, not those created by S3Fox or merely
implied by key names.

The solution is, using s3fs, to create the directories you should be seeing.  The
contents will appear in them as you create them.  However, for as long as the s3fs
directory exists, S3Fox will see an empty file in place of the directory and will
lose access to the contents.  Only one of them can see the contents at any time.
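For example, assuming the bucket already holds a key such as backup/file1 uploaded
by another tool (names here are illustrative), recreating the directory under the
mount point makes the existing contents visible:

# ls /mnt/s3
# mkdir /mnt/s3/backup
# ls /mnt/s3/backup
file1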

If you use s3fs to remove a directory, S3Fox will regain the ability to see its
contents; however, s3fs will only remove an empty directory.  If you don't need to
save the contents, this method of removing the s3fs directory can preserve the
timestamps recorded in the S3Fox directory file, if one exists.

If you need S3Fox to see the contents of an s3fs directory, use S3Fox to remove the
empty regular file corresponding to the s3fs directory, and the contents will appear;
however, S3Fox will also remove its own directory file, destroying any directory
timestamps!  The files will still exist, but in an implied directory structure which
S3Fox can follow and s3fs cannot.  To regain s3fs access, simply recreate the
directory structure using s3fs.

These are my experimental findings using s3fs r191 and S3Fox 0.4.9 in Firefox
3.0.17 on Ubuntu 9.04.

Original comment by ABFur...@gmail.com on 15 Feb 2010 at 3:25

GoogleCodeExporter commented 8 years ago
Issue 81 has been merged into this issue.

Original comment by dmoore4...@gmail.com on 19 Oct 2010 at 2:32

GoogleCodeExporter commented 8 years ago
Issue 94 has been merged into this issue.

Original comment by dmoore4...@gmail.com on 19 Oct 2010 at 2:36

GoogleCodeExporter commented 8 years ago
I modified S3FS to support directories without needing files. The support is 
somewhat quirky, but an improvement over the existing lack of support for 
directories. I uploaded the change to http://www.yikes.com/~bear/s3fs/. I'd be 
happy to help with the merge if something needs to be done. Thanks!

Original comment by cbe...@gmail.com on 19 Oct 2010 at 2:43

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
Thanks cbears, I've tested your changes and now I can see directories not created
via s3fs, but there are also a lot of errors regarding non-existent files which
s3fs tries to list but which really don't exist - it somehow takes fragments of
the URL and expects those fragments to be files:

$ ls -l /mnt/s3/mybucket_production-s3fs-d705449/mybucket/attachments/
ls: cannot access /mnt/s3/mybucket_production-s3fs-d705449/mybucket/attachments/mybuck: No such file or directory
ls: cannot access /mnt/s3/mybucket_production-s3fs-d705449/mybucket/attachments/mybu: No such file or directory
ls: cannot access /mnt/s3/mybucket_production-s3fs-d705449/mybucket/attachments/cket: No such file or directory
total 1010
-rw-r--r-- 1 root grewej 599858 2008-09-09 10:37 adress_formular.pdf
drwxr-xr-x 1 root root        0 2010-11-02 10:08 avatar_items
drwxr-xr-x 1 root root        0 2010-11-02 10:08 avatar_items
drwxr-xr-x 1 root root        0 2010-11-02 10:08 cartoons
drwxr-xr-x 1 root root        0 2010-11-02 10:08 cartoons
-rw-r--r-- 1 root grewej 153564 2009-06-30 12:26 cc_export.html
drwxr-xr-x 1 root root        0 2010-11-02 10:08 character_friends
drwxr-xr-x 1 root root        0 2010-11-02 10:08 character_friends
drwxr-xr-x 1 root grewej      0 2010-05-29 19:08 character_teasers
drwxr-xr-x 1 root grewej      0 2010-05-29 19:08 character_teasers
?????????? ? ?    ?           ?                ? mybu
?????????? ? ?    ?           ?                ? mybuck
drwxr-xr-x 1 root root        0 2010-11-02 10:08 content_elements
drwxr-xr-x 1 root root        0 2010-11-02 10:08 content_items
drwxr-xr-x 1 root root        0 2010-11-02 10:08 content_items
drwxr-xr-x 1 root root        0 2010-11-02 10:08 customer_communications
drwxr-xr-x 1 root root        0 2010-11-02 10:08 customer_communications
[...]

There are some "folders" in that bucket created via s3fs, and some via a 
different method, so only for some there's the empty file.

Original comment by jan.gr...@gmail.com on 2 Nov 2010 at 9:13

GoogleCodeExporter commented 8 years ago
Hi Jan,

Can you send me the XML that Amazon returned for that directory? To do that you 
can use the attached file; Execute it as something like: 

python http_debugging_proxy.py 8080

Then, in another window (example is bash), set:

export http_proxy=localhost:8080

then mount your s3fs volume. The proxy should have a ton of output. 

Either attach the output, or send it to me. The output that matters should be 
for that directory, and look something like:

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>aspera-bear-test-1</Name>
  <Prefix></Prefix><Marker></Marker><MaxKeys>50</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated>
  <Contents>
    <Key>bar</Key><LastModified>2010-11-19T22:26:47.000Z</LastModified><ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag><Size>0</Size>
    <Owner><ID>d509af108fd3d43c43f7916533b7856cbcbb72313e662d65ba7243bd66fbebbb</ID><DisplayName>awsdev</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>foo</Key><LastModified>2010-11-19T22:26:45.000Z</LastModified><ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag><Size>0</Size>
    <Owner><ID>d509af108fd3d43c43f7916533b7856cbcbb72313e662d65ba7243bd66fbebbb</ID><DisplayName>awsdev</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
</ListBucketResult>

Thanks,
  Charles

Original comment by cbe...@gmail.com on 19 Nov 2010 at 10:28

Attachments: http_debugging_proxy.py

GoogleCodeExporter commented 8 years ago

Original comment by dmoore4...@gmail.com on 7 Apr 2011 at 2:26

GoogleCodeExporter commented 8 years ago
This sounds as if it's the same problem I'm having. I uploaded a suite of files
to S3 using s3cmd, but when I mount the bucket on my EC2 instance under s3fs
there are only the files in the top (bucket) directory, no subfolders - although
these are visible in the S3 management console and from every other place I
look, e.g. Cyberduck.
Would be jolly nice if this worked!

Original comment by JIm.R...@googlemail.com on 9 Dec 2011 at 5:17

GoogleCodeExporter commented 8 years ago
I have this problem too with s3fs-1.61

Original comment by ITparan...@gmail.com on 15 Dec 2011 at 2:30

GoogleCodeExporter commented 8 years ago
Output through the proxy:

<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>wbnr</Name>
  <Prefix></Prefix><Marker></Marker><MaxKeys>1000</MaxKeys><Delimiter>/</Delimiter><IsTruncated>false</IsTruncated>
  <Contents>
    <Key>1</Key><LastModified>2011-12-15T13:59:28.000Z</LastModified><ETag>"d41d8cd98f00b204e9800998ecf8427e"</ETag><Size>0</Size>
    <Owner><ID>250d4afb77615772b6ba5b9406188a3932374e37e52a9d540fce5342c3e99a44</ID><DisplayName>omzfgz</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-16-17-19609BC05B44AB82</Key><LastModified>2011-12-15T14:16:18.000Z</LastModified><ETag>"0961d1784d4f679a0f6824e775523b9c"</ETag><Size>7710</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-16-24-8226A2845EAD7411</Key><LastModified>2011-12-15T14:16:25.000Z</LastModified><ETag>"0fea9c812822d93411c86624ba4cb3a8"</ETag><Size>992</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-16-36-AF8BA709B14449E8</Key><LastModified>2011-12-15T14:16:37.000Z</LastModified><ETag>"0b1ae94233cb15a4034b7afbd51ec61f"</ETag><Size>18675</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-17-25-3B5FBEA6420A0C27</Key><LastModified>2011-12-15T14:17:26.000Z</LastModified><ETag>"c6961ab7352cb52c6f9d4c017f45468f"</ETag><Size>2950</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-17-49-28CE047A9F5292B9</Key><LastModified>2011-12-15T14:17:50.000Z</LastModified><ETag>"41109ddd768a951a26ce8d2f47cb75e8"</ETag><Size>6404</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-23-37-A56DFEFE7969BA76</Key><LastModified>2011-12-15T14:23:39.000Z</LastModified><ETag>"53d74d7efc9658902007273b990d7c29"</ETag><Size>283</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <Contents>
    <Key>lg2011-12-15-14-30-37-7CB2E09A9297AED7</Key><LastModified>2011-12-15T14:30:38.000Z</LastModified><ETag>"a65d91855a21bf887095e2a4f9128f99"</ETag><Size>283</Size>
    <Owner><ID>3272ee65a908a7677109fedda345db8d9554ba26398b2ca10581de88777e2b61</ID><DisplayName>s3-log-service</DisplayName></Owner><StorageClass>STANDARD</StorageClass>
  </Contents>
  <CommonPrefixes><Prefix>0/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>1/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>2/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>3/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>4/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>5/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>6/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>7/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>9/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>b/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>c/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>d/</Prefix></CommonPrefixes>
  <CommonPrefixes><Prefix>e/</Prefix></CommonPrefixes>
</ListBucketResult>

but ls shows only:

# ls /s3
1
lg2011-12-15-14-16-17-19609BC05B44AB82
lg2011-12-15-14-16-24-8226A2845EAD7411
lg2011-12-15-14-16-36-AF8BA709B14449E8
lg2011-12-15-14-17-25-3B5FBEA6420A0C27
lg2011-12-15-14-17-49-28CE047A9F5292B9
lg2011-12-15-14-23-37-A56DFEFE7969BA76
lg2011-12-15-14-30-37-7CB2E09A9297AED7

Original comment by ITparan...@gmail.com on 15 Dec 2011 at 2:39

GoogleCodeExporter commented 8 years ago
Hm, looks like Amazon doesn't list the other folders in the response.
There are more directories in the web listing.

Original comment by ITparan...@gmail.com on 15 Dec 2011 at 2:46

GoogleCodeExporter commented 8 years ago
Same here... I cannot see subfolders through s3fs that are visible within the
AWS Management Console and through an S3-compatible client such as CrossFTP Pro.

Original comment by joshuaol...@gmail.com on 31 Dec 2011 at 3:44

GoogleCodeExporter commented 8 years ago
Same here

Original comment by lorena.p...@gmail.com on 9 Jan 2012 at 4:42

GoogleCodeExporter commented 8 years ago
Same problem here. 

It sounds like there are two approaches for a fix. cbears attempted to get s3fs
to see into existing directories (which would be ideal). A less-good option
would be for s3fs to publish a spec for how it stores folders so that others
could pile on and build the right tools.

Does some kind of upload tool already exist? (Mounting a file system for a 
simple upload is pretty heavy, and requires root.)

Original comment by da...@walend.net on 19 Jan 2012 at 9:06

GoogleCodeExporter commented 8 years ago
This is no longer simply a problem of s3fs not being able to read directories 
created by third party clients.  It can't read directories created with the 
'create folder' button in the AWS console, either.  This native S3 directory 
structure is the standard, right?

Original comment by seth.plo...@affectiva.com on 24 Jan 2012 at 11:04

GoogleCodeExporter commented 8 years ago
Seth, the AWS console is at least a de facto standard, and will be the eventual
winner. If s3fs is to support anything beyond what it does now, AWS console
behavior should be top priority. (I didn't find anything describing what the AWS
console was doing, but I didn't look too deeply.)

Original comment by da...@walend.net on 26 Jan 2012 at 3:49

GoogleCodeExporter commented 8 years ago
This would be an excellent enhancement, if s3fs looked at folders the same way
that the S3 console creates them.

Original comment by bi...@mypatientcredit.com on 26 Jan 2012 at 7:36

GoogleCodeExporter commented 8 years ago
I am dumbfounded as to why folder contents do not mirror what is created via 
the S3 interface on Amazon.  Why would anyone use this if you can't use the 
existing tools to work with the data?

Original comment by ixo...@gmail.com on 11 Mar 2012 at 11:04

GoogleCodeExporter commented 8 years ago
Would have been nice to find this before I went to the trouble of compiling FUSE
so I could compile this and abandon the package-management standards. Sigh;
either way, thanks to the devs. Perhaps I'll give s3backer a whirl.

Original comment by slatt...@gmail.com on 20 Apr 2012 at 2:39

GoogleCodeExporter commented 8 years ago
I am also facing the same issue. It's strange, as s3fs is of very little use if
one can't access subdirectories/files created by the S3 console or s3cmd.

Original comment by sksa...@gmail.com on 30 Apr 2012 at 9:47

GoogleCodeExporter commented 8 years ago
Does anyone know of a scriptable set of actions I can perform on a bucket to
make the contents s3fs-readable? Unlike others on this thread, I don't need the
ability to work back and forth; I just need the ability to migrate existing
buckets to being accessible via s3fs in my EC2 instance. Appreciate any help
anyone can offer with this.

Original comment by jazriel...@jewelry.com on 30 Apr 2012 at 9:46

GoogleCodeExporter commented 8 years ago
jazriel, you can use s3fs to create the directory (mkdir /your/missing/dir), 
and then the contents will be viewable.  So you could use another tool 
(python's boto, java's jets3t) to recursively find the directory names and 
create the directories via the filesystem.
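A minimal sketch of that approach, assuming the boto (v2) library, the bucket and
mount point used earlier in this thread, and credentials available in the
environment:

#!/usr/bin/env python
# Recreate, via the s3fs mount, every directory implied by the keys in a bucket.
import os
from boto.s3.connection import S3Connection
from boto.s3.prefix import Prefix

MOUNT = '/mnt/s3'                 # where the bucket is mounted with s3fs

conn = S3Connection()             # credentials come from the environment / boto config
bucket = conn.get_bucket('littlepig-backup')

def make_dirs(prefix=''):
    # A delimiter listing yields Prefix objects for the "subfolders" (CommonPrefixes).
    for entry in bucket.list(prefix=prefix, delimiter='/'):
        if isinstance(entry, Prefix):
            path = os.path.join(MOUNT, entry.name)
            if not os.path.isdir(path):
                os.makedirs(path) # mkdir through the mount so s3fs writes its own marker
            make_dirs(entry.name) # recurse into the subfolder

make_dirs()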

Original comment by seth.plo...@gmail.com on 30 Apr 2012 at 10:37

GoogleCodeExporter commented 8 years ago
Thanks so much, Seth! Worked like a charm :-)

Original comment by jazriel...@jewelry.com on 30 Apr 2012 at 11:11

GoogleCodeExporter commented 8 years ago
One strange issue when doing this: ls -al lists the directory name as a file 
contained in the directory. For example, inside the path ......./xsd I see:

---------- 1 root root                 3596 Jan 24 02:14 jcomfeedschema.1.1.xsd
---------- 1 root root                 3655 Mar 19 16:02 jcomfeedschema.1.2.xsd
---------- 1 root root 18446744073709551615 Dec 31  1969 xsd

What is that, and how do I correct it?

Original comment by jazriel...@jewelry.com on 30 Apr 2012 at 11:21

GoogleCodeExporter commented 8 years ago
Is there any chance of merging with s3fs-c so I can have permissions set and
access files created outside s3fs?

Original comment by tehfl...@gmail.com on 15 May 2012 at 7:52

GoogleCodeExporter commented 8 years ago
Is there any chance this can be fixed in the near future?

Original comment by dannyala...@gmail.com on 25 Jul 2012 at 4:24

GoogleCodeExporter commented 8 years ago
Building on Xiangbin's question, I see there hasn't been a commit or comment 
from the developers in almost a year... Dan Moore & Randy Rizun- We *LOVE* s3fs 
(I sure do). Are you still working on this project? The world needs you, guys 
:-)

Original comment by jazriel...@jewelry.com on 25 Jul 2012 at 4:35

GoogleCodeExporter commented 8 years ago
I am having the same problem: I create a folder in the AWS console and am not
able to see it using s3fs.

Has anybody got any word from the devs?

Original comment by jefferso...@universidadedoingles.com.br on 19 Aug 2012 at 5:05

GoogleCodeExporter commented 8 years ago
@jefferson - With s3fs, the workaround is that you need to recreate the
directories using s3fs. It's not so much a bug as an inherent limitation in the
way s3fs "fakes" having a directory structure in S3. If you mount your bucket in
s3fs and then mkdir the directory you don't see, you'll notice it contains all
the files it's supposed to. Hope that helps.

Original comment by jazriel...@jewelry.com on 19 Aug 2012 at 8:18

GoogleCodeExporter commented 8 years ago
The big trouble I've read between the lines is people not being able to read the
contents of folders they import through the S3 console. I've been using an easy
workaround for that problem that saves me a whole lot of time over creating
hundreds of directories through s3fs:
1. Create a folder in the AWS console.
2. Use the console to access that folder and upload your folders through the console.
3. Through s3fs, mkdir the just-created folder, and voila:
4. You are now looking at your complete created structure.
5. (optional) Move all folders up if need be.

Hope that helps a fair bit for you guys.

Original comment by dick.je...@gmail.com on 19 Sep 2012 at 7:43

GoogleCodeExporter commented 8 years ago
As in: "mv /mnt/bucket/subfolder/* .* /mnt/bucket"

Original comment by dick.je...@gmail.com on 19 Sep 2012 at 7:47

GoogleCodeExporter commented 8 years ago
In our case we mailed a hard drive to Amazon for the AWS Import service as 
there was too much data to feasibly upload. Amazon's import method, whichever 
they use, yielded data not visible to s3fs. Our plan was to prime the storage 
with the hard drive and then later rsync the incremental updates to keep it up 
to date from then on out.
We actually ran into several problems; I will outline them and their solutions
below:
1) No visible directories with s3fs
   a. Use s3fs-c (https://github.com/tongwang/s3fs-c); it can see directories created by Amazon
   b. If you must use s3fs, then a possibly better directory creation method is to use rsync to replicate the directory structure like this:
      rsync -rv --size-only -f"+ */" -f"- *" /source/structure/ /s3fs/mounted/structure
      We couldn't use mv because of the enormous amount of data which would have been required to transfer. Rsync is the gentlest folder creation method I could think of.
   Notes:
    - We opted for (a) for the time being. Keeping native with Amazon's method seemed like the best solution.
    - You should test this with dummy data and a dummy bucket to make sure it does what you want it to. Also it may create duplicate 0 byte files as folders because of how s3fs handles things. Again try it and see.

2) Amazon's import did not preserve timestamps
   This was rather frustrating as whatever copy method they used reset all timestamps to the date/time they were copied, thus negating one of rsync's sync options.
   a. The solution was to use --size-only on the rsync

3) s3fs was not allowing rsync to set mtimes on files/folders for us
   Despite commit messages indicating that this had been implemented, it was generating errors on our end.
   a. The solution was to use --size-only on the rsync
   Notes:
    - This ended up being a bit moot anyhow as Amazon reset all timestamps on import. It would have taken several million operations to reset all of the timestamps correctly via rsync anyway.

In summary, until s3fs implements Amazon's folder method, the best solution for
us was to use s3fs-c and --size-only on the rsyncs.

Good luck!

Original comment by g.duerrm...@gmail.com on 20 Sep 2012 at 3:09

GoogleCodeExporter commented 8 years ago
This appears to render folders created using s3fs unreadable with TntDrive, so
it's not possible to properly share files on S3 between Windows and Linux.

This thread (note the comment by Ivan) has the details.

https://forums.aws.amazon.com/thread.jspa?threadID=58468

Having the ability to create folders in the same way as TntDrive/AWS Console is
really important. Even an option to do this would be great.

Original comment by franc...@gmail.com on 24 Sep 2012 at 6:38

GoogleCodeExporter commented 8 years ago
I'm getting this with 1.62. Not cool.

Original comment by anacrolix@gmail.com on 24 Jan 2013 at 11:17

GoogleCodeExporter commented 8 years ago
I create the folders implicitly with cp - same problem here: they are invisible
due to an empty file which is not cleaned up. Using the latest version, 1.62.
Very not cool!

Original comment by iusgent...@gmail.com on 24 Feb 2013 at 11:35

GoogleCodeExporter commented 8 years ago
Ditto using 1.63. Am opening a new ticket.

Original comment by jukow...@gmail.com on 25 Feb 2013 at 5:57

GoogleCodeExporter commented 8 years ago
Hi, all

I will try to fix this issue. Please wait for the next, newer code.
The problem is that s3fs doesn't use CommonPrefixes when listing, and s3fs makes
its own directory object name.
Other S3 clients use CommonPrefixes and make the directory object as "dir/"
(s3fs makes "dir").

regards,

Original comment by ggta...@gmail.com on 27 Feb 2013 at 3:24

GoogleCodeExporter commented 8 years ago
This is a dupe of #27

http://code.google.com/p/s3fs/issues/detail?id=27

Original comment by me@evancarroll.com on 27 Feb 2013 at 10:04

GoogleCodeExporter commented 8 years ago
I uploaded a new version, v1.64, for this issue.
This new version has compatibility with other S3 clients.
Please review it.

regards,

Original comment by ggta...@gmail.com on 23 Mar 2013 at 2:43

GoogleCodeExporter commented 8 years ago
This fix looks good; however, I'm seeing an issue with permissions for files and
directories that were created via alternate clients (e.g. the AWS S3 console).

Basically, any folder *or* file created/uploaded via the S3 console has no 
read/write/execute permissions in the mounted file system. If you chmod the 
permissions of the file/folder within the mounted folder then it works fine 
thereafter.
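
A possible bulk version of that chmod workaround, run as root against the
mounted bucket (the mount point and modes here are only examples):

# find /mnt/s3 -type d -exec chmod 755 {} \;
# find /mnt/s3 -type f -exec chmod 644 {} \;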

Maybe this should be opened as a new issue?

Original comment by j...@ere.net on 26 Mar 2013 at 2:46

GoogleCodeExporter commented 8 years ago
Hello,

The problem (issue) that the object does not have any permissions
(read/write/execute) is known.
(I would prefer that this not be opened as a new issue.)

The reason is that a folder/file made by other S3 clients does not have any
"x-amz-meta-***" (mtime/mode/uid/gid) headers.
If the object does not have an "x-amz-meta-uid(gid)" header, s3fs uses 0 (zero)
as its value.
This value means owner=root(0) and group=root(0), so the object's owner/group is
root/root.
If the object does not have an "x-amz-meta-mtime" header, s3fs uses the
"Last-Modified" header instead.
So these three headers are no problem if they are not specified.

But an object without the "x-amz-meta-mode" header is a problem, because this
header is needed to decide the file/folder permission mode.
Mode 0 (0000) means no read/write/execute permission, so you see "----------" in
the "ls" output.
When the user is root (or has root authority), on most Unix systems you can
still run any command (e.g. rm/cat/etc.) against an object without this header
(mode=0000).
As an exception, a folder (directory) object without this header is displayed as
"d---------", because s3fs can tell that it is a directory.

Thinking about this issue, one possible solution is for s3fs to force a
"dr-x------"/"-r--------" mode onto these folder/file objects.
But I don't think we need to do that.
What do you think about this idea for objects with no mode header?

Regards

Original comment by ggta...@gmail.com on 27 Mar 2013 at 2:08

GoogleCodeExporter commented 8 years ago
I'm using Amazon's automatic deletion of objects. When I did it, my folders
created more than 30 days ago lost their Linux owner and group, and their
read/write/execute permissions (both were reset to root, UID=0).

Maybe this is because Amazon deleted the files s3fs uses to store this info?

I think I can bypass the problem by specifying the deletion rules in a more
precise way, specifying which files shouldn't be deleted. But I need to know
where s3fs stores these special "files" and what their names are.

Can anyone tell me that info?

Original comment by goo...@webcompanysas.com on 9 Jun 2015 at 11:30