Seconded. So far in my S3 testing, JungleDisk (in compatibility mode) and S3Fox use the same directory structures and can read each other's buckets without a problem.
s3fs seems to use a different directory structure that is not compatible with either.
Original comment by coolbutu...@gmail.com
on 2 Jul 2008 at 12:36
In my bucket there are 5 folders at the root.
ls shows one of the folders with "_$folder$" attached to the end of its name, and I cannot cd into it.
In other buckets I see more of the files, but the folders all have the "_$folder$" suffix and seem to be impossible to open.
Original comment by grigbil...@gmail.com
on 29 Dec 2008 at 6:01
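The entries described above are easiest to see by listing the raw keys. Below is only a minimal illustration using boto3 and a hypothetical bucket name; it is not s3fs code (s3fs itself is C++).

import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="my-example-bucket")  # hypothetical bucket
for obj in resp.get("Contents", []):
    key = obj["Key"]
    if key.endswith("_$folder$"):
        # S3Fox/JungleDisk-style folder marker, e.g. "photos_$folder$"
        print("folder marker:", key[: -len("_$folder$")])
    else:
        print("object:", key)

Tools that do not know about the "_$folder$" convention show these marker keys as plain zero-byte files, which matches the behavior reported here.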
I also need this
Original comment by kkaza...@gmail.com
on 18 Jan 2009 at 12:58
[deleted comment]
[deleted comment]
Yes, this is very important to us. We have many people adding files using S3Fox, and a Java server running on EC2 which cannot recognize those files.
Original comment by gja...@gmail.com
on 7 Aug 2009 at 9:42
I second this. This seems to be the only functional S3 filesystem that can interact with JungleDisk or S3 Organizer, except for the folders.
It would be great if you could include this!
Original comment by matthias...@gmail.com
on 8 Feb 2010 at 10:12
Hi, I agree with the other people: this is a very important feature. Can you put it at high priority? Regards from Argentina.
Original comment by cdgr...@gmail.com
on 16 Feb 2010 at 3:24
Count my vote too; I definitely want to see a compatibility mode that uses the same directory-node representation as other common tools. Buckets I've populated with s3sync.rb are unreadable by s3fs, which is unfortunate.
Original comment by Akkar...@gmail.com
on 20 Jul 2010 at 6:27
I have created a patch for s3fs to recognize and use S3Fox Organizer-style folders. This adds overhead to getattr calls, as it makes two requests to get a file's attributes (getattr is used extensively by FUSE), so expect degradation for directories with a large number of files. Use with caution: this is incompatible with the main branch.
Use at your own risk. I have yet to use it for an extended period.
Original comment by shi...@rentoys.in
on 3 Oct 2010 at 6:30
Attachments:
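This is not the attached patch itself, just a rough sketch of the two-lookup approach described above, written in Python with boto3 and hypothetical bucket/key names (s3fs itself is C++). Each lookup first tries the plain key and then falls back to the "_$folder$" marker, which is why the number of HEAD requests roughly doubles.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def head_or_none(bucket, key):
    # Return the object's metadata, or None if the key does not exist.
    try:
        return s3.head_object(Bucket=bucket, Key=key)
    except ClientError as e:
        if e.response["Error"]["Code"] in ("404", "NoSuchKey"):
            return None
        raise

def stat_path(bucket, path):
    # First try the key as s3fs itself would store it ...
    meta = head_or_none(bucket, path)
    if meta is not None:
        return meta
    # ... then fall back to the S3Fox-style "_$folder$" marker,
    # doubling the HEAD requests in the worst case.
    return head_or_none(bucket, path.rstrip("/") + "_$folder$")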
Original comment by dmoore4...@gmail.com
on 7 Apr 2011 at 2:25
It's the same for me. If I create a folder with the AWS Management Console, that folder and everything inside it is not present in the s3fs mount point.
And if I manually create the same folder inside the mount point, it first creates a zero-byte file with the name of the folder.
Only when I create a file inside that folder can I see all the files (both those created through the s3fs mount point and those created with the AWS console).
Original comment by fabio.ce...@vmengine.net
on 28 May 2011 at 7:45
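For reference, here is a minimal sketch of the folder convention the AWS console and most other tools use: a zero-byte object whose key ends with "/". The bucket and folder names below are hypothetical, and this is plain boto3, not s3fs code; s3fs at the time stored its own directory object instead, which is why the two sides could not see each other's folders.

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical

# What the AWS Management Console effectively creates for a new folder:
# a zero-byte object whose key ends with "/".
s3.put_object(Bucket=bucket, Key="reports/", Body=b"")

# Listing with a delimiter then surfaces it as a common prefix ("folder"):
resp = s3.list_objects_v2(Bucket=bucket, Delimiter="/")
for cp in resp.get("CommonPrefixes", []):
    print("folder:", cp["Prefix"])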
Bump, bump, bump...
Please make it compatible!!
Original comment by services...@gmail.com
on 28 May 2011 at 9:34
Don't hold your breath; this isn't something that will be addressed anytime soon by the current set of developers.
Original comment by dmoore4...@gmail.com
on 29 May 2011 at 2:57
Issue 183 has been merged into this issue.
Original comment by dmoore4...@gmail.com
on 29 May 2011 at 10:06
Three years and no support. We're deprecating s3fs as a usable technology in our company for this very reason: the lack of compatibility with other tools. S3Fox still shows folders created via the AWS Management Console, and the AWS console still reads S3Fox-created folders, even if it annoyingly displays the empty 0-byte file in addition; but the fact that this tool cannot recognize those folders at all is unfortunate. I would love to see an update for this. It seems like a minor enhancement, but then again, none of us have done it either. I think I'll just sync with S3 using a cron job and the Amazon S3 API tools instead.
Original comment by SandowMe...@gmail.com
on 3 Aug 2011 at 10:24
Check out the s3fs fork that is compatible with the AWS Management Console, s3cmd, and other S3 tools at https://github.com/tongwang/s3fs-c
Original comment by emma.wu....@gmail.com
on 23 Aug 2011 at 12:30
This is a pretty big issue for a lot of people. Not being able to use the s3
mgmt console really sucks.
Original comment by me@evancarroll.com
on 17 Apr 2013 at 3:44
Please see Issue 73, which claims s3fs version 1.64 has been updated to be
compatible with other S3 clients.
Original comment by nicho...@ifactorconsulting.com
on 17 Apr 2013 at 5:57
Hi,
nicholas, thanks.
But s3fs after v1.64 still does not support S3Fox; these clients create objects with the "_$folder$" suffix.
I know that this is a big issue, and I will try to support it.
Regards,
Original comment by ggta...@gmail.com
on 18 Apr 2013 at 12:27
Hi all,
I updated the code in r413, which fixes handling of "_$folder$" directory objects.
You can see details about r413 (http://code.google.com/p/s3fs/source/detail?r=413).
Please try it and check.
Regards,
Original comment by ggta...@gmail.com
on 20 Apr 2013 at 7:24
Hi all,
I updated the code in r414.
r414 changes the handling for this issue and for directories that have no directory object at all.
You can see details about r414.
Please try it and check.
Regards,
Original comment by ggta...@gmail.com
on 29 Apr 2013 at 2:34
The new version, v1.68, supports directory objects created by other S3 clients.
I have closed this issue.
If you find the same issue again, please let me know.
Regards,
Original comment by ggta...@gmail.com
on 1 May 2013 at 4:23
I read this in the build notes:
"After changing the object attributes, the object name does not have the "_$folder$" suffix. It means the object is remade by s3fs."
The *ONLY* AWS S3 client that uses this convention is s3fs. It's a dead convention, and I wouldn't want anything to revert to this format. The AWS Management Console was the last nail in the coffin: it went from "no spec" on how to implement this functionality to "spec by implementation". Bucking the trend isn't an option, and any "reverting" to a different format should itself be considered a bug.
Original comment by me@evancarroll.com
on 2 May 2013 at 5:27
Hi,
Please let me confirm this issue.
The new s3fs supports "_$folder$" objects and directories that have no object of their own but contain files and sub-directories.
1) The new s3fs can read and recognize these directories.
Is this function the problem?
2) When the user changes a file's (or directory's) attributes (owner, group, permission, time), the new s3fs converts the object type ("_$folder$" or non-existent object) into the type that s3fs usually makes.
Do you mean that s3fs should not change the object?
I think case 1 is not a problem, because s3fs does not change the object (or its type).
Case 2 changes the object and its type (name).
Both "_$folder$" objects and non-existent objects lack the file attributes that are defined in HTTP headers (like "x-amz-meta-***").
But when the user wants to set attributes on the object, s3fs changes (adds) those HTTP headers.
In case 2, the problems fall into two kinds:
case 2-1) non-existent object
s3fs must make a new object, so it makes the directory object that s3fs usually uses.
case 2-2) "_$folder$" object
This case was a delicate one.
But I think there is no problem with changing the object from the "_$folder$" type to the normal type.
(e.g. a "dir_$folder$" object is changed to a "dir" object)
This is because the user touches the directory (object) through s3fs (and FUSE), and other S3 clients can recognize the normal type.
As noted above, I decided that the new version changes the directory object only in these special cases.
Please let me know your thoughts.
Thanks in advance.
Original comment by ggta...@gmail.com
on 8 May 2013 at 7:37
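To make the cases above concrete, here is a rough sketch (boto3, hypothetical bucket/path names, not s3fs code) that classifies a path into the three directory representations discussed: a normal directory object, a "_$folder$" marker, or an implicit directory with no object of its own.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def exists(bucket, key):
    # True if a HEAD request for the key succeeds.
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError:
        return False

def classify_dir(bucket, path):
    name = path.rstrip("/")
    # Case 1: a directory object of its own ("dir" or "dir/").
    if exists(bucket, name) or exists(bucket, name + "/"):
        return "normal directory object"
    # Case 2-2: an S3Fox-style marker object ("dir_$folder$").
    if exists(bucket, name + "_$folder$"):
        return "_$folder$ marker object"
    # Case 2-1: no object at all, but other keys live under "dir/".
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=name + "/", MaxKeys=1)
    if resp.get("KeyCount", 0) > 0:
        return "implicit directory (no object of its own)"
    return "not a directory"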
Original issue reported on code.google.com by
pant...@gmail.com
on 15 Apr 2008 at 6:23