Cannot duplicate.
> s3fs misc.suncup.org misc.suncup.org
> cd misc.suncup.org/
> mkdir somedir
> ls -lA somedir
total 0
> cd somedir
> ls -lA
total 0
> s3fs --version
Amazon Simple Storage Service File System 1.57
So where did these two files "mysql" and "test" come from? Were these created
by some other S3 client?
Original comment by dmoore4...@gmail.com
on 17 Jul 2011 at 8:28
To answer your first question: I don't know. As for your second: no, they
were not created by any other client. I was just looking at them in CloudBerry. If I
were to take a wild guess, I'd say the problem is in how it lists the bucket
objects. I'm pretty sure this file doesn't really exist, or it would show up in
CloudBerry too - at least it should.
I failed to mention that I am mounting with the 'allow_other' option.
I will see if I can provide a more precise way of reproducing the error. I am
a programmer too, and I know how frustrating it is when there is an error you
can't reproduce.
Thank you so much,
Shawn
Original comment by shrimpwa...@gmail.com
on 17 Jul 2011 at 10:26
OH snap! I know what I did!!!
The reason I came across this problem was that the folders I was creating in
another client, CloudBerry, were not showing up.
To reproduce the error, do the following...
Use CloudBerry (or possibly another S3 client) to create a folder in a bucket.
Then mount that bucket with s3fs using the same S3 credentials and 'mkdir' the
same folder name.
I did this because I needed to get the existing contents to show up.
Why would the existing contents not show up? If it is because I didn't use s3fs
to originally create those objects, how can I then get them to show up?
Thanks so much!
Shawn
Original comment by shrimpwa...@gmail.com
on 18 Jul 2011 at 2:54
Let me clarify the above...sorry.
CloudBerry: http://cloudberrylab.com/?page=cloudberry-explorer-amazon-s3
In CloudBerry, create a bucket or use an existing one (my-bucket), create a new
folder (new-folder).
Mount the bucket using 's3fs my-bucket /mnt/mountpoint' with no options and 'cd
/mnt/mountpoint' into it. Then 'mkdir new-folder', 'cd new-folder', and
'ls -la'. You should see a file 'new-folder' that is 18.4 exabytes!
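In transcript form it looks something like this (the listing is a sketch; the
exact owner, dates, and fields will vary):

> s3fs my-bucket /mnt/mountpoint
> cd /mnt/mountpoint
> mkdir new-folder
> cd new-folder
> ls -la
total 1
drwxr-xr-x 1 root root                    0 Jul 18 15:10 .
drwxr-xr-x 1 root root                    0 Jul 18 15:10 ..
-rw-r--r-- 1 root root 18446744073709551615 Jul 18 15:10 new-folder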
I seriously doubt this is a CloudBerry problem; I don't see how it could be. I
realize this is third-party software and you have nothing to do with it, but from a
programmer's perspective I don't see how CloudBerry is doing anything wrong.
It looks like something in s3fs is breaking because of the 'mkdir' on an already
existing folder object.
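One more data point: 18.4 exabytes is suspiciously close to 2^64 bytes. My
guess (and it is only a guess) is that the entry's size is coming back as -1
somewhere and getting printed as an unsigned 64-bit number:

> printf '%u\n' -1
18446744073709551615

That's 2^64 - 1, which is the 18.4 exabytes ls reports.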
Thanks so much.
Shawn
Original comment by shrimpwa...@gmail.com
on 18 Jul 2011 at 3:10
s3fs is not compatible with files created by other S3 clients, and vice versa.
This is in the FAQ. ...and issue #27 is an enhancement request to implement
this (sorry, it probably will not get implemented by the current set of
developers).
The incompatibility has to do with how directories are represented and how file
attributes are stored in the object's metadata.
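To illustrate (from memory, so the details may not be exact for every client or
s3fs version): most GUI clients, CloudBerry included, mark a folder with a
zero-byte object whose key ends in a slash, whereas s3fs creates an object
without the trailing slash and keeps the POSIX attributes in its metadata:

  new-folder/   <- zero-byte "folder marker", CloudBerry style
  new-folder    <- s3fs style: Content-Type application/x-directory, plus
                   x-amz-meta-mode, x-amz-meta-uid, x-amz-meta-gid, x-amz-meta-mtime

After your 'mkdir', both keys exist side by side; when s3fs lists the bucket it
runs into the foreign 'new-folder/' marker it never created and, presumably,
mis-parses it into that phantom entry.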
Translating an existing bucket's contents to make it compatible with s3fs can
be done, but it is most likely a tedious process.
If you choose to use s3fs for a bucket, start with s3fs and stick with it for
that bucket.
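If you did want to translate a bucket, the idea would be to write, for each
folder the other client made, a directory object the way s3fs expects it.
Something along these lines (a sketch using the aws CLI for brevity; the
metadata keys and values - 16877 is octal 040755, i.e. drwxr-xr-x - are only
illustrative of what s3fs 1.x expects):

> aws s3api put-object --bucket my-bucket --key new-folder --content-type application/x-directory --metadata mode=16877,uid=1000,gid=1000,mtime=1310947200

...and you would probably also want to remove the foreign 'new-folder/' marker
so the two entries don't collide. Doing that for every folder, and for every
file's attributes, is the tedious part.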
Original comment by dmoore4...@gmail.com
on 18 Jul 2011 at 5:50
Now wait a sec...
I hear ya. OK, I get it. For directories, s3fs has to create metadata so it
can work with said directories. Well, if a directory has already been created
by another client, why can't 'mkdir' work for just creating the metadata for
s3fs - which it actually does do? It seems to almost work, aside from reporting the
huge file. It may not be a clean solution, but it should work OK.
Basically, right now it does "work" minus the reporting of the strange file. If
it just didn't do that, everything would be fine.
Thanks so much!
Shawn
Original comment by shrimpwa...@gmail.com
on 18 Jul 2011 at 6:51
Original issue reported on code.google.com by shrimpwa...@gmail.com
on 15 Jul 2011 at 11:57