yangljun / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

mkdir creating gigantic file #208

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Detailed description of observed behavior:

All I did was do a simple mount to a folder like so:

s3fs my-bucket /mnt/s3/my-bucket

Works.

Then I 'cd /mnt/s3/my-bucket' and 'mkdir somedir'

Now in '/mnt/s3/my-bucket/somedir' there is a file with no permissions that 
reports being 18446744073709551615 bytes and was created Dec 31 1969 when I 
'ls -lA':

total 1
---------- 1 root root 18446744073709551615 Dec 31  1969 mysql
-rw-r--r-- 1 root root                    0 Jul 15 19:07 test

Holy schmoly! That's 18 exabytes!!!

Also, when I use CloudBerry, this wild file doesn't show up.

This is obviously some crazy error.

===================================================================
Version of s3fs being used (s3fs --version): 1.57

Version of fuse being used (pkg-config --modversion fuse): 2.8.4

System information (uname -a): Linux ip-10-245-78-5 2.6.32-5-xen-amd64 #1 SMP Wed Jan 12 05:46:49 UTC 2011 x86_64 GNU/Linux

Distro (cat /etc/issue): Debian GNU/Linux 6.0 \n \l

s3fs command line used (if applicable): See above

/etc/fstab entry (if applicable): NA

s3fs syslog messages (grep s3fs /var/log/syslog):

This is in syslog after remounting with the -d flag and then doing 'ls -lA' 
on said directory:

Jul 15 10:54:35 ip-10-10-10-10 s3fs: init $Rev: 352 $
Jul 15 12:29:01 ip-10-10-10-10 s3fs: init $Rev: 352 $
Jul 15 19:29:39 ip-10-10-10-10 s3fs: init $Rev: 352 $
Jul 15 19:50:02 ip-10-10-10-10 s3fs: curlCode: 0   msg: No error
Jul 15 19:50:02 ip-10-10-10-10 s3fs: responseCode: 200
Jul 15 19:50:02 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket?location
Jul 15 19:50:02 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com?location
Jul 15 19:50:02 ip-10-10-10-10 s3fs: curlCode: 0   msg: No error
Jul 15 19:50:02 ip-10-10-10-10 s3fs: responseCode: 200
Jul 15 19:50:02 ip-10-10-10-10 s3fs: init $Rev: 352 $
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket?delimiter=/&prefix=&max-keys=1000
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com?delimiter=/&prefix=&max-keys=1000
Jul 15 19:50:07 ip-10-10-10-10 s3fs: connecting to URL http://my-bucket.s3.amazonaws.com?delimiter=/&prefix=&max-keys=1000
Jul 15 19:50:07 ip-10-10-10-10 s3fs: HTTP response code 200
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket/mysql
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com/mysql
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket/dev_backup
Jul 15 19:50:07 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com/dev_backup
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket?delimiter=/&prefix=mysql/&max-keys=1000
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com?delimiter=/&prefix=mysql/&max-keys=1000
Jul 15 19:50:10 ip-10-10-10-10 s3fs: connecting to URL http://my-bucket.s3.amazonaws.com?delimiter=/&prefix=mysql/&max-keys=1000
Jul 15 19:50:10 ip-10-10-10-10 s3fs: HTTP response code 200
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket/mysql/test
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com/mysql/test
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL is http://s3.amazonaws.com/my-bucket/mysql/mysql
Jul 15 19:50:10 ip-10-10-10-10 s3fs: URL changed is http://my-bucket.s3.amazonaws.com/mysql/mysql

Original issue reported on code.google.com by shrimpwa...@gmail.com on 15 Jul 2011 at 11:57

GoogleCodeExporter commented 9 years ago
Cannot duplicate.

 > s3fs misc.suncup.org misc.suncup.org
 > cd misc.suncup.org/
 > mkdir somedir
 > ls -lA somedir
total 0
 > cd somedir
 > ls -lA
total 0

> s3fs --version
Amazon Simple Storage Service File System 1.57

So where did these two files "mysql" and "test" come from?  Were these created 
by some other S3 client?

Original comment by dmoore4...@gmail.com on 17 Jul 2011 at 8:28

GoogleCodeExporter commented 9 years ago
To answer your first question: I don't know. As for your second question: no, 
they were not created by any other client; I was just looking at them in 
CloudBerry. If I were to take a wild guess, I'd assume the problem is in how 
it lists the bucket objects. I'm pretty sure this file doesn't really exist, 
or it would show up in CloudBerry too - at least it should.

I failed to mention that I am mounting with the 'allow_other' option.

I will see if I can provide a more precise way of duplicating the error. I too 
am a programmer, and I know it is frustrating when there is an error you can't 
reproduce.

Thank you so much,

Shawn

Original comment by shrimpwa...@gmail.com on 17 Jul 2011 at 10:26

GoogleCodeExporter commented 9 years ago
OH snap! I know what I did!!!

The reason I came across this problem was that the folders I was creating in 
another client, CloudBerry, were not showing up in the s3fs mount.

To reproduce the error, do the following...

Using CloudBerry (or possibly another S3 client), create a folder in a bucket. 
Then mount that bucket with s3fs using the same S3 credentials and 'mkdir' the 
same folder name.

I did this because I needed to get the existing contents to show up.

Why would the existing contents not show up? If it is because I didn't use s3fs 
to originally create those objects, how can I then get them to show up?

Thanks so much!

Shawn

Original comment by shrimpwa...@gmail.com on 18 Jul 2011 at 2:54

GoogleCodeExporter commented 9 years ago
Let me clarify the above...sorry.

CloudBerry: http://cloudberrylab.com/?page=cloudberry-explorer-amazon-s3

In CloudBerry, create a bucket or use an existing one (my-bucket), and create 
a new folder (new-folder) in it.

Mount the bucket using 's3fs my-bucket /mnt/mountpoint' with no options and 'cd 
/mnt/mountpoint' into it. Then 'mkdir new-folder', 'cd new-folder', and then 
'ls -la'. You should see a file 'new-folder' that is 18.4 exabytes!
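
Condensed into commands (same placeholder names as above; expected output as 
reported):

 > s3fs my-bucket /mnt/mountpoint   # bucket already has 'new-folder' from CloudBerry
 > cd /mnt/mountpoint
 > mkdir new-folder
 > cd new-folder
 > ls -la
total 1
---------- 1 root root 18446744073709551615 Dec 31  1969 new-folder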

I seriously doubt this is a CloudBerry problem; I don't see how it could be. I 
realize this is a 3rd-party tool and you have nothing to do with it, but from 
a programmer's perspective I don't see how CloudBerry is doing anything wrong. 
It looks like something in s3fs is breaking because of the 'mkdir' on an 
already existing folder object.

Thanks so much.

Shawn

Original comment by shrimpwa...@gmail.com on 18 Jul 2011 at 3:10

GoogleCodeExporter commented 9 years ago
s3fs is not compatible with files created by other S3 clients, and vice versa. 
This is in the FAQ. ...and issue #27 is an enhancement request to implement 
such compatibility (sorry, it probably will not get implemented by the current 
set of developers).

The incompatibility has to do with how directories are represented and how file 
attributes are stored in the object's metadata.
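
As a sketch of what that means (an assumption about this era of s3fs, not 
verified against the code): s3fs stores a directory as a zero-byte object 
named 'dir', keeping its mode, mtime, and ownership in x-amz-meta-* headers, 
while clients like CloudBerry create a zero-byte marker key 'dir/' with a 
trailing slash and none of those headers. Listing the raw keys with a generic 
tool such as s3cmd would then show two objects for one logical folder 
(timestamps illustrative):

 > s3cmd ls s3://my-bucket
2011-07-18 14:54         0   s3://my-bucket/new-folder/    <- CloudBerry's folder marker
2011-07-18 14:55         0   s3://my-bucket/new-folder     <- s3fs's directory object

When s3fs stats the marker entry it finds none of its metadata headers, which 
would explain the listing in the report: 18446744073709551615 bytes is 
2^64 - 1, i.e. a size of -1 read as an unsigned 64-bit integer, and 
Dec 31 1969 is a Unix timestamp at or before zero.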

Translating an existing bucket's contents to make it compatible with s3fs can 
be done, but it is most likely a tedious process.
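
One tedious-but-mechanical way to attempt that translation (an untested 
sketch; it assumes the other client leaves 'dir/' marker keys and that s3cmd 
is installed - neither detail comes from this thread): re-create each folder 
through the s3fs mount so it gains s3fs metadata, then delete the foreign 
marker:

 > s3cmd ls -r s3://my-bucket | awk '$NF ~ /\/$/ { print $NF }' | while read marker
 > do
 >     dir=${marker#s3://my-bucket/}        # strip bucket prefix, e.g. 'new-folder/'
 >     mkdir -p "/mnt/mountpoint/${dir%/}"  # s3fs writes its own directory object
 >     s3cmd del "$marker"                  # remove the old marker key
 > done

Files created by the other client would still lack s3fs's permission metadata, 
so even after this the bucket may not behave entirely like a native s3fs 
bucket.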

If you choose to use s3fs for a bucket, start with s3fs and stick with it for 
that bucket.

Original comment by dmoore4...@gmail.com on 18 Jul 2011 at 5:50

GoogleCodeExporter commented 9 years ago
Now wait a sec...

I hear ya. OK, I get it. For directories, s3fs has to create metadata so it 
can work with them. Well, if a directory has already been created by another 
client, why can't 'mkdir' serve to create just the s3fs metadata - which it 
actually does do? It seems to almost work, aside from reporting the huge file. 
It may not be a clean solution, but it should work OK.

Basically, right now it does "work" minus the reporting of the strange file. If 
it just didn't do that, everything would be fine.

Thanks so much!

Shawn

Original comment by shrimpwa...@gmail.com on 18 Jul 2011 at 6:51