yangljun / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Bad file descriptor #377

GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
Detailed description of observed behavior:

When you have too many folders in your S3 bucket, copying a file causes a
"Bad file descriptor" error.

What steps will reproduce the problem? Please be very specific and
detailed. (If the developers cannot reproduce the issue, it is
unlikely a fix will be found.)

Say your S3 bucket contains folders named Test1 and Test2. After creating
8,000 subdirectories in Test2, you can no longer copy a file into any of those
subdirectories, even though you can still upload files via the AWS console and
can still create files under Test1 and its subdirectories.
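
A minimal reproduction sketch along these lines (the bucket name, mount point,
and file paths here are placeholders, not taken from the original report):

/usr/local/bin/s3fs my-bucket /mnt/s3 -o rw,allow_other
mkdir -p /mnt/s3/Test1 /mnt/s3/Test2
for i in $(seq 1 8000); do mkdir /mnt/s3/Test2/dir_$i; done
cp /tmp/example.txt /mnt/s3/Test1/           # succeeds
cp /tmp/example.txt /mnt/s3/Test2/dir_7500/  # fails with "Bad file descriptor"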

===================================================================
The following information is very important in order for us to help you.
Omitting these details may delay your support request or cause it to
receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version):

1.70

Version of fuse being used (pkg-config --modversion fuse):

2.9.1

System information (uname -a):

Linux ip-172-31-22-145 3.4.45-amazon-xen

Distro (cat /etc/issue):

Gentoo

s3fs command line used (if applicable):

/usr/local/bin/s3fs backet_name /path/to/mount/point -o rw,allow_other,use_cache=/data/shared/s3-cache,default_acl=public-read,uid=1000,gid=1000

/etc/fstab entry (if applicable):

s3fs#backet_name /path/to/mount/point s3 fuse auto,rw,allow_other,use_cache=/data/shared/s3-cache,default_acl=public-read,uid=1000,gid=1000 0 0

s3fs syslog messages (grep s3fs /var/log/syslog):

Oct 17 09:32:31 ip-172-31-22-145 s3fs: 644###result=-2
Oct 17 09:32:31 ip-172-31-22-145 s3fs: 2791###result=-9
Oct 17 09:32:31 ip-172-31-22-145 s3fs: 2844###result=-9
Oct 17 09:32:31 ip-172-31-22-145 s3fs: 2872###result=-9

Original issue reported on code.google.com by yagitosh...@gmail.com on 17 Oct 2013 at 12:36

GoogleCodeExporter commented 9 years ago
Hi,

I need more detail about this error.
If you can, please run s3fs with the "-d" or "-f" option.
(Please take care: it produces a lot of log output.)

Also, since your folders contain many files, you should set the
max_stat_cache_size option to over 10000 (maybe 20000).
Note that a larger cache uses more memory, so please keep that in mind.
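
For example, a debug run with a larger stat cache might look like the following
(a sketch based on the command line reported above; the option values are
illustrative, not prescribed by the maintainer):

/usr/local/bin/s3fs backet_name /path/to/mount/point -d -o rw,allow_other,use_cache=/data/shared/s3-cache,default_acl=public-read,uid=1000,gid=1000,max_stat_cache_size=20000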

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 28 Oct 2013 at 9:56

GoogleCodeExporter commented 9 years ago
Hi,

This issue has been open for a long time, and the s3fs project has moved to
GitHub (https://github.com/s3fs-fuse/s3fs-fuse), so I am closing this issue.

If you still have the problem, please open a new issue there.

Regards,

Original comment by ggta...@gmail.com on 23 Dec 2013 at 3:25