yangljun / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Slow reading of files even when cached with getimagesize() in PHP #409

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
I'm using s3fs as the file system mount for a website that uses a WYSIWYG 
editor with the IMCE browser plugin.

The file browser is very slow at reading the images, mainly because it needs to call 
getimagesize() on each file to retrieve the width/height, etc.

To eliminate all other factors, I created a simple script that reads a directory 
in the S3 bucket and runs getimagesize() on every file in it.
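
A minimal sketch of such a script, for reference (the directory path here is just an example, not the real one):

<?php
// Sketch of the benchmark described above; the mount path is an
// assumed example, not the original script.
$dir = '/mnt/bucket_name/images';

$start = microtime(true);
$count = 0;
foreach (glob($dir . '/*') ?: [] as $file) {
    if (!is_file($file)) {
        continue;
    }
    // getimagesize() opens and reads each file, which is what forces
    // s3fs to fetch (or re-validate) the object from S3 on every call.
    if (getimagesize($file) !== false) {
        $count++;
    }
}
printf("Read %d images in %.1f seconds\n", $count, microtime(true) - $start);

Pointing $dir at the local cache directory (e.g. under /tmp/s3fs) instead of the mount gives the fast baseline mentioned below.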

The directory has 1042 files, and the script takes just over a minute to run 
initially (as it's creating the temporary local copies, I assume), then about 20 
seconds thereafter.

If I run the script pointing directly at the tmp cache folder, it's lightning 
fast, which is how I would have expected it to run against the mount once all 
the images had been downloaded and cached locally.

Any advice on this would be great.

I'm using the following to mount:
sudo /usr/bin/s3fs bucket_name /mnt/bucket_name -o allow_other,default_acl=public-read-write,uid=33,gid=33,use_cache=/tmp/s3fs,max_stat_cache_size=100000

(uid/gid 33 = www-data)

Cheers,
Jarrod.

Original issue reported on code.google.com by jar...@headfirst.co.nz on 3 Feb 2014 at 4:36

GoogleCodeExporter commented 9 years ago
I had the same problem. It is caused by the stat cache entry for a given file 
being cleared before the file is opened (done to solve Issue 368). As a 
workaround, I commented out the stat cache item deletion (see r485), recompiled 
the package, and remounted the bucket. Maybe clearing the stat cache before 
opening files could be made configurable? Of course, this might introduce 
inconsistency issues, but there are scenarios where file updates are not so 
common (mostly create/delete operations).

Original comment by arvids.g...@gmail.com on 30 Sep 2014 at 3:35

GoogleCodeExporter commented 9 years ago
Hi

I'm sorry for the late reply.

Even if you edit files often, s3fs keeps the stat cache entries for all files 
(as long as your stat cache size is large enough), so I think your editor 
should be fast from the second run onward.

If you still have this issue, please post a new issue on 
GitHub (https://github.com/s3fs-fuse/s3fs-fuse), because we have moved the 
s3fs project to GitHub, and please use the latest version.

I'm going to close this issue.
Please see s3fs-fuse on GitHub.

Original comment by ggta...@gmail.com on 7 Feb 2015 at 3:35