As an update, it appears I can pull the reads a bit higher (approaching speeds
of boto), so I'm not worried about that.
However, write speed at 4-7 creates/s is far slower than what I get in my own
testing with boto (300 creates/s), so there appears to be an s3fs performance
issue here.
Deletions (explicit rm) with s3fs are also rather slow at 25 deletes/s, versus
boto, which can reach 1000+/s.
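For reference, a boto (2.x) baseline of the kind described above could be measured
with a short sketch along these lines; the bucket name, key prefix, and object count
are placeholders, and the resulting rates depend heavily on network latency, object
size, and whether requests are issued sequentially, concurrently, or in bulk:

    import time
    import boto

    N = 200                                      # number of small test objects
    conn = boto.connect_s3()                     # credentials from environment/boto config
    bucket = conn.get_bucket('my-test-bucket')   # placeholder bucket name

    # Measure sequential create throughput with tiny objects.
    start = time.time()
    for i in range(N):
        key = bucket.new_key('bench/obj-%d' % i)
        key.set_contents_from_string('x')
    print('creates/s: %.1f' % (N / (time.time() - start)))

    # Measure sequential delete throughput on the same keys.
    start = time.time()
    for i in range(N):
        bucket.delete_key('bench/obj-%d' % i)
    print('deletes/s: %.1f' % (N / (time.time() - start)))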
Original comment by usaa...@gmail.com
on 25 Jul 2013 at 7:18
Hi,
Before creating an object, the latest s3fs checks whether an object with the same
name already exists in the directory. So when s3fs creates many objects at once, it
sends many HEAD requests to S3.
If you specify the enable_noobj_cache option and s3fs has already listed the new
object's name before creating it, s3fs caches the fact that the file does not
exist, so that check can be answered from the cache instead of S3.
If these extra requests to S3 are the reason for the slowdown, I will need to think
about further performance tuning.
Please check with the debug option and let me know the result.
Thanks in advance for your help.
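As an example of the suggestion above, a mount command along the following lines
should enable the no-object cache and produce debug output; the bucket name and
mount point are placeholders, and the exact debug flags can differ between
s3fs/FUSE versions (-d and -f are the usual FUSE-level debug/foreground options,
with -f keeping s3fs in the foreground so the messages are visible on the terminal):

    s3fs mybucket /mnt/s3 -o enable_noobj_cache -d -f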
Original comment by ggta...@gmail.com
on 13 Aug 2013 at 2:25
Hi,
This issue has been open for a long time, and the s3fs project has moved to
GitHub (https://github.com/s3fs-fuse/s3fs-fuse), so I have closed this issue.
If you still have a problem, please open a new issue there.
Regards,
Original comment by ggta...@gmail.com
on 23 Dec 2013 at 3:12
Original issue reported on code.google.com by
usaa...@gmail.com
on 24 Jul 2013 at 2:28