Hi,
Thanks for reporting a bug.
This issue is a bug in s3fs, and it is fixed in r481 now.
If you can check out from trunk, please do so and test it.
If not, please wait for the next release.
Best Regards,
Original comment by ggta...@gmail.com
on 30 Aug 2013 at 2:29
Tested this morning with the latest from trunk (r481).
Same test, still broken.
Original comment by ned.wolp...@gmail.com
on 3 Sep 2013 at 5:22
Not sure how to re-open the issue... Is that possible, or should I submit a
new one?
Original comment by ned.wolp...@gmail.com
on 3 Sep 2013 at 5:24
I should note that the local cache option (use_cache) is not enabled in this
test.
Original comment by ned.wolp...@gmail.com
on 3 Sep 2013 at 10:28
I ran into a similar "inconsistency" problem, too.
If a file created on server A was accessed on server B, then any later changes
made to the file would not get synced to server B. If server A changed the file
before server B ever accessed it, the changes were visible on server B on the
first access, but after that the file content became sticky: further changes
made on A were not synced to B.
After adding the "stat_cache_expire=10" option, the "ls -l" file size became
consistent, but the file content was still not synced.
After also removing the "use_cache=/tmp" option, the file content was synced.
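For reference, the before-and-after mounts described above might look roughly like the following (the bucket name and mount point are placeholders, not from this report):

```shell
# Original mount: local file cache in /tmp plus the default stat cache
s3fs mybucket /mnt/s3 -o use_cache=/tmp

# Adjusted mount: 10-second stat cache expiry, no local file cache
s3fs mybucket /mnt/s3 -o stat_cache_expire=10
```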
My host configuration:
CentOS release 6.3 (Final)
s3fs 1.73
fuse 2.9.3
Linux <hostname> 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012
x86_64 x86_64 x86_64 GNU/Linux
Original comment by juiker...@gmail.com
on 4 Sep 2013 at 6:04
Not sure how to handle this, as I haven't found a workaround. Also, I'm not
sure whether fuse itself is caching some stats. I see the same 'size' behavior
regardless of whether 'use_cache' is enabled. I am also seeing 'use_cache'
cause a problem during updates: if the cache file is present, the s3fs process
won't see the changes at all. This didn't happen before, so I think some bugs
have been introduced.
Since this bug was closed and marked as fixed, I'm not sure if we can unmark
it... Should I file a new bug?
Original comment by ned.wolp...@gmail.com
on 4 Sep 2013 at 2:57
Well, I'm not sure if this is a 'workaround' or the correct solution to this
problem.
If we add the following option:
stat_cache_expire=10
then the use case I listed above works as expected after 10 seconds. For what
I'm doing, this may well be the correct approach; I'll just have to figure out
what to set that value to for our usage.
Should I assume this is the correct fix?
Original comment by ned.wolp...@gmail.com
on 5 Sep 2013 at 9:58
Hi, all
s3fs keeps a stat cache for objects, and also caches file descriptors for them.
This issue occurs when s3fs has both a stat cache entry and an fd cache entry
for the target object.
s3fs can update the stat cache when listing objects, but it has no opportunity
to update it when a client application opens the object directly.
So "stat_cache_expire=0" or "max_stat_cache_size=0" is correct for solving
this issue.
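Either of those options would be passed at mount time, roughly like this (the bucket name and mount point are placeholders):

```shell
# Expire stat cache entries immediately
s3fs mybucket /mnt/s3 -o stat_cache_expire=0

# Or keep no stat cache entries at all
s3fs mybucket /mnt/s3 -o max_stat_cache_size=0
```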
But I think that s3fs should check the stat of the object before opening it.
Thus I have re-opened this issue and started changing the code so that s3fs
checks the stat before opening.
Please wait a moment.
Best Regards,
Original comment by ggta...@gmail.com
on 17 Sep 2013 at 4:49
I committed r485, which fixes this problem.
Please check out the new revision and verify it.
Thanks in advance for your help.
(I have closed this issue; if you find another problem, please let me know.)
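The check-before-open behavior described in the fix could be modeled roughly like this. This is a toy sketch for illustration only, not the s3fs source: it keeps a stat cache keyed by path and, before reusing a cached entry, re-checks the remote object (standing in for an S3 HEAD request) and refreshes the entry if the object changed or the entry expired.

```python
import time

class StatEntry:
    """A cached stat entry: the object's version tag and when it was cached."""
    def __init__(self, etag, cached_at):
        self.etag = etag
        self.cached_at = cached_at

class ToyS3FS:
    """Toy model of the stat cache; `remote` is a dict standing in for S3."""
    def __init__(self, remote, stat_cache_expire=900):
        self.remote = remote                        # path -> etag
        self.stat_cache = {}                        # path -> StatEntry
        self.stat_cache_expire = stat_cache_expire  # seconds

    def _head(self, path):
        # Stands in for an S3 HEAD request returning the current etag.
        return self.remote[path]

    def open(self, path):
        entry = self.stat_cache.get(path)
        now = time.time()
        expired = entry is None or now - entry.cached_at > self.stat_cache_expire
        # The fix: even with a live cache entry, re-check the remote stat
        # before opening, and refresh the entry if the object changed.
        if expired or self._head(path) != entry.etag:
            entry = StatEntry(self._head(path), now)
            self.stat_cache[path] = entry
        return entry.etag   # the caller would open an fd for this version
```

With this check, a change made on another server is picked up on the next open even though the local stat cache entry has not expired.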
Original comment by ggta...@gmail.com
on 17 Sep 2013 at 5:18
Original issue reported on code.google.com by
ned.wolp...@gmail.com
on 29 Aug 2013 at 11:30