Reproducible and unintended behavior. Good catch. Will investigate a fix soon.
Original comment by dmoore4...@gmail.com
on 16 Jan 2011 at 5:16
Thank you for your quick reply. Not trying to push anything, but do you have
any ETA on a fix? This is blocking my current work; even a ballpark estimate
would help me update my planning ;)
Original comment by ptrm...@gmail.com
on 17 Jan 2011 at 9:29
This is kind of strange. When this was first reported, I tried it out on
my Ubuntu (10.10) laptop and it was reproducible. My main development
platform is Debian (sid), and the issue cannot be reproduced there:
% echo ABCDEF > test ; echo XYZ > test ; cat test
XYZ
Acts just like it should.
What platform are you using?
Original comment by dmoore4...@gmail.com
on 17 Jan 2011 at 3:07
I'm running Amazon Linux on EC2.
Original comment by ptrm...@gmail.com
on 17 Jan 2011 at 4:40
On my platforms, Debian does not exhibit the issue whereas Ubuntu does. Both
are running fuse 2.8.4.
Below is debugging info. Upon the "echo XYZ > test", the commands from fuse
arrive in a different order. Apparently, as a result, the final flush uploads
the incorrect data.
It's pretty much a reversal of the "truncate" and "open" commands from fuse.
s3fs does not control the order in which these commands come from fuse; it
only responds to them.
At this point in time, this looks like a fuse issue; at the least, some
explanation is needed.
It's possible that the first "getattr" is returning some info that is causing
fuse to act this way. ...more investigation is needed.
Debian (sid):

% echo ABCDEF > test
s3fs_getattr[path=/test]
s3fs_create[path=/test][mode=33188][flags=33345]
create_file_object[path=/test][mode=33188]
get_local_fd[path=/test]
downloading[path=/test][fd=5]
s3fs_getattr[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=0]
s3fs_write[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=7]
s3fs_release[path=/test][fd=5]
% echo XYZ > test
s3fs_getattr[path=/test]
truncate[path=/test][size=0]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=0]
s3fs_getattr[path=/test]
open[path=/test][flags=32769]
get_local_fd[path=/test]
downloading[path=/test][fd=5]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=0]
s3fs_write[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=4]
s3fs_release[path=/test][fd=5]

Ubuntu (10.10):

% echo ABCDEF > test
s3fs_getattr[path=/test]
s3fs_create[path=/test][mode=33188][flags=33345]
create_file_object[path=/test][mode=33188]
get_local_fd[path=/test]
downloading[path=/test][fd=5]
s3fs_getattr[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=0]
s3fs_write[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=7]
s3fs_release[path=/test][fd=5]
% echo XYZ > test
s3fs_getattr[path=/test]
open[path=/test][flags=32769]
get_local_fd[path=/test]
downloading[path=/test][fd=5]
truncate[path=/test][size=0]
put_local_fd[path=/test][fd=6]
uploading[path=/test][fd=6][size=0]
s3fs_getattr[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=7]
s3fs_write[path=/test]
s3fs_flush[path=/test][fd=5]
put_local_fd[path=/test][fd=5]
uploading[path=/test][fd=5][size=7]
s3fs_release[path=/test][fd=5]
Original comment by dmoore4...@gmail.com
on 18 Jan 2011 at 4:29
So here's a workaround that will probably work: if the file exists before you
redirect output to it, just delete it, e.g.:
if [ -e test ]; then
rm test
fi
I do want to caution folks about their use models of s3fs. Although from a
high level view, it looks like a normal file system, it doesn't always act that
way -- here's another example.
s3fs works very well as a conduit to a storage system (which is what S3 is
intended to be). Using it as a "dynamic" file system is asking for trouble, in
my opinion. Personally, I would never have stumbled upon this issue, since
99+% of my access is through rsync.
Original comment by dmoore4...@gmail.com
on 19 Jan 2011 at 12:19
OK, I've just about got it figured out. This apparently has to do with the
"atomic_o_trunc" capability of the kernel. The open command can be called
with a flag set that tells the function that the file should be truncated.
This doesn't happen by default for some reason, but if I start fuse with this
option, then the flag gets set. (I think there is a way to recognize this
capability on the fly, so the need for the option would go away.)
Anyway, by recognizing the flag in s3fs_open() and, if it is set, calling
s3fs_truncate(), the problem goes away:
% rm misc.suncup.org/test
% echo ABCDEF > misc.suncup.org/test
% echo XYZ > misc.suncup.org/test
% cat misc.suncup.org/test
XYZ
% cat /etc/issue
Ubuntu 10.10 \n \l
Original comment by dmoore4...@gmail.com
on 19 Jan 2011 at 4:29
Try this patch file and see if it fixes your problem.
Original comment by dmoore4...@gmail.com
on 19 Jan 2011 at 4:55
Thanks for the patch, the behavior is as expected now!
Original comment by ptrm...@gmail.com
on 19 Jan 2011 at 11:53
I have some communication going on the fuse-devel mailing list and have tested
the fix on Debian (sid), CentOS and Ubuntu (10.10).
After some more testing and consultation, I'll roll this fix out into a tarball.
Original comment by dmoore4...@gmail.com
on 19 Jan 2011 at 6:05
Resolved with 1.35
Original comment by dmoore4...@gmail.com
on 21 Jan 2011 at 5:19
Original issue reported on code.google.com by
ptrm...@gmail.com
on 16 Jan 2011 at 2:48