derrickchoi / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

`cp -p` and `mv` preserve only owner and mode; they fail to copy the date/timestamp #172

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Have a file some hours or days old on a local disk with different ownership 
and a non-default file mode
2. Use `cp` to copy that file to a mounted S3 bucket
3. Use `cp -p` to copy that file (with a different name) to a mounted S3 bucket
4. Use `mv` to move that file (with a different name) to a mounted S3 bucket

What is the expected output? What do you see instead?
2. Should have default ownership and mode and current date/timestamp (and it 
does)
3. Should have the same ownership, mode and date/timestamp as original file (it 
has the same ownership and mode, but the current date/timestamp)
4. Should have the same ownership, mode and date/timestamp as original file (it 
has the same ownership and mode, but the current date/timestamp)

What version of the product are you using? On what operating system?
s3fs 1.40 on openSUSE 11.4 with FUSE 2.8.4

Please provide any additional information below.
The Wiki mentioned that timestamps could be changed now, and a comment 
indicated that `cp -p` specifically seemed to work. However, neither `cp -p` 
nor `mv` appears to preserve timestamps for me.

Original issue reported on code.google.com by beamerb...@gmail.com on 16 Mar 2011 at 10:20

GoogleCodeExporter commented 8 years ago
Confirmed.

Interesting problem. I found a fix, but I'm unsure what other new problems it 
might introduce.

Here is the sequence of calls that FUSE issues during a `mv`:

getattr[path=/object]
create[path=/object][mode=33152][flags=32961]
getattr[path=/object]
write[path=/object]
utimens[path=/object][mtime=1274410389]
getattr[path=/object]
chmod[path=/object][mode=33188]
getattr[path=/object]
flush[path=/object][fd=5]
release[path=/object][fd=5]

The "utimens" is correct and holds the mtime of the old object. The
mtime gets uploaded to S3. The problem is the last "flush". The current
code updates the mtime -- that's why the mtime is getting corrupted.

That final flush isn't aware that it is coming from a mv and can't
distinguish this call to flush from any other.

The fix that I came up with is to remove the update to mtime in the
s3fs_flush function:

--- src/s3fs.cpp    (revision 338)
+++ src/s3fs.cpp    (working copy)
@@ -2403,8 +2403,8 @@
       }
     }

-    mtime = time(NULL);
-    meta["x-amz-meta-mtime"] = str(mtime);
+    mtime = time(NULL);
+    // meta["x-amz-meta-mtime"] = str(mtime);

     // force the cached copy to have the same mtime as the remote copy
     if(use_cache.size() > 0) {

BUT, this probably messes up Ben's subsequent code for the file cache.

Ben, can you test this change with the cache?

Can anyone think of what else might get messed up with this proposed
change?

Original comment by dmoore4...@gmail.com on 25 Mar 2011 at 4:25

GoogleCodeExporter commented 8 years ago
Just chiming in, I'll take a look this AM.

Original comment by ben.lema...@gmail.com on 25 Mar 2011 at 3:40

GoogleCodeExporter commented 8 years ago
Anyone want to double-check this against trunk (v1.53)? I'm not seeing the 
issue:

$./src/s3fs -f -d -o allow_other -o max_stat_cache_size=200000 -o 
default_acl=public-read -o noatime -o use_cache=/tmp thebucket /mnt/cloudfront
$ cd /mnt/cloudfront/test
$ ls -la
  -rw-r--r-- 1 ben ben 0 2011-03-10 13:26 asdfadf
$ mv asdfadf blah
$ ls -la
  -rw-r--r-- 1 ben ben 0 2011-03-10 13:26 blah

Foreground output from `mv`:

s3fs_getattr[path=/test]
    stat cache hit [path=/test] [hit count=4]
s3fs_getattr[path=/test/blah]
s3fs_getattr[path=/test/asdfadf]
    stat cache hit [path=/test/asdfadf] [hit count=7]
s3fs_getattr[path=/test/blah]
s3fs_getattr[path=/test/blah]
rename[from=/test/asdfadf][to=/test/blah]
s3fs_getattr[path=/test/asdfadf]
    stat cache hit [path=/test/asdfadf] [hit count=8]
rename_object[from=/test/asdfadf][to=/test/blah]
    calling get_headers [path=/test/asdfadf]
   put_headers[path=/test/blah]
      copying[path=/test/blah]
unlink[path=/test/asdfadf]
    delete_stat_cache_entry[path=/test/asdfadf]

Original comment by ben.lema...@gmail.com on 25 Mar 2011 at 4:06

GoogleCodeExporter commented 8 years ago
Ben, you won't see the issue when you mv (rename) an object that already resides 
on a bucket mount. Try mv'ing something old from your home directory (assuming 
that it isn't a bucket mount itself) to the bucket.

Original comment by dmoore4...@gmail.com on 25 Mar 2011 at 4:12

GoogleCodeExporter commented 8 years ago
Doh, thanks.

Removing the update to meta["x-amz-meta-mtime"] will definitely break both the 
cache and file mtimes altogether. Did this really ever work? Looking back a 
few revisions (pre mtime caching), the same call `meta["x-amz-meta-mtime"] = 
str(time(NULL))` was still in place.

I'll keep thinking on this one, it'd certainly be nice to be able to preserve 
the mtime.

Original comment by ben.lema...@gmail.com on 25 Mar 2011 at 4:54

GoogleCodeExporter commented 8 years ago
This never worked, as far as I can see. I too looked back through the history and 
saw that this "feature" has always been there.

Obviously, some use models care about this. With my use model of s3fs, I 
probably wouldn't ever see it (or care).

Ideally, I think that s3fs should functionally operate (from a macroscopic 
view) just like a native file system.  

I can see that my "fix" would break caching, but I'm not yet convinced that it 
would break mtimes "altogether" -- can you give an example?

Original comment by dmoore4...@gmail.com on 25 Mar 2011 at 5:19

GoogleCodeExporter commented 8 years ago
Yeah, I agree, in my eyes s3fs should be as much like a native fs as possible.

On breaking mtime altogether: I haven't done extensive testing, but 
s3fs_flush, truncate and symlink are the only methods which call 
put_local_fd(), the function that takes the [x-amz-*] metadata and ultimately 
uploads the file to S3. So if x-amz-meta-mtime never makes it to S3, we won't 
know the mtime the next time we retrieve the file.
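
To make that metadata round trip concrete, here is a minimal sketch, with a plain 
std::map standing in for the [x-amz-*] header set; the typedef and helper names 
(headers_t, set_mtime_header, get_mtime_header) are illustrative, not the actual 
s3fs code:

#include <cstdlib>
#include <ctime>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Stand-in for the header map that travels with the PUT / HEAD requests.
typedef std::map<std::string, std::string> headers_t;

// Before upload: record the file's mtime as custom metadata.
static void set_mtime_header(headers_t &meta, time_t mtime) {
  std::ostringstream ss;
  ss << mtime;
  meta["x-amz-meta-mtime"] = ss.str();
}

// After download / HEAD: recover the stored mtime, or 0 if it was never set.
static time_t get_mtime_header(const headers_t &meta) {
  headers_t::const_iterator it = meta.find("x-amz-meta-mtime");
  if (it == meta.end())
    return 0;
  return static_cast<time_t>(strtol(it->second.c_str(), NULL, 10));
}

int main() {
  headers_t meta;
  set_mtime_header(meta, 1274410389);           // the mtime from the utimens trace above
  std::cout << get_mtime_header(meta) << "\n";  // prints 1274410389
  return 0;
}

The point of the sketch is only that this header is the single carrier of the mtime: 
if flush overwrites it with time(NULL) before put_local_fd() runs, the value written 
by utimens is lost for good.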

Original comment by ben.lema...@gmail.com on 25 Mar 2011 at 5:34

GoogleCodeExporter commented 8 years ago
Question:

I may not understand the code completely, but here's how it looks to me: 
utimens sets the mtime, and then flush also tries to set the mtime. So, are 
some features relying on flush to set the mtime, while others set it 
themselves? If so, wouldn't a better pattern be one of:

A) all features set the mtime and flush never sets it, or
B) no feature ever sets the mtime, but flush always sets it, potentially to a 
value other than time(NULL) if a previous call dictates?

Does that make sense?

Nick

Original comment by beamerb...@gmail.com on 25 Mar 2011 at 8:56

GoogleCodeExporter commented 8 years ago
Seems as though you understand it well enough. I think the solution lies in 
proposal A), which is what my "fix" does. We do know that this breaks at least 
some aspect of the file cache mechanism. But if you don't use a file cache, 
it's unclear whether anything else is broken by my proposed fix -- I wouldn't 
be surprised if something else is.

The problem lies in knowing what the previous operation on the object was. The 
flush has to be aware of the preceding utimens so that it doesn't update the 
mtime. I can visualize a clunky solution to this, but that might be rife with 
issues itself. I would just rather my "fix" not have any bad side effects 
beyond the cache problem.

Original comment by dmoore4...@gmail.com on 25 Mar 2011 at 9:13

GoogleCodeExporter commented 8 years ago
I'm still a bit worried about option A and the proposed fix: flush() is called 
from FUSE every time a file handle is closed, and it is never called directly by 
s3fs itself. IMHO, that's the best place for the mtime to be set. Say I just 
append some data to a file and close it: the main operations involved are open(), 
write() and release(). write() itself operates on individual blocks, so we have 
no idea when all operations on the file handle are complete. At what point would 
we set the mtime?

Original comment by ben.lema...@gmail.com on 25 Mar 2011 at 10:08

GoogleCodeExporter commented 8 years ago
Yep, I found a situation in which my "fix" breaks normal operation:

% ls -l 70
-rw-r--r-- 1 mooredan mooredan 2 Feb  4 17:12 70

% echo hello >> 70

% date
Fri Mar 25 16:59:07 MDT 2011

% ls -l 70
-rw-r--r-- 1 mooredan mooredan 8 Feb  4 17:12 70

Appending to an existing file doesn't update the mtime.

So another solution needs to be found for this problem. Admittedly, I
didn't have high hopes for the fix.

Original comment by dmoore4...@gmail.com on 25 Mar 2011 at 11:02

GoogleCodeExporter commented 8 years ago
How about this idea?

A mv or cp -p operation from a non-bucket filesystem to the bucket always 
results in a call to utimens.

Upon receiving this utimens, how about setting a flag for the object that is 
then checked by flush? The final operation is a release; clear the flag there.

This is the clunky solution, but it might work. It probably depends on which 
other operations use utimens.
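
A rough sketch of what that flag could look like, assuming a per-path map keyed on 
the object path; all names here (explicit_mtime, note_utimens, mtime_for_flush, 
clear_on_release) are hypothetical, ignore locking, and are not the actual s3fs API:

#include <ctime>
#include <map>
#include <string>

// Per-path record of an explicit utimens() call; hypothetical, single-threaded.
static std::map<std::string, time_t> explicit_mtime;

// utimens handler: remember that the caller already chose an mtime for this path.
void note_utimens(const std::string &path, time_t mtime) {
  explicit_mtime[path] = mtime;
}

// flush handler: keep the caller's mtime if one was recorded, otherwise
// fall back to the current behaviour of stamping the current time.
time_t mtime_for_flush(const std::string &path) {
  std::map<std::string, time_t>::const_iterator it = explicit_mtime.find(path);
  if (it != explicit_mtime.end())
    return it->second;
  return time(NULL);
}

// release handler: clear the flag so later, unrelated writes behave normally.
void clear_on_release(const std::string &path) {
  explicit_mtime.erase(path);
}

In the mv trace earlier in this thread the order is utimens, then flush, then 
release, so flush would return the preserved value (1274410389) instead of 
time(NULL), and the flag would be cleared when the handle is released.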

Original comment by dmoore4...@gmail.com on 26 Mar 2011 at 12:15

GoogleCodeExporter commented 8 years ago
Please apply this patch to s3fs.cpp and s3fs.h and see if it solves the problem.

I tried it and it does seem to address the issue correctly.

Ben, you may need to make an adjustment to the file cache mechanism, though.

Original comment by dmoore4...@gmail.com on 26 Mar 2011 at 2:02

Attachments:

GoogleCodeExporter commented 8 years ago
I got the same issue with both revision 343 and revision 338 with 
issue_172.patch applied, on CentOS 5.5 with FUSE 2.8.4. I used rsync to back up 
local files to a mounted S3 bucket. All files on the mounted S3 bucket have the 
current date/timestamp as their mtime (the mtime is not preserved). Therefore 
rsync always overwrites existing files and fails to perform an incremental backup.

Please let me know if you have any idea to solve it.

Thanks,
Tuong

Original comment by huynhvut...@gmail.com on 18 May 2011 at 7:24

GoogleCodeExporter commented 8 years ago
Can someone please double check my results against r346? I ran a scenario as 
described in the original bug report and everything appears to work as 
expected. Did I happen to accidentally fix this or am I just way off-base? I'm 
guessing the latter :)

[LOCAL FS]
ben:~/code/tests/172$ date && touch one two three
Sun Jun 26 17:09:19 MDT 2011
ben:~/code/tests/172$ ls -l
-rw-r--r-- 1 ben ben    0 2011-06-26 17:09 one
-rw-r--r-- 1 ben ben    0 2011-06-26 17:09 three
-rw-r--r-- 1 ben ben    0 2011-06-26 17:09 two

.. a few minutes later ..

[S3FS]
root$:~/code/s3fs/src# ./s3fs <bucket> /mnt/s3fs 
-ouse_cache=/tmp,allow_other,noatime
root$:~/code/s3fs/src# cd /mnt/s3fs/
root$:/mnt/s3fs# ls -l
Sun Jun 26 17:14:52 MDT 2011
total 0

# test case 1 (PASSED): `cp` should have default uid/gid, current timestamp
root$:/mnt/cloudfront# date
Sun Jun 26 17:16:08 MDT 2011
root$:/mnt/s3fs# cp /home/ben/code/tests/172/one .
root$:/mnt/s3fs# ls -la
total 1
-rw-r--r-- 1 root root 0 2011-06-26 17:16 one

# test case 2 (PASSED): `cp -p` should preserve uid/gid and timestamp
root$:/mnt/s3fs# date && cp -p /home/ben/code/tests/172/two .
Sun Jun 26 17:20:03 MDT 2011
root$:/mnt/s3fs# ls -la
total 1
-rw-r--r-- 1 root root 0 2011-06-26 17:16 one
-rw-r--r-- 1 ben  ben  0 2011-06-26 17:09 two

# test case 3 (PASSED): `mv` should preserve uid/gid and timestamp
root$:/mnt/s3fs# date && mv /home/ben/code/tests/172/three .
Sun Jun 26 17:21:21 MDT 2011
root$:/mnt/s3fs# ls -la
total 2
-rw-r--r-- 1 root root 0 2011-06-26 17:16 one
-rw-r--r-- 1 ben  ben  0 2011-06-26 17:09 three
-rw-r--r-- 1 ben  ben  0 2011-06-26 17:09 two

Original comment by ben.lema...@gmail.com on 26 Jun 2011 at 11:27

GoogleCodeExporter commented 8 years ago
Not working correctly with r349

Local FS:

-rw-r--r--   1 root         2235 2011-06-06 21:15 xorg.conf.bak

%  mv xorg.conf.bak /mnt/s3/uswest.suncup.org/

S3FS:

-rw-r--r-- 1 root       2235 2011-07-06 10:05 xorg.conf.bak

...and "cp -p":

> ll ~/HOWTO

-rw-r--r-- 1 mooredan 88 2011-03-27 12:05 /home/mooredan/HOWTO

> cp -p ~/HOWTO /mnt/s3/uswest.suncup.org/

> ll /mnt/s3/uswest.suncup.org/HOWTO

-rw-r--r-- 1 mooredan 88 2011-07-06 10:11 /mnt/s3/uswest.suncup.org/HOWTO

Timestamps are not being preserved.

Original comment by dmoore4...@gmail.com on 6 Jul 2011 at 4:14

GoogleCodeExporter commented 8 years ago
Ben,

The patch in comment #13 still solves this issue:

"cp -p":

> ll hostapd.conf 
-rw-r--r-- 1 mooredan 42161 2011-04-23 09:36 hostapd.conf

> cp -p hostapd.conf /mnt/s3/uswest.suncup.org/

> ll /mnt/s3/uswest.suncup.org/hostapd.conf
-rw-r--r-- 1 mooredan 42161 2011-04-23 09:36 
/mnt/s3/uswest.suncup.org/hostapd.conf

"mv":

> ll enemies 
-rw-r--r-- 1 mooredan 24787 2011-03-19 17:25 enemies

> mv enemies /mnt/s3/uswest.suncup.org/

> ll /mnt/s3/uswest.suncup.org/enemies
-rw-r--r-- 1 mooredan 24787 2011-03-19 17:25 /mnt/s3/uswest.suncup.org/enemies

Timestamps are preserved.

I plan on pushing this out, but wanted to give you a chance to see if caching 
remains as expected. (I personally do not use caching due to my use model of 
s3fs).

Original comment by dmoore4...@gmail.com on 6 Jul 2011 at 5:40

GoogleCodeExporter commented 8 years ago
I am pretty sure we'll completely lose caching (an absolute must-have for my 
usage) if the patch is applied. I haven't worked through the patch much because 
I couldn't reproduce the problem the last time I tried. I'll give it another 
go.. 

Original comment by ben.lema...@gmail.com on 6 Jul 2011 at 5:54

GoogleCodeExporter commented 8 years ago
It looks like you did before, but be sure to mv or cp from a native filesystem 
to the s3fs mount.

Do you think it is possible to add a caching test to the test suite?  Seems 
like it might be a difficult thing to do.

Original comment by dmoore4...@gmail.com on 6 Jul 2011 at 6:03

GoogleCodeExporter commented 8 years ago
Yeah, originally it was my fault - comment 15 is when I last tested and didn't 
see the issue. A test suite for the cache would definitely be difficult; we 
could possibly add some sort of debug output the test scripts could watch for.

Original comment by ben.lema...@gmail.com on 6 Jul 2011 at 7:00

GoogleCodeExporter commented 8 years ago
OK, I think I've found the source of the problem; it's in s3fs_flush(). 
Give the attached patch a test against r351.

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 6:23

Attachments:

GoogleCodeExporter commented 8 years ago
That patch breaks s3fs -- the mount shows an empty directory upon listing. And 
these errors show up in syslog:

Jul  7 13:26:59 horsetooth s3fs: init $Rev: 350 $
Jul  7 13:27:15 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:27:16 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:27:17 horsetooth s3fs: ###retrying...
Jul  7 13:27:18 horsetooth s3fs: ###retrying...
Jul  7 13:27:27 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:27:28 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:27:29 horsetooth s3fs: ###retrying...
Jul  7 13:27:30 horsetooth s3fs: ###retrying...
Jul  7 13:27:38 horsetooth s3fs: timeout  now: 1310066858  curl_times[curl]: 
1310066827l  readwrite_timeout: 30
Jul  7 13:27:38 horsetooth s3fs: ### CURLE_ABORTED_BY_CALLBACK
Jul  7 13:27:38 horsetooth s3fs: timeout  now: 1310066858  curl_times[curl]: 
1310066827l  readwrite_timeout: 30
Jul  7 13:27:38 horsetooth s3fs: ### CURLE_ABORTED_BY_CALLBACK
Jul  7 13:27:42 horsetooth s3fs: ###retrying...
Jul  7 13:27:42 horsetooth s3fs: ###giving up
Jul  7 13:27:42 horsetooth s3fs: ###retrying...
Jul  7 13:27:42 horsetooth s3fs: ###giving up
Jul  7 13:27:52 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:27:54 horsetooth s3fs: ###retrying...
Jul  7 13:28:04 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:28:06 horsetooth s3fs: ###retrying...
Jul  7 13:28:14 horsetooth s3fs: timeout  now: 1310066894  curl_times[curl]: 
1310066863l  readwrite_timeout: 30
Jul  7 13:28:14 horsetooth s3fs: ### CURLE_ABORTED_BY_CALLBACK
Jul  7 13:28:18 horsetooth s3fs: ###retrying...
Jul  7 13:28:18 horsetooth s3fs: ###giving up
Jul  7 13:29:17 horsetooth s3fs: init $Rev: 350 $
Jul  7 13:29:28 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:29:30 horsetooth s3fs: ###retrying...
Jul  7 13:29:40 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:29:42 horsetooth s3fs: ###retrying...
Jul  7 13:29:50 horsetooth s3fs: timeout  now: 1310066990  curl_times[curl]: 
1310066959l  readwrite_timeout: 30
Jul  7 13:29:50 horsetooth s3fs: ### CURLE_ABORTED_BY_CALLBACK
Jul  7 13:29:54 horsetooth s3fs: ###retrying...
Jul  7 13:29:54 horsetooth s3fs: ###giving up
Jul  7 13:30:04 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:30:06 horsetooth s3fs: ###retrying...
Jul  7 13:30:16 horsetooth s3fs: ### CURLE_OPERATION_TIMEDOUT
Jul  7 13:30:18 horsetooth s3fs: ###retrying...
Jul  7 13:30:26 horsetooth s3fs: timeout  now: 1310067026  curl_times[curl]: 
1310066995l  readwrite_timeout: 30
Jul  7 13:30:26 horsetooth s3fs: ### CURLE_ABORTED_BY_CALLBACK
Jul  7 13:30:30 horsetooth s3fs: ###retrying...
Jul  7 13:30:30 horsetooth s3fs: ###giving up

Original comment by dmoore4...@gmail.com on 7 Jul 2011 at 7:55

GoogleCodeExporter commented 8 years ago
Well, that's odd. I'm not able to reproduce that at all, and I'm pretty confused as 
to how that'd even be the case. What options are you passing to fuse/s3fs?

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 8:00

GoogleCodeExporter commented 8 years ago
fstab entry:

/usr/local/bin/s3fs#uswest.suncup.org     /mnt/s3/uswest.suncup.org      fuse  
_netdev,allow_other,use_rrs=1 0 0

Original comment by dmoore4...@gmail.com on 7 Jul 2011 at 8:08

GoogleCodeExporter commented 8 years ago
Well I'm at a loss on that one:

ben:~$ cd /mnt/s3/172 && ls
xorg.conf.bak
ben:/mnt/s3/172$ 

root:~/code/s3fs/src# svnversion .
351M
root:~/code/s3fs/src# ./s3fs lemasurier /mnt/s3 -f -d -oallow_other,use_rrs=1
s3fs_check_service
s3fs_getattr[path=/.Trash]
s3fs_getattr[path=/.Trash-1000]
s3fs_getattr[path=/]
s3fs_getattr[path=/172]
    add_stat_cache_entry[path=/172]
access[path=/172]
readdir[path=/172]
list_bucket [path=/172]
    add_stat_cache_entry[path=/172/xorg.conf.bak]
s3fs_getattr[path=/172/xorg.conf.bak]
    stat cache hit [path=/172/xorg.conf.bak] [hit count=0]

[ SYSLOG ]
Jul  7 14:21:24 tuberculosis s3fs: curlCode: 0   msg: No error
Jul  7 14:21:24 tuberculosis s3fs: responseCode: 200
Jul  7 14:21:24 tuberculosis s3fs: URL is 
http://s3.amazonaws.com/lemasurier?location
Jul  7 14:21:24 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com?location
Jul  7 14:21:24 tuberculosis s3fs: curlCode: 0   msg: No error
Jul  7 14:21:24 tuberculosis s3fs: responseCode: 200
Jul  7 14:21:24 tuberculosis s3fs: init $Rev: 350 $
Jul  7 14:21:26 tuberculosis s3fs: URL is 
http://s3.amazonaws.com/lemasurier/.Trash
Jul  7 14:21:26 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com/.Trash
Jul  7 14:21:26 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/.Trash
Jul  7 14:21:26 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/.Trash
Jul  7 14:21:26 tuberculosis s3fs: HTTP response code 404
Jul  7 14:21:26 tuberculosis s3fs: HTTP response code 404 was returned
Jul  7 14:21:26 tuberculosis s3fs: Now returning ENOENT
Jul  7 14:21:26 tuberculosis s3fs: Now returning ENOENT
Jul  7 14:21:26 tuberculosis s3fs: URL is 
http://s3.amazonaws.com/lemasurier/.Trash-1000
Jul  7 14:21:26 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com/.Trash-1000
Jul  7 14:21:26 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/.Trash-1000
Jul  7 14:21:26 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/.Trash-1000
Jul  7 14:21:26 tuberculosis s3fs: HTTP response code 404
Jul  7 14:21:26 tuberculosis s3fs: HTTP response code 404 was returned
Jul  7 14:21:26 tuberculosis s3fs: Now returning ENOENT
Jul  7 14:21:26 tuberculosis s3fs: Now returning ENOENT
Jul  7 14:21:28 tuberculosis s3fs: URL is http://s3.amazonaws.com/lemasurier/172
Jul  7 14:21:28 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com/172
Jul  7 14:21:28 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/172
Jul  7 14:21:28 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com/172
Jul  7 14:21:29 tuberculosis s3fs: HTTP response code 200
Jul  7 14:21:29 tuberculosis s3fs: URL is 
http://s3.amazonaws.com/lemasurier?delimiter=/&prefix=172/&max-keys=1
Jul  7 14:21:29 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com?delimiter=/&prefix=172/&ma
Jul  7 14:21:29 tuberculosis s3fs: connecting to URL 
http://lemasurier.s3.amazonaws.com?delimiter=/&prefix=172/
Jul  7 14:21:29 tuberculosis s3fs: HTTP response code 200
Jul  7 14:21:29 tuberculosis s3fs: URL is 
http://s3.amazonaws.com/lemasurier/172/xorg.conf.bak
Jul  7 14:21:29 tuberculosis s3fs: URL changed is 
http://lemasurier.s3.amazonaws.com/172/xorg.conf.bak

All seems to work as expected - any chance /usr/local/bin/s3fs is broken/out of 
date? The simple change in s3fs_flush wouldn't even be called until a file was 
opened.

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 8:21

GoogleCodeExporter commented 8 years ago
Here's an archive of my working copy, just to rule out any possible patch 
issues.

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 9:21

Attachments:

GoogleCodeExporter commented 8 years ago
Looks good to me.  Sorry, my (iptables) firewall has been flaking out and needs 
an occasional restart.  

Go ahead and commit.  Thanks.  ...looks like you came up with a less convoluted 
solution.

Original comment by dmoore4...@gmail.com on 7 Jul 2011 at 10:01

GoogleCodeExporter commented 8 years ago
Whew, awesome, you had me pretty confused there :) 

commit on the way.

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 10:07

GoogleCodeExporter commented 8 years ago
This issue was closed by revision r352.

Original comment by ben.lema...@gmail.com on 7 Jul 2011 at 10:09