keroscarel / s3backer

Automatically exported from code.google.com/p/s3backer
GNU General Public License v2.0

maxRetryPause doesn't have desired effect on Leopard (Mac OS X 10.5) #2

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?

1. Mount an s3backer file with a large "--maxRetryPause" value:
s3backer --prefix=macos --size=75M --filename=s3-backup3-remote.dmg
--maxRetryPause=300000 -d -f bucket mnt-bucket

2. Copy something to that file with dd:
dd if=local-75M-non-empty-file of=mnt-bucket/s3-backup3-remote.dmg bs=4096

3. Shut down network interface.

4. After retry attempt #8, s3backer exits and the file system unmounts.
At that moment, system.log contains this message:

Jul 10 17:07:28 macbook kernel[0]: MacFUSE: force ejecting (no response
from user space 5)

At startup, s3backer prints the correct maxRetryPause value and uses it, but
MacFUSE has its own timeout option, "daemon_timeout", which has a default
value of its own. When that timeout expires, Tiger (10.4) shows a "File
system timeout" dialog box with some useful options, but Leopard does not:
it simply kills the user process and unmounts the file system.

So I believe it is worth mentioning in the man page that one has to include
the "daemon_timeout" option in the command-line arguments when maxRetryPause
has a non-default value. Or maybe just set it to some very high value by default.

This is the latest version (1.04) of s3backer, on Mac OS X 10.5.4.

s3backer startup log:

2008-07-10 17:05:36 INFO: created s3backer using
http://s3.amazonaws.com/du-backup3
s3backer: auto-detecting block size and total file size...
2008-07-10 17:05:36 DEBUG: HEAD
http://s3.amazonaws.com/du-backup3/macos00000000
s3backer: auto-detected block size=4k and total size=75m
2008-07-10 17:05:37 DEBUG: s3backer config:
2008-07-10 17:05:37 DEBUG:         accessId: 
2008-07-10 17:05:37 DEBUG:        accessKey: "****"
2008-07-10 17:05:37 DEBUG:       accessFile: "/Users/demon/.s3backer_passwd"
2008-07-10 17:05:37 DEBUG:           access: "private"
2008-07-10 17:05:37 DEBUG:     assume_empty: false
2008-07-10 17:05:37 DEBUG:          baseURL: "http://s3.amazonaws.com/"
2008-07-10 17:05:37 DEBUG:           bucket: "du-backup3"
2008-07-10 17:05:37 DEBUG:           prefix: "macos"
2008-07-10 17:05:37 DEBUG:            mount:
"/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 DEBUG:         filename: "s3-backup3-remote.dmg"
2008-07-10 17:05:37 DEBUG:       block_size: - (4096)
2008-07-10 17:05:37 DEBUG:       block_bits: 12
2008-07-10 17:05:37 DEBUG:        file_size: 75M (78643200)
2008-07-10 17:05:37 DEBUG:       num_blocks: 19200
2008-07-10 17:05:37 DEBUG:        file_mode: 0600
2008-07-10 17:05:37 DEBUG:        read_only: false
2008-07-10 17:05:37 DEBUG:  connect_timeout: 30s
2008-07-10 17:05:37 DEBUG:       io_timeout: 30s
2008-07-10 17:05:37 DEBUG: initial_retry_pause: 200ms
2008-07-10 17:05:37 DEBUG:  max_retry_pause: 300000ms
2008-07-10 17:05:37 DEBUG:  min_write_delay: 500ms
2008-07-10 17:05:37 DEBUG:       cache_time: 10000ms
2008-07-10 17:05:37 DEBUG:       cache_size: 10000 entries
2008-07-10 17:05:37 DEBUG: fuse_main arguments:
2008-07-10 17:05:37 DEBUG:   [0] = "s3backer"
2008-07-10 17:05:37 DEBUG:   [1] = "-o"
2008-07-10 17:05:37 DEBUG:   [2] = "kernel_cache,fsname=s3backer,use_ino,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=31536000,default_permissions,nodev,nosuid"
2008-07-10 17:05:37 DEBUG:   [3] = "-d"
2008-07-10 17:05:37 DEBUG:   [4] = "-f"
2008-07-10 17:05:37 DEBUG:   [5] = "/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 INFO: s3backer process 10403 for
/Users/demon/mounts/mnt-du-backup3 started

And it dies like this:

2008-07-10 17:06:28 DEBUG: PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 NOTICE: HTTP operation timeout: PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 INFO: retrying query (attempt #2): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:06:59 INFO: retrying query (attempt #3): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:00 INFO: retrying query (attempt #4): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:00 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:02 INFO: retrying query (attempt #5): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:02 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:05 INFO: retrying query (attempt #6): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:05 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:11 INFO: retrying query (attempt #7): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:11 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:24 INFO: retrying query (attempt #8): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:24 ERROR: curl error: couldn't resolve host name

(end of the messages. s3backer exits)

Original issue reported on code.google.com by dimaulu...@gmail.com on 10 Jul 2008 at 9:29

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 1:53

GoogleCodeExporter commented 8 years ago
Should be fixed in r88. If you get a chance please apply that patch and retest. 
Thanks!

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 2:21

GoogleCodeExporter commented 8 years ago
I believe the code in r88:

snprintf(buf, sizeof(buf), "-odaemon_timeout=%u", config.connect_timeout
 + config.io_timeout + config.max_retry_pause / 1000 + 10);

should read:

snprintf(buf, sizeof(buf), "-odaemon_timeout=%u", ( config.connect_timeout
 + config.io_timeout + config.max_retry_pause ) / 1000 + 10);

Because all of these parameters are in milliseconds.

Also, daemon_timeout has a maximum value of 600 seconds.
Look here:

http://www.google.com/codesearch?hl=en&lr=&q=FUSE_MAX_DAEMON_TIMEOUT+package%3Ahttp%3A%2F%2Fmacfuse\.googlecode\.com&sbtn=Search

and here:

http://www.google.com/codesearch?q=daemon_timeout+package%3Ahttp%3A%2F%2Fmacfuse\.googlecode\.com&origq=daemon_timeout&btnG=Search+Trunk

Original comment by dimaulu...@gmail.com on 11 Jul 2008 at 5:11

GoogleCodeExporter commented 8 years ago
Oops! I was wrong! Sorry!

Code is fine.

Original comment by dimaulu...@gmail.com on 11 Jul 2008 at 5:13

GoogleCodeExporter commented 8 years ago
Strange thing: whichever parameters I give to maxRetryPause, s3backer always
starts with daemon_timeout=100:

s3backer --prefix=macos --size=75M --filename=s3-backup3-remote.dmg
--maxRetryPause=500000 -d -f du-backup3 /Users/demon/mounts/mnt-du-backup3

2008-07-11 13:13:46 DEBUG:  connect_timeout: 30s
2008-07-11 13:13:46 DEBUG:       io_timeout: 30s
2008-07-11 13:13:46 DEBUG: initial_retry_pause: 200ms
2008-07-11 13:13:46 DEBUG:  max_retry_pause: 500000ms
2008-07-11 13:13:46 DEBUG: fuse_main arguments:

2008-07-11 13:13:46 DEBUG:   [0] = "s3backer"
2008-07-11 13:13:46 DEBUG:   [1] = "-o"
2008-07-11 13:13:46 DEBUG:   [2] = "kernel_cache,fsname=s3backer,use_ino,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=31536000,default_permissions,nodev,nosuid,daemon_timeout=100"

Original comment by dimaulu...@gmail.com on 11 Jul 2008 at 5:16

GoogleCodeExporter commented 8 years ago
On Leopard, "daemon_timeout" has a fixed maximum of 600 seconds.
Anything higher is simply ignored.

Maybe it is better just to set it to the maximum?

Because when I use different values of timeout/retry_pause, s3backer sets
daemon_timeout to 850 seconds, which is higher than the maximum, so it falls
back to the 60-second default, which is sometimes not enough.

Original comment by dimaulu...@gmail.com on 15 Jul 2008 at 2:56

GoogleCodeExporter commented 8 years ago
See changes in r104. We now limit the daemon_timeout setting to
FUSE_MAX_DAEMON_TIMEOUT on MacOS and emit a warning if we would have tried
to set a higher value.

Original comment by archie.c...@gmail.com on 15 Jul 2008 at 3:46

GoogleCodeExporter commented 8 years ago
Correction: use r110 instead of r104.

Original comment by archie.c...@gmail.com on 15 Jul 2008 at 6:23

GoogleCodeExporter commented 8 years ago
Thank you! r110 makes sense and works perfectly.

Original comment by dimaulu...@gmail.com on 16 Jul 2008 at 2:56

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 16 Jul 2008 at 3:34