JackYeh / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

rsync to s3fs fails silently #97

Closed. GoogleCodeExporter closed this issue 8 years ago.

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?

rsync a large number of music files to an s3fs mountpoint and let it run. After a 
few hours, it stops without an error message to stderr (is this rsync's 
problem?) and without a failing return code.
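
For reference, a minimal sketch of that setup; the bucket name, mountpoint, and source path below are placeholders, not taken from the report:

# mount the bucket with s3fs (bucket and mountpoint are hypothetical)
s3fs mybucket /mnt/s3
# push the music collection onto the mount
rsync -av /shared/Music/ /mnt/s3/Music/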

What is the expected output? What do you see instead?

I expect the backup either to complete normally or to return an error code that 
I can use to retry.
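
A sketch of the retry wrapper implied here, assuming rsync surfaces the failure through its exit status (which is exactly what this bug prevents); the paths are the same placeholders as above:

# rerun rsync until it succeeds, giving up after 5 attempts
for i in 1 2 3 4 5; do
  rsync -av /shared/Music/ /mnt/s3/Music/ && break
  echo "attempt $i failed (exit $?), retrying" >&2
done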

What version of the product are you using? On what operating system?

r188, I believe, or possibly r191; I can't find a version option in the s3fs binary.

Please provide any additional information below.

in /var/log/messages right before stopping:

Aug 25 05:44:52 fs1 s3fs: upload path=/shared/Music/090302/N.E.R.D. - Seeing Sounds [2008]/.N.E.R.D. - You Know What.mp3.aAbX7H size=10844953
Aug 25 05:45:35 fs1 s3fs: ###the operation was aborted by an application callback
Aug 25 05:45:35 fs1 s3fs: ###retrying...
Aug 25 05:45:37 fs1 s3fs: ###couldn't connect to server
Aug 25 05:45:37 fs1 s3fs: ###retrying...
Aug 25 05:45:39 fs1 s3fs: ###couldn't connect to server
Aug 25 05:45:39 fs1 s3fs: ###retrying...
Aug 25 05:45:39 fs1 s3fs: ###giving up

Original issue reported on code.google.com by webmona...@gmail.com on 26 Aug 2010 at 4:16

GoogleCodeExporter commented 8 years ago
It happened again last night. The log entry in /var/log/messages is the same, 
but this time rsync actually hangs. Running it with -vvvv, I see nothing terribly 
interesting in the logs:

sender finished shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/02 A Movie Script Ending.flac
data recv 32768 at 7045120
recv_generator(shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/03 We Laugh Indoors.flac,1705)
data recv 32768 at 7077888
data recv 32768 at 7110656
send_files(1705, shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/03 We Laugh Indoors.flac)
count=0 n=0 rem=0
send_files mapped shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/03 We Laugh Indoors.flac of size 36829351
calling match_sums shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/03 We Laugh Indoors.flac
shared/Music/Original/flac/Death Cab For Cutie/The Photo Album/03 We Laugh Indoors.flac

Any suggestions?

Thanks!

Cheers, Eric

Original comment by webmona...@gmail.com on 26 Aug 2010 at 4:36

GoogleCodeExporter commented 8 years ago
s3fs has issues with uploads of large files. In this case, a 30+ MB file 
doesn't seem like it should present much of a problem, but unfortunately it 
sometimes does.

Merging with issue #142. Issue #142 is an attempt to resolve issues associated 
with large file uploads (> 20 MB) by implementing S3's multipart upload feature.

It is nearing rollout for "beta" testing and has successfully copied "huge" (> 
2 GB) files to an S3 bucket. There still appear to be some issues with rsync, 
though: the data gets there, but the subsequent mtime update initiated by rsync 
incurs a timeout.

My experiments show that giving s3fs a large read_write_timeout option (120s as 
opposed to the default 10s) alleviates the issue.
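
For example, a remount along these lines; the bucket and mountpoint are hypothetical, and the exact option spelling (read_write_timeout vs. readwrite_timeout) may vary between revisions:

# unmount, then remount with a 120-second read/write timeout
fusermount -u /mnt/s3
s3fs mybucket /mnt/s3 -o readwrite_timeout=120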

Also, your s3fs version is quite old, relatively speaking. Keep an eye out for 
a new release, or update to a more current revision now and give the 
read_write_timeout option a shot.
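
A sketch of updating from source, assuming the project's standard code.google Subversion layout and a make-based build (build steps may differ by revision):

# fetch the current trunk
svn checkout http://s3fs.googlecode.com/svn/trunk/ s3fs
cd s3fs
# build and install; consult the revision's own instructions if this fails
make
sudo make install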

Original comment by dmoore4...@gmail.com on 27 Dec 2010 at 8:58