Can you post the relevant portions of your syslog file? I suspect that you're
getting timeout errors. The speed is going to depend on the upload speed of
your internet service (many DSL and cable providers do not have very fast
upload speeds -- just guessing here).
A subsequent rsync should clean things up; that's the beauty of rsync: it
"recovers" well if a connection was broken.
So are you trying to rsync data from one network-connected "drive" (EC2) to
s3fs? (just confirming)... using your server as a "go-between".
Theoretically, this should work. It may be inconvenient to try, but does the
same thing happen if you first rsync from EC2 to a local drive, then rsync
from there to S3? (rsync may even have an option to do this)
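Something like this two-step copy, for instance (the staging path and host are
hypothetical):

# step 1: pull from the remote host onto ordinary local disk
rsync -a [user]@[remote-host]:/source/dir/ /tmp/staging/
# step 2: push the staged copy onto the s3fs mount
rsync -a /tmp/staging/ /path/to/s3fs-mount/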
Original comment by dmoore4...@gmail.com
on 1 Dec 2010 at 12:30
Hi,
Both machines are running in Amazon EC2 cloud.
from the console where I have my s3fs mounted:
ubuntu-in-Amazon$ pwd
/mnts3/vm/temp
ubuntu-in-Amazon$ rsync -a [myuser]@[my-remote-server-in-amazon]:/mnt3/vm/temp/* .
rsync: failed to set times on "/mnts3/vm/temp/.HAIDemo.7z.001.qNqVbK": Input/output error (5)
rsync: rename "/mnts3/vm/temp/.HAIDemo.7z.001.qNqVbK" -> "HAIDemo.7z.001": Input/output error (5)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1526) [generator=3.0.7]
from syslog
...............
Dec 1 01:06:39 localhost s3fs: ###retrying...
Dec 1 01:06:39 localhost s3fs: ###giving up
Dec 1 01:06:39 localhost s3fs: copy path=/vm/temp/HAIDemo.7z.001
Dec 1 01:06:50 localhost s3fs: ###Operation was aborted by an application callback
...............
BTW, I don't believe my file is in this path /vm/temp/.....
my s3fs is mounted in /mnt3fs
Original comment by pingster...@gmail.com
on 1 Dec 2010 at 1:17
Sorry I mean my s3fs is mounted in /mnts3
Original comment by pingster...@gmail.com
on 1 Dec 2010 at 2:17
Do you have a few "###timeout" messages before the "###retrying"? ...I suspect
that you do.
If so, you're getting timeouts from the underlying curl library. By default,
s3fs will attempt a few retries (it's either 2 or 3) before giving up.
You can try the retries option: "-o retries=10" (...or more). Give that a try
and post a few more lines of your syslog next time.
After that, there may be a few more debugging steps that you can try, like the
-d option, which will dump a bit more info into syslog.
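For example, a remount along these lines (bucket name and mount point are
placeholders):

# unmount, then remount with more retries and extra debug output to syslog
fusermount -u /mnts3
s3fs mybucket /mnts3 -o retries=10 -d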
Original comment by dmoore4...@gmail.com
on 1 Dec 2010 at 5:45
Nothing else has changed:
rsync from a locally mounted EBS volume to the s3fs folder seems to work with no problem.
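For reference, that local-to-s3fs copy looks something like this (paths are
placeholders):

# EBS volume mounted locally, s3fs mounted at /mnts3
rsync -a /mnt/ebs/vm/temp/ /mnts3/vm/temp/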
Original comment by pingster...@gmail.com
on 9 Dec 2010 at 11:57
Yes, I too see issues with rsync and large files, but no input/output errors or
s3fs messages in syslog (other than the init message).
At this point in time, I'm not sure how to get to the bottom of this.
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (165 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]
Original comment by dmoore4...@gmail.com
on 10 Dec 2010 at 7:25
pingster.wu, what is the upload bandwidth of your internet connection?
If you are using Cable/DSL, then typical upload speeds are 1Mb (megabit, not
megabyte) per second.
Assuming you are on a typical residential connection, then, even in the best
case, an upload of 2TB of data would take roughly 185 days at a sustained 1Mb/s.
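The back-of-the-envelope arithmetic (assuming a decimal 2TB and a sustained
1 megabit per second):

# 2 TB in bits, divided by 1e6 bits/sec, divided by 86400 sec/day
$ echo '2 * 10^12 * 8 / 10^6 / 86400' | bc
185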
Yes, there are issues with large files and there may be something that we can
do about that (like using multipart uploads), but that still won't change how
long it takes to back up that amount of data over a typical internet
connection.
I don't know what your use model is, but if you're wanting to use s3fs for
online/offsite backup of that much data, you might be better off buying a
couple of large hard drives, cloning your disks and storing them at some other
location. (I have a friend who actually does this).
Original comment by dmoore4...@gmail.com
on 13 Dec 2010 at 9:38
I too am experiencing problems with a single large file; the errors occur when
rsync finishes:
building file list ...
1 file to consider
db1.dsk
  9663676417 100%   16.09MB/s    0:09:32  (xfer#1, to-check=0/1)
rsync: close failed on "/mnt/myrepo123/os/.db1.dsk.ZfCEVq": Input/output error (5)
rsync error: error in file IO (code 11) at receiver.c(628) [receiver=2.6.8]
rsync: connection unexpectedly closed (46 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(463) [generator=2.6.8]
rsync: connection unexpectedly closed (34 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
web2.dsk
  9663676417 100%   14.28MB/s    0:10:45  (xfer#1, to-check=0/1)
rsync: close failed on "/mnt/myrepo123/os/.web2.dsk.oFvOPi": Input/output error (5)
rsync error: error in file IO (code 11) at receiver.c(628) [receiver=2.6.8]
rsync: connection unexpectedly closed (47 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(463) [generator=2.6.8]
rsync: connection unexpectedly closed (34 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
Original comment by jve...@gmail.com
on 15 Dec 2010 at 7:35
Yes, issue #136 and issue #130 are different. This issue can probably be
mitigated by implementing the S3 multipart upload feature. ...don't hold your
breath. ;)
However, the original submitter of this issue has yet to confirm that this is
happening due to large files. Only the average file size of 250MB has been
communicated. s3fs *shouldn't* have too much of a problem with that, although
Amazon recommends multipart upload for any file > 100 MB.
Original comment by dmoore4...@gmail.com
on 15 Dec 2010 at 8:43
Sorry, that last statement was a little bit premature... I got the same
timeout errors from time to time.
Thanks,
Original comment by pingster...@gmail.com
on 19 Dec 2010 at 3:34
I believe that this is resolved (at least as well as it is going to be) with
tarball 1.33, which implements multipart upload. To help prevent eventual
consistency errors, use either a uswest or eu bucket. Also watch out for
network timeout errors and be mindful of your connection speed.
If (s3fs) issues are seen using the latest code, please open a new issue.
Original comment by dmoore4...@gmail.com
on 30 Dec 2010 at 7:36
I have now upgraded to 1.4 and am still seeing timeouts:
Apr 20 20:27:33 localhost s3fs: timeout now: 1303331253 curl_times[curl]: 1303331132l readwrite_timeout: 120
Apr 20 20:27:33 localhost s3fs: timeout now: 1303331253 curl_times[curl]: 1303331132l readwrite_timeout: 120
Apr 20 20:27:33 localhost s3fs: ### CURLE_ABORTED_BY_CALLBACK
Apr 20 20:27:37 localhost s3fs: ###retrying...
Apr 20 23:13:55 localhost s3fs: init $Rev: 312 $
Original comment by pingster...@gmail.com
on 20 Apr 2011 at 11:18
This issue is closed and has been determined to be fixed.
It appears that this is a different issue (although the error message may be
the same).
If you want this addressed, please open a new issue, providing all of the
details requested. Thank you.
Original comment by dmoore4...@gmail.com
on 21 Apr 2011 at 12:31
Original issue reported on code.google.com by
pingster...@gmail.com
on 30 Nov 2010 at 6:48