pradeepgn / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

Error copying large file #354

Closed: GoogleCodeExporter closed this issue 8 years ago

GoogleCodeExporter commented 8 years ago
Detailed description of observed behavior:

This seems to be the same problem as described in this issue:
http://code.google.com/p/s3fs/issues/detail?id=248

As suggested there, I'm reposting the issue.  

Errors using rsync (and cp) with a 5.9GB file.  

Rsync error:
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
rsync: write failed on "/mnt/backup/2013-06-28/accounts/XXXX.tar.gz": No space left on device (28)
rsync: connection unexpectedly closed (41 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]

What steps will reproduce the problem? Please be very specific and detailed.
(If the developers cannot reproduce the issue, then it is unlikely a fix will be found.)

after mounting s3 on /mnt:
rsync -av /backup /mnt

Smaller files work fine, but one large file (5.9GB) always seems to crash it.  

===================================================================
The following information is very important in order to help us help you.
Omitting any of these details may delay your support request or cause it to
receive no attention at all.
===================================================================
Version of s3fs being used (s3fs --version):
1.71

Version of fuse being used (pkg-config --modversion fuse):
2.8.4
I used these instructions to install, by the way: 
http://code.google.com/p/s3fs/issues/detail?id=143

System information (uname -a):
Linux 2.6.32-358.6.2.el6.x86_64 #1 SMP Thu May 16 20:59:36 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Distro (cat /etc/issue):
CentOS 6.04

s3fs command line used (if applicable):
/usr/bin/s3fs mybucket /mnt

/etc/fstab entry (if applicable):
N/A?

s3fs syslog messages (grep s3fs /var/log/syslog):
no results.   
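
Note that on CentOS, syslog messages usually land in /var/log/messages rather than /var/log/syslog. If that file is also empty, one way to capture s3fs's own output (a sketch, assuming the standard -f foreground and -d debug options) is to remount in the foreground with debugging enabled and retry the copy:

# remount in the foreground with debug output, then rerun the rsync
/usr/bin/s3fs mybucket /mnt -f -d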

Original issue reported on code.google.com by BakerSal...@gmail.com on 2 Jul 2013 at 9:52

GoogleCodeExporter commented 8 years ago
Hi,

I'm sorry for the late reply.
Do you still have this problem?
If so, please try the latest version and test again.

Because the error says "No space left on device", your local disk space is not sufficient for s3fs when used with rsync.
s3fs creates a local temporary file (and file descriptor) and uses disk space whenever a file is modified.

If it is acceptable for you, you can use rsync with the "--inplace" option.
With this option, rsync does not create a temporary file, and I think it works well with s3fs.
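
A rough sketch of that workflow, reusing the bucket, mount point, and source path from the report above:

/usr/bin/s3fs mybucket /mnt
rsync -av --inplace /backup /mnt

# s3fs stages modified files on local disk before uploading them, so it is
# also worth confirming that the temporary directory (typically /tmp, or the
# directory given by the use_cache option) has enough free space:
df -h /tmp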

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 27 Aug 2013 at 10:07

GoogleCodeExporter commented 8 years ago
Hi,

This issue has been left open for a long time, and the s3fs project has moved to GitHub (https://github.com/s3fs-fuse/s3fs-fuse), so I am closing it.

If you still have a problem, please open a new issue there.

Regards,

Original comment by ggta...@gmail.com on 23 Dec 2013 at 3:17