Closed: seaders closed this issue 10 years ago.
I solved the problem by modifying some kernel settings (Debian Squeeze):
$ cat /proc/sys/net/ipv4/tcp_rmem && cat /proc/sys/net/ipv4/tcp_wmem
4096 87380 512000
4096 16384 512000
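For reference, a minimal sketch of applying buffer limits like these at runtime with sysctl (the values mirror the ones above; tune them for your own workload):

```shell
# Set TCP read/write buffer limits: min, default, max (bytes).
# Requires root; these values match the output shown above.
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 512000"
sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 512000"

# Verify the new settings.
cat /proc/sys/net/ipv4/tcp_rmem
cat /proc/sys/net/ipv4/tcp_wmem
```

Settings applied this way do not survive a reboot; to persist them, add the equivalent `net.ipv4.tcp_rmem = …` and `net.ipv4.tcp_wmem = …` lines to /etc/sysctl.conf and run `sudo sysctl -p`.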
I am experiencing this issue with the latest code just checked out from GitHub.
If HTTPS is enabled via "s3cmd --configure", I get error 104 (Connection reset by peer); otherwise I get error 32 (Broken pipe).
So far I haven't been able to upload a single file using s3cmd.
I am using Ubuntu 12.04 LTS on a 64-bit Amazon EC2 machine in the same availability zone as my S3 bucket (EU-West-1, a.k.a. Ireland), with Python 2.7.3. My output for the above proc commands is
4096 87380 3870720
4096 16384 3870720
(And I've successfully verified that I can put to the bucket using a different tool, so permission errors are unlikely.)
EDIT: Modifying tcp_rmem and tcp_wmem (whatever they are) actually helped. Here are the instructions: http://scie.nti.st/2008/3/14/amazon-s3-and-connection-reset-by-peer/
Given the comments above, I'm going to close this. Please reopen if failures continue using current upstream master branch.
I am having this issue as well, using the latest master of s3cmd. I'm trying to sync, and the file is 580 KB in size.
INFO: Summary: 1 local files to upload, 0 files to remote copy, 0 remote files to delete
_public/js/app.js.map -> s3://xxxxx.example.com/js/app.js.map [1 of 1]
20480 of 593498 3% in 0s 36.71 MB/s failed
WARNING: Upload failed: /js/app.js.map ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
_public/js/app.js.map -> s3://xxxxx.example.com/js/app.js.map [1 of 1]
204800 of 593498 34% in 2s 95.00 kB/s failed
WARNING: Upload failed: /js/app.js.map ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
_public/js/app.js.map -> s3://xxxxx.example.com/js/app.js.map [1 of 1]
135168 of 593498 22% in 2s 62.86 kB/s failed
WARNING: Upload failed: /js/app.js.map ([Errno 104] Connection reset by peer)
WARNING: Retrying on lower speed (throttle=0.05)
I tried increasing my kernel TCP buffers as well, and enabling/disabling HTTPS in my .s3cfg doesn't help. I have no real idea where to go from here...
EDIT: well, I wasn't able to reproduce it a few hours later. Maybe it was related to some of the permissions on the S3 bucket...
This issue should now be fixed in master. Please give it a try.
I had this issue too with a 2MB database backup file from Cape Town to Ireland. Using the latest code in the master branch fixed it today.
Will you (the repository owners) please publish this fix to the SourceForge release (https://sourceforge.net/projects/s3tools/files/s3cmd), or, if you no longer maintain it, close the SourceForge page?
Otherwise it's confusing for people who download the 'latest' release from a year ago and encounter bugs. Our team spent a day investigating our cron and network settings before we found this issue.
([Errno 104] Connection reset by peer)
on Ireland.
US - no problem.
4 years, no solution, cheers.
@kac- a similar error message does not mean it is the same issue.
Did you try the latest MASTER version? Can you give us a debug log (run the command with "-d") for the case that fails (Ireland?)?
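For example (file and bucket names here are hypothetical), the debug output can be captured to a file for attaching to the issue:

```shell
# -d enables s3cmd's debug output; redirect stderr to a log file.
s3cmd -d put file.ext s3://mybucket/ 2> s3cmd-debug.log
```

Remember to redact your access keys and any signed URLs before posting the log.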
I'm currently automating a build script to push resources to our S3 bucket. Nothing too complicated, and I had done most of the testing out of the office, but as soon as I got in, the whole thing started to fall apart, and I'm thoroughly confused as to why.
A simple command like the following (with 'mybucket' existing on S3 and 'file.ext' existing in the directory I'm running the command from),
s3cmd put file.ext s3://mybucket/
was failing with either
[Errno 104] Connection reset by peer
or
[Errno 32] Broken pipe
I know there's an issue with S3 for files over 5 GB in size, but these files are nowhere near that: they're less than 1 MB, let alone 1 GB. The really weird thing was that another program, Bucket Explorer (http://www.bucketexplorer.com/), worked perfectly, doing the exact same operations on the same network.
Weirder still, to test everything out I tethered my laptop to my phone's 3G connection, and straight away everything worked perfectly again; when I got home and tested the commands there, it also worked perfectly.
Any idea as to what might be causing this error on our work network, with s3cmd, but not Bucket Explorer?