andriybobyr / s3backer

FUSE-based single file backing store via Amazon S3
GNU General Public License v2.0

Repeated 400 Bad Request errors #3

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Quite often, after a few transient errors (an HTTP timeout, a 500), s3backer
starts generating "400 Bad Request" errors and locks up in that condition
(until the retry timeout expires and the give-up message appears).

Using tcpdump, I have found the same pattern every time:

(Some network error occurs first; unplugging the cable is enough.)

20:04:35.281374 IP macbook.58450 > s3.amazonaws.com.http: tcp 365
E...4.@.@.'....eH....R.Pc....e..P...Ss..PUT /du-backup3/macos0000080f HTTP/1.1
Us
20:04:35.603823 IP s3.amazonaws.com.http > macbook.58450: tcp 25
E(.A..@...W.H......e.P.R.e..c...P..&JD..HTTP/1.1 100 Continue

20:04:55.613733 IP s3.amazonaws.com.http > macbook.58450: tcp 630
E(....@...U.H......e.P.R.e.-c...P..&R...HTTP/1.1 400 Bad Request
x-amz-request-id
20:04:55.614898 IP s3.amazonaws.com.http > macbook.58450: tcp 5
H......e.P.R.e..c...P..&9...0

And these messages repeat until the retry timeout.

It looks like s3backer starts a PUT request, S3 answers "100 Continue",
nothing happens for 20 seconds, and then S3 returns "400 Bad Request".
s3backer complains in syslog, waits, and repeats the same pattern.
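
For what it's worth, the trace is consistent with a retry loop that reuses a
libcurl handle without rewinding the request body: the retry resends the
headers (libcurl adds Expect: 100-continue for uploads), the read callback
has nothing left to hand over, and after 20 seconds S3 gives up waiting for
the body and returns 400. A minimal sketch of that pattern and the rewind
that avoids it; the names (upload_buf, read_cb, put_with_retry) are
hypothetical, not s3backer's actual code:

    /* Hypothetical sketch: retrying a PUT whose body is fed to libcurl
     * through a read callback.  If buf->offset is not reset before each
     * attempt, the retry sends headers but never the body, matching the
     * "100 Continue" followed by "400 Bad Request" trace above. */
    #include <curl/curl.h>
    #include <string.h>

    struct upload_buf {
        const char *data;
        size_t size;
        size_t offset;              /* bytes already handed to libcurl */
    };

    static size_t
    read_cb(char *ptr, size_t size, size_t nmemb, void *arg)
    {
        struct upload_buf *buf = arg;
        size_t remain = buf->size - buf->offset;
        size_t take = size * nmemb < remain ? size * nmemb : remain;

        memcpy(ptr, buf->data + buf->offset, take);
        buf->offset += take;
        return take;
    }

    static CURLcode
    put_with_retry(CURL *curl, struct upload_buf *buf, int max_tries)
    {
        CURLcode rc = CURLE_FAILED_INIT;

        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
        curl_easy_setopt(curl, CURLOPT_READDATA, buf);
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE,
            (curl_off_t)buf->size);
        for (int attempt = 0; attempt < max_tries; attempt++) {
            buf->offset = 0;    /* the crucial rewind before each try */
            if ((rc = curl_easy_perform(curl)) == CURLE_OK)
                break;
            /* back off here before the next attempt ... */
        }
        return rc;
    }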

This happens with s3backer 1.0.4 on Mac OS X 10.5.4.

s3backer connect string:

s3backer --prefix=macos --size=75M --filename=<local-file>
--maxRetryPause=5000000 -o daemon_timeout=3600 <bucket> <local-dir>

I am writing the file with dd:
dd if=<another local file> of=<local-file on s3backer> bs=4096

tcpdump was called like this:
tcpdump -i en1 -A -q 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
((tcp[12]&0xf0)>>2)) != 0)'
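
(That filter keeps only TCP segments that carry payload: ip[2:2] is the IP
total length, (ip[0]&0xf)<<2 the IP header length, and (tcp[12]&0xf0)>>2 the
TCP header length, so the expression is the TCP payload length, and "!= 0"
drops the bare ACKs.)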

Original issue reported on code.google.com by dimaulu...@gmail.com on 10 Jul 2008 at 12:21

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 10 Jul 2008 at 3:31

GoogleCodeExporter commented 8 years ago
I have reproduced this error and it seems to be fixed by the attached patch.
Please test it out and confirm it fixes the problem for you too. Thanks for
the bug report.

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 1:52

Attachments:

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 2:21

GoogleCodeExporter commented 8 years ago
Not quite fixed.

Yes, the looping 400 errors no longer happen.

But sometimes the block where the failure happened is not actually updated,
and some kind of raw S3 response goes into the log (with an error that is
then ignored).

The bucket is empty and I am writing with dd.

Look at the log:

2008-07-11 15:14:59 INFO: retrying query (attempt #19): PUT
http://s3.amazonaws.com/du-backup4/macos0000003a
<Error><Code>AccessDenied</Code><Message>Access
Denied</Message><RequestId>A0A999C258EC4C0A</RequestId><HostId>qvZCP8r8gna5Cq9Xq
fuJbm9ysFyDmNbRkRKxg7I6DtjTBdGKDka0QuW65iCNGHV9</HostId></Error><?xml
version="1.0" encoding="UTF-8"?>
2008-07-11 15:14:59 DEBUG: success: PUT 
http://s3.amazonaws.com/du-backup4/macos0000003a
   WRITE[0] 4096 bytes
   unique: 15, error: 0 (Unknown error: 0), outsize: 24
unique: 15, opcode: WRITE (16), nodeid: 2, insize: 4160

And after that there is no macos0000003a block in the bucket at all.

The same thing happens with a full bucket: the file system thinks the block
has been updated, but it is not.
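
That log reads as if the PUT is declared a success whenever the transfer
itself completes, even though S3 answered with an AccessDenied error body.
A minimal sketch of the missing check, assuming libcurl is driving the
request (hypothetical code, not the actual fix):

    /* Hypothetical check: a completed transfer (CURLE_OK) can still
     * carry an S3 error document.  Inspect the HTTP status code before
     * reporting the block as written. */
    long http_code = 0;

    if (curl_easy_perform(curl) == CURLE_OK) {
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &http_code);
        if (http_code < 200 || http_code >= 300)
            return -EIO;    /* e.g. 403 AccessDenied: block NOT stored */
    }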

Original comment by dimaulu...@gmail.com on 11 Jul 2008 at 7:30

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 2:45

GoogleCodeExporter commented 8 years ago
My apologies, the previous fix was incomplete. I have committed a better fix
in r92. Please try it out.

Note: the changes include a change to configure.ac, which requires
regenerating the autofoo stuff (use autogen.sh). Alternatively, you can try
this by simply downloading the latest version of s3backer.c from
http://s3backer.googlecode.com/svn/trunk/s3backer.c and then manually adding
"-fnested-functions" to CFLAGS (if using Mac OS).

Original comment by archie.c...@gmail.com on 11 Jul 2008 at 4:34

GoogleCodeExporter commented 8 years ago
Yes. Now it works better.
But now I can't start it without the "-d -f" flags: on the first request
s3backer crashes.
Crash report included.

Original comment by dimaulu...@gmail.com on 12 Jul 2008 at 12:55

Attachments:

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 12 Jul 2008 at 2:12

GoogleCodeExporter commented 8 years ago
This time the problem is that Mac OS barfs on GCC nested functions. I've
refactored s3backer.c to not use them in r98. Please give it a try. Thanks.
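
For context: nested functions are a GCC extension where one function is
defined inside another and can reference the enclosing frame; taking the
inner function's address makes GCC build a trampoline on the stack, which
platforms with non-executable stacks reject. A tiny illustration of the
construct (not s3backer's code, just the shape of what r98 removes):

    /* GCC extension: cmp is defined inside sort_blocks and captures
     * "descending" from the enclosing frame.  Passing &cmp to qsort()
     * needs a stack trampoline, which is what Mac OS chokes on. */
    #include <stdlib.h>

    static void
    sort_blocks(int *blocks, size_t num, int descending)
    {
        int cmp(const void *a, const void *b) {
            int diff = *(const int *)a - *(const int *)b;
            return descending ? -diff : diff;
        }
        qsort(blocks, num, sizeof(*blocks), cmp);
    }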

Original comment by archie.c...@gmail.com on 13 Jul 2008 at 9:59

GoogleCodeExporter commented 8 years ago
Looks good! Thanks!

Original comment by dimaulu...@gmail.com on 15 Jul 2008 at 2:48

GoogleCodeExporter commented 8 years ago

Original comment by archie.c...@gmail.com on 15 Jul 2008 at 3:03