yibaini / memcached

Automatically exported from code.google.com/p/memcached

memcached refuses to recv data if the client sends too much data without recving #384

Closed (GoogleCodeExporter closed this issue 8 years ago)

GoogleCodeExporter commented 8 years ago
(I’m not sure whether it is a bug or a feature.)

### What steps will reproduce the problem?
1. Start a memcached server on port 11211.
2. Download the snippet https://gist.github.com/mckelvin/6aaf1d14e7866719a9bc and make sure Python (2) is available.
3. Run `python memcached_reproduce.py`.

### What is the expected output? What do you see instead?
The reproduction script is expected to exit 0. If the bug occurs, the Python process
instead sits idle and never returns (there is no timeout).

### What version of the product are you using? On what operating system?
It CANNOT be reproduced on versions 1.4.5 (Gentoo), 1.4.13 (Ubuntu), and 1.4.15 (OS X);
it CAN be reproduced on versions 1.4.17 (Gentoo) and 1.4.20 (OS X).
Not all of the versions between 1.4.5 and 1.4.20 have been tested yet, but I
guess it was introduced in 1.4.16 or 1.4.17 (if it's a bug).

### Please provide any additional information below.

The issue is in storage commands. The doc says:

> The client sends a command line, and then a data block; after that the client expects one line of response, which will indicate success or failure.
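
Concretely, that documented flow for a single storage command looks something like the sketch below (the host, port, key, and value here are placeholder assumptions):

```python
# One documented request/response cycle: a command line, a data block,
# then exactly one response line back from the server.
import socket

sock = socket.create_connection(("127.0.0.1", 11211))
sock.sendall(b"set foo 0 0 3\r\nbar\r\n")  # command line + data block
print(sock.recv(4096))                     # b"STORED\r\n" on success
sock.close()
```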

What if I send N (N > 1) storage commands at once, and then expect N lines of
response? That behaviour is not mentioned in the doc, and I'm not sure whether it
is acceptable. If it isn't, you may close this issue directly; otherwise this
should be a bug.

code to reproduce: https://gist.github.com/mckelvin/6aaf1d14e7866719a9bc
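
For readers who cannot fetch the gist, the pattern it exercises is roughly the following sketch (the value of N, the payload size, and the key names are assumptions, not the gist's actual contents):

```python
# Sketch of the pipelined-store pattern (not the actual gist): send N
# "set" commands back to back, and only afterwards read the N replies.
import socket

N = 10000                 # number of pipelined storage commands
payload = b"x" * 1024     # 1 KiB value per command

sock = socket.create_connection(("127.0.0.1", 11211))

# Build all N storage commands into one buffer and send them without
# reading any responses in between.
buf = b"".join(
    b"set key%d 0 0 %d\r\n%s\r\n" % (i, len(payload), payload)
    for i in range(N)
)
sock.sendall(buf)  # hangs here if the server has stopped reading

# Only after everything is sent do we try to read the N response lines.
resp = b""
while resp.count(b"\r\n") < N:
    chunk = sock.recv(65536)
    if not chunk:
        break
    resp += chunk
sock.close()
```

In this shape the hang happens inside sendall(): once the server stops reading, the client's socket buffer fills and the blocking send never completes.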

I know the client should be blamed for sending so much data while refusing to
receive anything, but the server doesn't keep this behaviour consistent between
these versions, and that sounds buggy.

Original issue reported on code.google.com by kelvin0...@gmail.com on 20 Nov 2014 at 3:29

GoogleCodeExporter commented 8 years ago
I'm *pretty* sure this affects all versions. I remember very old versions 
definitely being affected by this.

What changed are the clients; some of them were updated to allow reading some
response data if sending starts to fail. There's not much the daemon can do in
this situation (especially for things like binprot, where it has to immediately
write onto the wire). Are you accidentally testing different versions of
libmemcached as well?

Original comment by dorma...@rydia.net on 30 Nov 2014 at 12:41
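
(For illustration, the kind of client-side change described above might look like the sketch below; the helper name is hypothetical, and this is not libmemcached's actual code. The idea is to drain response bytes whenever they arrive instead of only after every send has finished.)

```python
# Sketch of a client that interleaves reads with writes: select() tells
# us when response data is available, so the server's write side never
# backs up far enough to stall the connection.
import select
import socket

def pipelined_send(sock, commands):
    sock.setblocking(False)
    out = b"".join(commands)
    responses = b""
    while out:
        readable, writable, _ = select.select([sock], [sock], [])
        if readable:
            # Draining replies keeps the server's write buffer from
            # filling, which is what stalls the whole connection.
            responses += sock.recv(65536)
        if writable:
            sent = sock.send(out)
            out = out[sent:]
    return responses
```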

GoogleCodeExporter commented 8 years ago
@dorma Actually, the version of libmemcached differs among the servers I tested,
so the given test code (https://gist.github.com/mckelvin/6aaf1d14e7866719a9bc)
does not use libmemcached at all. It's pure Python, and I believe it can easily
be ported to other languages.

At least this case is not affected:

```
➜ ~ uname -a
Linux ubuntu-test 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
➜ ~ echo -ne "version\r\n" | nc localhost 11211
VERSION 1.4.13
➜ ~ md5sum memcached_reproduce.py
7907ffe24977d7327c93dce170618828  memcached_reproduce.py
➜ ~ python memcached_reproduce.py
switch: append
VERSION 1.4.13

begin send
end send
begin recv
end recv
➜ ~
```

Original comment by kelvin0...@gmail.com on 30 Nov 2014 at 4:11

GoogleCodeExporter commented 8 years ago
You're not switching versions; you're switching whole operating systems around.

On my system your script doesn't reproduce on the latest branch, .17, or .13. 
If I add an extra zero to the number of commands being sent from the script, 
all of those versions will hang.

This is the same as it's always been; if the event loop can't write more 
responses, it can't parse more data inbound. It would have to start storing 
more output internally somewhere.

Original comment by dorma...@rydia.net on 1 Jan 2015 at 6:51
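
(To illustrate that last point, here is a toy Python sketch, not memcached's actual event loop: once a connection's buffered output crosses a cap, the loop stops watching that socket for reads, so nothing further inbound gets parsed until the client drains responses.)

```python
# Toy illustration of event-loop backpressure (NOT memcached's code):
# while a connection's pending output exceeds a cap, stop polling it
# for reads, so no further inbound commands are parsed.
import selectors
import socket

MAX_OUTBUF = 64 * 1024          # arbitrary cap on buffered responses
sel = selectors.DefaultSelector()
outbufs = {}                    # per-connection pending response bytes

def on_accept(listener, mask):
    conn, _ = listener.accept()
    conn.setblocking(False)
    outbufs[conn] = bytearray()
    sel.register(conn, selectors.EVENT_READ, on_io)

def on_io(conn, mask):
    buf = outbufs[conn]
    if mask & selectors.EVENT_READ:
        data = conn.recv(4096)
        if not data:
            sel.unregister(conn)
            del outbufs[conn]
            conn.close()
            return
        # Pretend every received line is a command and queue a reply.
        buf += b"STORED\r\n" * data.count(b"\r\n")
    if mask & selectors.EVENT_WRITE and buf:
        sent = conn.send(bytes(buf))
        del buf[:sent]
    # The key point: stop watching for reads while output is backed up.
    events = selectors.EVENT_WRITE if buf else 0
    if len(buf) < MAX_OUTBUF:
        events |= selectors.EVENT_READ
    sel.modify(conn, events, on_io)

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 11311))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, on_accept)

while True:
    for key, mask in sel.select():
        key.data(key.fileobj, mask)
```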