Jdesk / memcached

Automatically exported from code.google.com/p/memcached

Patch: connection queue simplification #62

Closed by GoogleCodeExporter 9 years ago

GoogleCodeExporter commented 9 years ago
While reading the memcached source to understand how it works, I found the
code for dispatching new connections, with its mutexes and signal pipe,
excessively complicated.

I rewrote the connection queue code (thread.c) to eliminate the connection
queue structure, replacing it with a simpler scheme: the connection info
itself is sent over the signal pipe.
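
The core of the idea, as a minimal sketch (dispatch_conn_new and thread_libevent_process are the real memcached functions; the field set here is illustrative, and the actual change is in the attached patch):

```c
#include <unistd.h>

/* Illustrative per-connection info, formerly carried by a queued CQ_ITEM. */
typedef struct {
    int sfd;            /* accepted client socket */
    int init_state;     /* initial connection state */
    int event_flags;    /* libevent flags for the new connection */
} conn_info;

/* Listening thread: instead of locking a queue, pushing a CQ_ITEM, and
 * writing a 1-byte wakeup, write the whole struct into the worker's pipe. */
static void dispatch_conn_new(int notify_send_fd, int sfd,
                              int init_state, int event_flags) {
    conn_info ci = { sfd, init_state, event_flags };
    /* sizeof(ci) < PIPE_BUF, so the kernel writes it atomically */
    (void)write(notify_send_fd, &ci, sizeof(ci));
}

/* Worker thread: libevent callback on the pipe's read end. */
static void thread_libevent_process(int notify_recv_fd) {
    conn_info ci;
    if (read(notify_recv_fd, &ci, sizeof(ci)) == sizeof(ci)) {
        /* conn_new(ci.sfd, ci.init_state, ci.event_flags, ...) */
    }
}
```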

P.S. The attached patch contains some abort() calls for testing purposes
under load. These abort() calls must be removed if the code is accepted.

Original issue reported on code.google.com by dmitry.i...@gmail.com on 3 Jul 2009 at 12:03

Attachments:

GoogleCodeExporter commented 9 years ago
in git format

Original comment by dmitry.i...@gmail.com on 13 Jul 2009 at 7:18

Attachments:

GoogleCodeExporter commented 9 years ago
Your change will cause clients to occasionally block on the pipes, I'm pretty sure?

Original comment by dorma...@rydia.net on 29 Aug 2009 at 10:24

GoogleCodeExporter commented 9 years ago
Yes, in the case of many simultaneous connections.

Client connections are accepted by the listening thread. The pipe is used
only to pass the client information on to the worker thread. A pipe has
limited capacity (4096 bytes on almost all systems; 65536 bytes on Linux
2.6). A write to the pipe blocks the listening thread only if the pipe is
full; with sizeof(CQ_ITEM) == 20, that is 204 items not yet read by the
worker thread (or 3276 items on Linux 2.6). Current memcached would block
once 4096 items are waiting unread by the worker thread.

Pipes can also work in non-blocking mode (with O_NONBLOCK enabled). In that
case memcached could reject connections when the write would block.
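
A minimal sketch of that non-blocking variant (illustrative, not the attached patch):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

typedef struct { int sfd, init_state, event_flags; } conn_info;  /* illustrative */

/* One-time setup: put the pipe's write end into non-blocking mode. */
static void make_notify_nonblocking(int notify_send_fd) {
    int flags = fcntl(notify_send_fd, F_GETFL, 0);
    fcntl(notify_send_fd, F_SETFL, flags | O_NONBLOCK);
}

/* Hand the connection to a worker, or reject it if the pipe is full.
 * Because sizeof(ci) <= PIPE_BUF, the write is all-or-nothing: it either
 * transfers the whole struct or fails with EAGAIN; no partial writes. */
static void dispatch_or_reject(int notify_send_fd, conn_info ci) {
    ssize_t n = write(notify_send_fd, &ci, sizeof(ci));
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        close(ci.sfd);   /* pipe full: drop the new client */
}
```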

Original comment by dmitry.i...@gmail.com on 30 Aug 2009 at 5:54

GoogleCodeExporter commented 9 years ago
I'd rather send a pointer to the CQ_ITEM over the pipe instead of the whole struct.

That would increase the number of connections that can be on the pipe and
reduce unnecessary copying of data.
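
A sketch of that variant (illustrative names; the item goes back to being heap-allocated):

```c
#include <stdlib.h>
#include <unistd.h>

typedef struct { int sfd, init_state, event_flags; } CQ_ITEM;  /* illustrative */

/* Listening thread: allocate the item, send only its address (4 or 8 bytes). */
static void dispatch_ptr(int notify_send_fd, CQ_ITEM proto) {
    CQ_ITEM *item = malloc(sizeof(*item));
    if (item == NULL)
        return;
    *item = proto;
    (void)write(notify_send_fd, &item, sizeof(item));
}

/* Worker thread: read the pointer back, create the connection, free it. */
static void receive_ptr(int notify_recv_fd) {
    CQ_ITEM *item;
    if (read(notify_recv_fd, &item, sizeof(item)) == sizeof(item)) {
        /* conn_new(item->sfd, item->init_state, item->event_flags, ...) */
        free(item);
    }
}
```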

Original comment by overfl...@gmail.com on 31 Aug 2009 at 10:24

GoogleCodeExporter commented 9 years ago
Copying 20 bytes of data on the stack is still faster than dynamic memory
allocation plus mutex locking. The size of the struct being sent can be
reduced to 8 bytes (using bit fields) if needed.
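
For example, an 8-byte layout along these lines (the field widths are illustrative, not from the patch):

```c
#include <assert.h>

/* Connection info packed into one 8-byte word via bit fields.
 * Widths are illustrative and must cover the real value ranges. */
typedef struct {
    unsigned sfd         : 32;  /* socket descriptor */
    unsigned init_state  : 8;   /* small enum of connection states */
    unsigned event_flags : 16;  /* EV_READ | EV_PERSIST, etc. */
    unsigned transport   : 8;   /* tcp / udp / unix socket */
} conn_info_packed;

int main(void) {
    /* 64 bits of fields; packs into 8 bytes on common ABIs. */
    assert(sizeof(conn_info_packed) == 8);
    return 0;
}
```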
And this patch does not decrease the number of connections that can be
served. It would only block some new connections (or reject them in
non-blocking mode) if clients arrive faster than the worker thread can read
from the pipe and insert the connections into the libevent queue.

Original comment by dmitry.i...@gmail.com on 31 Aug 2009 at 11:43

GoogleCodeExporter commented 9 years ago
I agree with removing the connection queue and the locking.

I was a little concerned that sending 20 bytes through a pipe can lead to
odd things like reading half of the struct from the pipe (if you read
without locking). Also, with 20 bytes per connection, only 204 items can be
in the pipe without blocking the writer. With 8 bytes (a 64-bit pointer),
512 new connections can be waiting on the pipe to be created.

If all the data can fit in 8 bytes, that's even better than sending a
pointer as I suggested.

Original comment by overfl...@gmail.com on 31 Aug 2009 at 2:27

GoogleCodeExporter commented 9 years ago
Sorry, I misunderstood.

Yes, a partial read could be a problem. I inserted an abort() for that case.

The struct is written to the pipe atomically, so libevent should signal the
worker only after the data is fully available for reading (in theory), and
the only thing that can lead to a partial read is a signal interrupt. That
can be fixed by looping until the full struct has been read.
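
That retry loop could look like this (a sketch; the attached patch uses abort() instead):

```c
#include <errno.h>
#include <unistd.h>

/* Read exactly len bytes from a blocking fd, restarting after signal
 * interrupts and short reads. Returns 0 on success, -1 on error or EOF. */
static int read_full(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n > 0) {
            p += n;
            len -= (size_t)n;
        } else if (n < 0 && errno == EINTR) {
            continue;            /* interrupted by a signal: retry */
        } else {
            return -1;           /* real error, or EOF mid-struct */
        }
    }
    return 0;
}
```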

Original comment by dmitry.i...@gmail.com on 31 Aug 2009 at 5:04

GoogleCodeExporter commented 9 years ago
If this patch would be interesting for memcached team, I can work on
1) reducing size of new_connection structure; or
2) replacing pipe by socketpair; or
3) making pipe non-blocking and rejecting new connection in dispatcher thread if
workers can't manage current client connection boost.
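
A sketch of option 2: a SOCK_DGRAM socketpair preserves message boundaries, so the worker can never read half a struct (illustrative code, under that assumption):

```c
#include <sys/socket.h>
#include <unistd.h>

typedef struct { int sfd, init_state, event_flags; } conn_info;  /* illustrative */

int main(void) {
    int fds[2];
    /* Datagram semantics: each write is delivered as one whole message. */
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds) < 0)
        return 1;

    conn_info out = { 42, 0, 0 };
    (void)write(fds[1], &out, sizeof(out));   /* dispatcher side */

    conn_info in;
    (void)read(fds[0], &in, sizeof(in));      /* worker side: all or nothing */

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```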

Original comment by dmitry.i...@gmail.com on 21 Sep 2009 at 8:04

GoogleCodeExporter commented 9 years ago
I don't think there's a bug here. It looks like something interesting could
come out of this, but there are a lot of things to consider when reducing
the complexity of a working system.

I wouldn't discourage work towards it, but consider the feedback given here.
It's probably better to have this kind of discussion on the mailing list.

Original comment by dsalli...@gmail.com on 29 Oct 2009 at 6:39