bigdata4u / spymemcached

Automatically exported from code.google.com/p/spymemcached

read/write queues are not being cleared in case of a faulty server:port; proposed a fix #310


GoogleCodeExporter commented 8 years ago
What version of the product are you using? On what operating system?

2.10.2 (but the behavior should be the same in all versions)

Red Hat Enterprise Linux Server release 5.8 (Tikanga)

Tell me more...

We hit an issue where we created a spymemcached client against a non-existent host:port. When we then drove traffic at the system (both reads and writes), the internal read and write blocking queues grew steadily; eventually all memory was used up, causing the system to go down.
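The growth can be reproduced independently of memcached: if producers keep enqueuing operations into an unbounded queue while nothing drains it (because the server is unreachable), the queue grows without limit. A minimal plain-Java sketch of the difference between an unbounded and a bounded operation queue (queue names and sizes here are illustrative, not spymemcached internals):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueGrowthSketch {
    public static void main(String[] args) {
        // Unbounded queue: every offer() succeeds, so with no consumer
        // (a dead server) the queue grows until memory is exhausted.
        BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100_000; i++) {
            unbounded.offer("op-" + i);
        }
        System.out.println("unbounded size = " + unbounded.size());

        // Bounded queue: offer() returns false once capacity is reached,
        // letting the caller fail fast instead of piling up operations.
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(1_000);
        int rejected = 0;
        for (int i = 0; i < 100_000; i++) {
            if (!bounded.offer("op-" + i)) {
                rejected++;
            }
        }
        System.out.println("bounded size = " + bounded.size()
                + ", rejected = " + rejected);
    }
}
```

This is why, absent cleanup on timeout, a client pointed at a dead server:port accumulates operations indefinitely.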

For our use case (when any operation times out, we can clear the queues), I added a patch to 2.10.2 that removes the operation from the queues on timeout. Below is the modified code.

Can you please have a look and advise.

net/spy/memcached/protocol/TCPMemcachedNodeImpl.java

  /** Remove a timed-out operation from this node's read queue. */
  public final Operation removeReadOp(Operation op) {
    readQ.remove(op);
    return op;
  }

  /** Remove a timed-out operation from this node's write queue. */
  public final Operation removeWriteOp(Operation op) {
    writeQ.remove(op);
    return op;
  }

Invoked from:
net/spy/memcached/internal/OperationFuture.java

  public T get(long duration, TimeUnit units) throws InterruptedException,
      TimeoutException, ExecutionException {

    .....

    if (op.getState() == OperationState.READING) {
      op.getHandlingNode().removeReadOp(op);
    } else if (op.getState() == OperationState.WRITING
        || op.getState() == OperationState.WRITE_QUEUED) {
      op.getHandlingNode().removeWriteOp(op);
    }

    .....

    throw new CheckedOperationTimeoutException(
        "Timed out waiting for operation", op);

Attached is the source jar file.

Thanks,
Siva

Original issue reported on code.google.com by sivabas...@gmail.com on 16 Sep 2014 at 7:51
