
Automatically exported from code.google.com/p/memcached

Append/Prepend to an existing key whose value would grow larger than 1MB and to a non-existing key both return the NOT_STORED error #151

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?

1. Append a value to an existing key so that the value grows larger than 1MB.
2. The error returned is NOT_STORED.
3. Appending a value to a non-existing key also returns NOT_STORED.

What is the expected output? What do you see instead?
We need a way to distinguish the two cases; otherwise we need a get(key) before every append().
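For illustration, a minimal sketch of the ambiguity, assuming libmemcached as the client library (an assumption on my part; this report does not name a client):

    /* Sketch only: assumes a memcached server on localhost:11211 and the
     * libmemcached client library. */
    #include <libmemcached/memcached.h>
    #include <stdio.h>

    int main(void) {
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "localhost", 11211);

        /* Case 1: append to a key that was never set. */
        memcached_return_t rc = memcached_append(memc, "missing", 7,
                                                 "x", 1, 0, 0);
        printf("append to missing key: %s\n", memcached_strerror(memc, rc));

        /* Case 2: an append that would push an existing value past the
         * 1MB item size limit fails with the very same code
         * (MEMCACHED_NOTSTORED), so the two cases cannot be told apart. */

        memcached_free(memc);
        return 0;
    }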

What version of the product are you using? On what operating system?
1.4.5 on Linux

Please provide any additional information below.

In do_store_item(), the out-of-memory path returns NOT_STORED:

    new_it = do_item_alloc(key, it->nkey, flags, old_it->exptime,
                           it->nbytes + old_it->nbytes - 2 /* CRLF */);

    if (new_it == NULL) {
        /* SERVER_ERROR out of memory */
        if (old_it != NULL)
            do_item_remove(old_it);

        return NOT_STORED;
    }

I think "append/prepend" should return the same error that "set/replace/update"
returns for a blob larger than 1MB:

"SERVER_ERROR object too large for cache"

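A rough sketch of what that could look like inside do_store_item(); the names follow the 1.4.x source quoted above, but TOO_LARGE is an invented enum value and this is not an actual patch:

    /* Hypothetical sketch, not a real patch: distinguish the outcomes so
     * the protocol layer can report them separately. */
    if (comm == NREAD_APPEND || comm == NREAD_PREPEND) {
        if (old_it == NULL) {
            return NOT_FOUND;   /* key does not exist */
        }
        if (it->nbytes + old_it->nbytes - 2 > settings.item_size_max) {
            /* Combined value would exceed the item size limit; the caller
             * would then emit "SERVER_ERROR object too large for cache".
             * TOO_LARGE is an invented enum value. */
            return TOO_LARGE;
        }
        /* After these checks, a NULL from do_item_alloc() really does
         * mean out of memory. */
    }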
Original issue reported on code.google.com by tru64ufs@gmail.com on 19 Aug 2010 at 12:21

GoogleCodeExporter commented 9 years ago
The problem with this is that other clients may be expecting different 
behavior.  There have been problems in the past with changing responses.  I 
don't know enough to anticipate how it would affect most of the clients.

I think this makes sense to do in 1.6 with an envvar to get the old behavior if 
needed.

In terms of addressing your current needs with 1.4.5, perhaps the client could
fall back to doing the append on the client side to see whether it's a memory issue or
another kind of issue.

Original comment by ingen...@gmail.com on 19 Aug 2010 at 6:11

GoogleCodeExporter commented 9 years ago
I understand this would change clients' expectations.

The problem with 1.4.5 is that the client trims the value to an appropriate size on get(),
not on append().

The client simply does not distinguish between a non-existing key and an existing
key on append, so the operation starts with append() regardless
of the key's existence. Doing a get() on every append() increases latency, and although
it may be rare, get() does not guarantee the key's existence (there is no atomicity across
multiple operations).

The client only trims the value to the appropriate size on get().
I think we would need to change this behavior in a major upgrade because of the
backward-compatibility issue.

Thank you.

Original comment by tru64ufs@gmail.com on 19 Aug 2010 at 11:36

GoogleCodeExporter commented 9 years ago
Are you trying to ensure that the key doesn't go over 1MB, or that you append 
against something that already exists?

Memcached removes the item if it fails to append to it, so your best bet is to
actually run 'add' if append fails for any reason. If the 'add' fails, you
lost the race for recreating the item and you can decide either to run append
again (depending on what your app is doing here...) or to give up.
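A sketch of that append/add routine, using libmemcached purely for illustration (the library choice and the append_or_add name are my assumptions, not part of the thread):

    #include <libmemcached/memcached.h>
    #include <string.h>

    /* Try to append; on NOT_STORED, try to create the key; if someone
     * else recreated it first, retry the append once. */
    memcached_return_t append_or_add(memcached_st *memc,
                                     const char *key, const char *value) {
        memcached_return_t rc = memcached_append(memc, key, strlen(key),
                                                 value, strlen(value), 0, 0);
        if (rc != MEMCACHED_NOTSTORED)
            return rc;                 /* stored, or some hard error */

        /* The key may not exist: try to create it. */
        rc = memcached_add(memc, key, strlen(key),
                           value, strlen(value), 0, 0);
        if (rc == MEMCACHED_SUCCESS)
            return rc;                 /* we created the item ourselves */

        if (rc == MEMCACHED_NOTSTORED || rc == MEMCACHED_DATA_EXISTS) {
            /* Lost the race: someone recreated the key first. Retry the
             * append once, or give up, depending on the application. */
            rc = memcached_append(memc, key, strlen(key),
                                  value, strlen(value), 0, 0);
        }
        return rc;
    }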

I never noticed the issue here since we already use the append/add routine.

Original comment by dorma...@rydia.net on 20 Aug 2010 at 12:50

GoogleCodeExporter commented 9 years ago
Dormando.

Here's what I am trying to do.

Workload: many append()/prepend() operations, few reads.

1. Append against a non-existing key (allowed): the error is ignored, and there is no need to check
for the key's existence.
     We only create a key on a DB read operation.
     We are trying not to create too many keys, so to simplify the logic there are many append operations.

2. But once a key has been created, and unless there is another get()
operation, we only append.
     Since there are so many appends (against both non-existing and existing keys), we cannot do a
     get() on every append() NOT_STORED failure.
     Again, a NOT_STORED failure is OK for a non-existing key.

3. Values only get trimmed on get().

The problem with this is that once a key has been created, and the workload
requires lots of appends and few gets, when we get NOT_STORED we do not know
whether the append() failure was due to a non-existing key or to out-of-memory.
Since there is no need to check for existence when the key does not exist, we
cannot afford a get() after each append() failure. If we did, we would simply be
adding 1 RTT to each append operation, which doubles the response time. It could
be much worse if the value is large (close to 1MB).

Memcached does NOT remove the item when it fails to append/prepend with an
out-of-memory error. It simply returns NOT_STORED.

Thank you, Dormando.

Original comment by tru64ufs@gmail.com on 20 Aug 2010 at 1:40

GoogleCodeExporter commented 9 years ago
Looks like you have the one edge case that's annoyed by this. Thanks for 
explaining it.

If you need to use a workaround, I'd still suggest something like add. You
still throw away an RTT, but you don't fetch a lot of data back.

Then you're somewhat stuck with appending to keys you don't want to append 
to... You could mitigate it a bit by having the "add" add a key with a timeout 
of 1s, making it less likely that it'll fill to the brim with data.
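As a sketch of that mitigation (again assuming libmemcached; add_placeholder is an invented helper name):

    #include <libmemcached/memcached.h>
    #include <string.h>

    /* Create the key with a 1-second expiration and a zero-length value,
     * so an unwanted append target expires almost immediately. */
    void add_placeholder(memcached_st *memc, const char *key) {
        memcached_add(memc, key, strlen(key), "", 0, /* expiration */ 1, 0);
    }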

Your case does seem a little weird though. If it's failing to append due to
hitting the item size limit, you just leave it at 1MB? You don't trim it or want
to rewrite it? In most apps I run, a failure due to an overlarge value or a
nonexistent key is a good reason to re-add a compacted item.

This kinda makes me want to add a new command... something like "-X" which 
enables protocol fixes, with the caveat that we can add protocol fixes in every 
release. A method to bridge the gap between major releases, perhaps?

Original comment by dorma...@rydia.net on 20 Aug 2010 at 2:09

GoogleCodeExporter commented 9 years ago
Dormando.

Very good question. We actually use prepend(), not append().
That means we are using memcached as a FIFO queue: only the oldest items need to be
discarded.
If we leave the value at 1MB, then we are stuck with old items in the queue.
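For illustration, a sketch of that trim-on-get pattern (libmemcached assumed; MAX_QUEUE_BYTES and get_and_trim are invented names): since prepend() puts the newest data at the front, trimming means dropping the tail.

    #include <libmemcached/memcached.h>
    #include <string.h>

    #define MAX_QUEUE_BYTES (512 * 1024)   /* stay well under the 1MB limit */

    /* Fetch the queue and, if it grew too large, write back only the
     * newest MAX_QUEUE_BYTES bytes. The caller free()s the returned
     * buffer. Note: the set is not atomic with concurrent prepends, and
     * a real implementation would trim at record boundaries, not at a
     * raw byte offset. */
    char *get_and_trim(memcached_st *memc, const char *key, size_t *len) {
        uint32_t flags;
        memcached_return_t rc;
        char *value = memcached_get(memc, key, strlen(key), len, &flags, &rc);
        if (value == NULL)
            return NULL;

        if (*len > MAX_QUEUE_BYTES) {
            *len = MAX_QUEUE_BYTES;
            memcached_set(memc, key, strlen(key), value, *len, 0, flags);
        }
        return value;
    }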

I will have to play with "add" then.
Thanks a lot.

Original comment by tru64ufs@gmail.com on 20 Aug 2010 at 2:19

GoogleCodeExporter commented 9 years ago
going to close this... not sure why it was left open.

Original comment by dorma...@rydia.net on 15 Apr 2011 at 7:29