I have a suspicion that we would be seeing mass segfaults if this were true?
I'm going to close the bug. If you'd like to reopen, please submit a more
detailed bug report, ideally with a runnable test case.
Original comment by dorma...@rydia.net
on 8 Aug 2011 at 5:47
nah, IIRC it's gonna be a memory leak. but who cares in userspace ;)
Original comment by gmi...@gmail.com
on 8 Aug 2011 at 5:54
We do actually care, but I'm too overloaded to go generate tests to prove that
the issue is the way you say it is (for all of these bug reports).
If you feel like it's a genuine problem and wouldn't mind spending 20 minutes
submitting something I can easily use to verify a fix, we would be greatly
appreciative.
Original comment by dorma...@rydia.net
on 8 Aug 2011 at 6:38
sure, will see what i can do... we need to spend some time updating the
fusion-io patch for 1.4.6 anyway.
Original comment by gmi...@gmail.com
on 8 Aug 2011 at 6:54
actually it's not a mem leak.. but an edge case when using CAS and the size gets
close to a slab boundary (promoted to the next slab class) .. or, if close to the max
size, it might show some bad behavior (first getting cleared to fit in the
biggest slab, then getting rejected .. or worse, losing data?)
in do_item_alloc() CAS is accounted for to determine slab class:
    size_t ntotal = item_make_header(nkey + 1, flags, nbytes, suffix, &nsuffix);
    if (settings.use_cas) {
        ntotal += sizeof(uint64_t);
    }
    unsigned int id = slabs_clsid(ntotal);
yet in item_size_ok() we don't add those 8 bytes to the picture:
    return slabs_clsid(item_make_header(nkey + 1, flags, nbytes,
                                        prefix, &nsuffix)) != 0;
not sure how to generate a test case.. maybe just need to present a KV that is
close in size to the edge of 1MB - 4B ? is CAS normally off (in which case
this is masked..)?
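[Editor's note] A minimal, self-contained sketch of the fix being described: mirror do_item_alloc()'s CAS accounting inside item_size_ok(). This is hypothetical illustration, not memcached's actual patch; `settings`, `item_make_header`, and `slabs_clsid` are stand-ins with made-up sizes (48-byte item header, 8-byte suffix, 1MB slab ceiling):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* stand-in for memcached's global settings */
struct { bool use_cas; } settings = { true };

/* stand-in header calculation: fixed item overhead + key (nkey already
 * includes the trailing NUL here) + suffix + value bytes */
static size_t item_make_header(size_t nkey, int flags, int nbytes,
                               char *suffix, uint8_t *nsuffix) {
    (void)flags; (void)suffix;
    *nsuffix = 8;                           /* fake suffix length */
    return 48 + nkey + *nsuffix + (size_t)nbytes;  /* fake sizeof(item) = 48 */
}

/* stand-in: class id 0 means "too big for any slab class" */
static unsigned int slabs_clsid(size_t size) {
    return size <= 1024 * 1024 ? 1 : 0;     /* 1MB default max item size */
}

static bool item_size_ok(size_t nkey, int flags, int nbytes) {
    char prefix[40];
    uint8_t nsuffix;
    size_t ntotal = item_make_header(nkey + 1, flags, nbytes,
                                     prefix, &nsuffix);
    if (settings.use_cas) {
        ntotal += sizeof(uint64_t);         /* the 8 bytes that were missing */
    }
    return slabs_clsid(ntotal) != 0;
}
```

With these stand-in numbers, an item whose pre-CAS size lands within 8 bytes of the 1MB ceiling is now rejected by item_size_ok() consistently with do_item_alloc().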
Original comment by gmi...@gmail.com
on 8 Aug 2011 at 7:29
The only "problem" caused by this bug is that the "wrong" error message is
returned back to the client:
First we call:
    it = item_alloc(key, nkey, flags, realtime(exptime), vlen);
item_alloc accounts for CAS if it's used.
Then we check for the result, and if it failed we're going to try to check why:
    if (it == 0) {
        if (! item_size_ok(nkey, flags, vlen))
            out_string(c, "SERVER_ERROR object too large for cache");
        else
            out_string(c, "SERVER_ERROR out of memory storing object");
    }
I wouldn't call this a high-priority bug, but at the same time it isn't hard
to fix ...
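[Editor's note] To make the mismatch concrete, here is a toy calculation (the 48-byte overhead, 8-byte suffix, and 1MB ceiling are placeholders, not memcached's real layouts) showing how the allocation path and the error-path check disagree by exactly sizeof(uint64_t):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_ITEM (1024 * 1024)   /* default 1MB slab ceiling (placeholder) */

/* placeholder header calculation: fixed overhead + key + suffix + value */
static size_t header(size_t nkey, int nbytes) {
    return 48 + nkey + 8 + (size_t)nbytes;
}

/* item_size_ok()'s view: no CAS bytes */
static bool check_says_fits(size_t nkey, int nbytes) {
    return header(nkey, nbytes) <= MAX_ITEM;
}

/* do_item_alloc()'s view: CAS adds sizeof(uint64_t) */
static bool alloc_says_fits(size_t nkey, int nbytes) {
    return header(nkey, nbytes) + sizeof(uint64_t) <= MAX_ITEM;
}
```

For an item whose pre-CAS size lands 1-8 bytes under the ceiling, check_says_fits() is true while alloc_says_fits() is false, so the failed allocation gets misreported to the client as "out of memory" rather than "object too large".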
Original comment by trond.no...@gmail.com
on 8 Aug 2011 at 1:33
I pushed a fix for that, but I didn't write a unit test for it.. (part of the
problem is caused by the different "item size" in 32- and 64-bit environments,
but my perl skills are a bit limited ;-))
Original comment by trond.no...@gmail.com
on 8 Aug 2011 at 1:51
awesome, cheers Trond!
Original comment by gmi...@gmail.com
on 8 Aug 2011 at 5:15
Original issue reported on code.google.com by
gmi...@gmail.com
on 17 Dec 2009 at 8:06