FYI, this might be fixed by one of the patches we have for serf in
mod_pagespeed (I plan to clean them up and submit them at some point...):
--- src/third_party/serf/src/buckets/allocator.c	2011-05-06 16:22:56.220366814 -0400
+++ src/third_party/serf/src/buckets/allocator.c	2011-05-06 11:08:31.684199000 -0400
@@ -83,6 +83,7 @@
 struct serf_bucket_alloc_t {
     apr_pool_t *pool;
     apr_allocator_t *allocator;
+    int own_allocator;
 
     serf_unfreed_func_t unfreed;
     void *unfreed_baton;
@@ -106,6 +107,9 @@
     if (allocator->blocks) {
         apr_allocator_free(allocator->allocator, allocator->blocks);
     }
+    if (allocator->own_allocator == 1) {
+        apr_allocator_destroy(allocator->allocator);
+    }
 
     return APR_SUCCESS;
 }
@@ -119,10 +123,12 @@
 
     allocator->pool = pool;
     allocator->allocator = apr_pool_allocator_get(pool);
+    allocator->own_allocator = 0;
     if (allocator->allocator == NULL) {
         /* This most likely means pools are running in debug mode, create our
          * own allocator to deal with memory ourselves */
         apr_allocator_create(&allocator->allocator);
+        allocator->own_allocator = 1;
     }
 
     allocator->unfreed = unfreed;
     allocator->unfreed_baton = unfreed_baton;
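In isolation, the ownership rule the patch encodes looks roughly like this (a minimal standalone sketch against the plain APR pool API; the wrapper type and function names here are illustrative, not serf's actual code):

/* Sketch of the own-allocator pattern: remember whether we created the
 * allocator ourselves, and only destroy it in that case. */
#include <apr_pools.h>
#include <apr_allocator.h>

typedef struct {
    apr_pool_t *pool;
    apr_allocator_t *allocator;
    int own_allocator;               /* nonzero iff we created it */
} alloc_wrapper_t;

static apr_status_t alloc_wrapper_cleanup(void *data)
{
    alloc_wrapper_t *w = data;

    /* Destroy the allocator only if we created it ourselves; a
     * pool-owned allocator is torn down by the pool. */
    if (w->own_allocator) {
        apr_allocator_destroy(w->allocator);
    }
    return APR_SUCCESS;
}

static alloc_wrapper_t *alloc_wrapper_create(apr_pool_t *pool)
{
    alloc_wrapper_t *w = apr_palloc(pool, sizeof(*w));

    w->pool = pool;
    w->allocator = apr_pool_allocator_get(pool);
    w->own_allocator = 0;
    if (w->allocator == NULL) {
        /* Pool debugging mode: the pool exposes no allocator, so we
         * create our own and must remember to destroy it. */
        apr_allocator_create(&w->allocator);
        w->own_allocator = 1;
    }
    apr_pool_cleanup_register(pool, w, alloc_wrapper_cleanup,
                              apr_pool_cleanup_null);
    return w;
}

The flag costs one int and removes any guessing at cleanup time about who owns the allocator.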
Original comment by morlov...@google.com on 8 Jun 2011 at 6:24
My debugging was leading me down the same road -- to the custom handling of the
allocator when pool debugging was enabled. I'll give your patch a try.
Original comment by cmpilato on 9 Jun 2011 at 7:10
No effect at all. And actually, now that I read the patch, I'm not surprised.
I *think* the SEGFAULT is due to a doubled attempt to clean up the allocator,
not a missing one. This patch seems to add yet another destruction attempt on
the allocator.
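To illustrate the hazard (a sketch only, not serf code): if both a pool cleanup and serf's own cleanup call apr_allocator_destroy() on the same allocator, the second call walks freed memory. One generic defensive pattern is to clear the pointer after the first destroy:

#include <stddef.h>
#include <apr_allocator.h>

/* Illustrative helper: makes a second destroy attempt a harmless no-op. */
static void destroy_allocator_once(apr_allocator_t **allocator)
{
    if (*allocator != NULL) {
        apr_allocator_destroy(*allocator);
        *allocator = NULL;
    }
}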
Original comment by cmpilato on 9 Jun 2011 at 7:18
You are right; I misremembered the patch's rationale: it was meant to be a leak
fix. Please pardon my screw-up.
I think the issue may be that the allocator gets cleaned up before the
bucket/requests...
Original comment by morlov...@google.com on 9 Jun 2011 at 7:37
r1488 fixes a segfault on pool cleanup, though one with a slightly different
stacktrace. The actual problem is that the response bucket is destroyed twice:
first when the respool allocator is cleaned up, and then again in
cancel_request when the connection pool is cleaned up (respool is a child of
conn->pool, so it gets cleaned up first).
This root cause can very well produce the stacktrace in this issue, but in my
simulations on Windows I always got the one from
http://subversion.tigris.org/issues/show_bug.cgi?id=3917
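The destruction order involved can be reproduced with plain APR pools; below is a minimal sketch under APR's documented pool semantics (the names conn_pool and respool are just labels standing in for this issue's pools):

/* Minimal demonstration of APR pool teardown order: child pools are
 * destroyed before their parent's cleanups run, so anything freed via
 * the child must not be touched again by the parent's cleanup. */
#include <stdio.h>
#include <apr_general.h>
#include <apr_pools.h>

static apr_status_t note(void *data)
{
    printf("cleanup: %s\n", (const char *)data);
    return APR_SUCCESS;
}

int main(void)
{
    apr_pool_t *conn_pool, *respool;

    apr_initialize();
    apr_pool_create(&conn_pool, NULL);
    apr_pool_create(&respool, conn_pool);    /* child of conn_pool */

    apr_pool_cleanup_register(conn_pool, "conn_pool (cancel_request)",
                              note, apr_pool_cleanup_null);
    apr_pool_cleanup_register(respool, "respool (response bucket)",
                              note, apr_pool_cleanup_null);

    /* Prints the respool line first: the child is torn down before the
     * parent's own cleanups fire. */
    apr_pool_destroy(conn_pool);

    apr_terminate();
    return 0;
}

So by the time cancel_request runs during the connection pool's teardown, the response bucket allocated from respool is already gone.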
Closing as "Fixed"; let us know if the problems persist.
Thanks for the detailed report!
Original comment by lieven.govaerts@gmail.com on 16 Jun 2011 at 5:51
I can confirm that this seems to fix the problem.
Original comment by cmpilato on 16 Jun 2011 at 5:56
By the way, I've filed issue 79 to track the patch previously posted to this
issue (or rather, my modified version thereof) so it doesn't get lost here.
Original comment by cmpilato on 21 Jun 2011 at 4:12
Original issue reported on code.google.com by cmpilato on 8 Jun 2011 at 6:17