GoogleCodeExporter closed this issue 9 years ago.
Forgot to add some details:
openSUSE 11.2 - 64Bit
proxy:~ # uname -a
Linux proxy 2.6.31.12-0.2-default #1 SMP 2010-03-16 21:25:39 +0100 x86_64
x86_64 x86_64 GNU/Linux
proxy:~ # squid -v
Squid Cache: Version LUSCA_HEAD-r14535
Original comment by renato.o...@gmail.com
on 22 Jun 2010 at 1:09
It's a long shot, but this may be related to a bug that I recently posted on
the Squid bugzilla system (seemingly caused by HTTP requests that do not
contain a path element):
* http://bugs.squid-cache.org/show_bug.cgi?id=2973
Check your logs for TCP_DENIED messages and see if they correspond to the
sudden increase in Squid / Lusca memory use.
I'd be interested to know if it is the same problem.
Original comment by arew...@googlemail.com
on 5 Jul 2010 at 6:02
Thank you for bringing this to my attention. I've fixed this in r14723. Please
try it and let me know!
Original comment by adrian.c...@gmail.com
on 8 Jul 2010 at 7:22
renato, have you verified the latest LUSCA_HEAD fixes the memory leak?
Original comment by adrian.c...@gmail.com
on 8 Aug 2010 at 1:58
[deleted comment]
Adrian,
Sorry for the slow response. In the following version it's still leaking; only
the cbdata type ID changed from 1013 to 1014...
proxy:~ # squid -v
Squid Cache: Version LUSCA_HEAD-r14733
configure options: '--prefix=/usr' '--sysconfdir=/etc/squid'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var'
'--libexecdir=/usr/sbin' '--datadir=/usr/share/squid' '--libdir=/usr/lib'
'--enable-largefiles' '--with-maxfd=65536' '--with-default-user=squid'
'--enable-storeio=aufs' '--enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads'
'--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-linux-tproxy4'
'--enable-snmp'
proxy:~ # squidclient mgr:mem
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD-r14733
Date: Fri, 20 Aug 2010 18:51:46 GMT
Content-Type: text/plain
Expires: Fri, 20 Aug 2010 18:51:46 GMT
X-Cache: MISS from proxy.itake.com.br
Via: 1.0 proxy.itake.com.br:3129 (Lusca/LUSCA_HEAD-r14733)
Connection: close
Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB) high (KB) (%num) (number)
2K Buffer (no-zero) 2048 2 4 1824 44.38 0 2 4 1824 100 1363980963
4K Buffer (no-zero) 4096 12787 51148 91952 187.85 1 12787 51148 91952 100 206672603
8K Buffer (no-zero) 8192 67 536 1480 44.38 0 67 536 1480 100 96770213
16K Buffer (no-zero) 16384 1 16 288 92.91 0 1 16 288 100 36019
32K Buffer (no-zero) 32768 1 32 96 262.93 0 1 32 96 100 271
64K Buffer (no-zero) 65536 0 0 64 196.91 0 0 0 64 -1 2
Short Strings (no-zero) 36 2379127 83642 85951 1.27 1 2379127 83642 85951 100 17090917365
Medium Strings (no-zero) 128 386717 48340 48484 0.27 1 386717 48340 48484 100 2796019360
Long Strings (no-zero) 512 137155 68578 68947 1.20 1 137155 68578 68947 100 559146338
event 48 9 1 3 100.96 0 9 1 3 100 26854698
close_handler 24 15895 373 663 187.84 0 15895 373 663 100 840799863
acl 64 11 1 1 269.28 0 11 1 1 100 11
acl_ip_data 24 14 1 1 269.28 0 14 1 1 100 14
acl_list 24 21 1 1 269.28 0 21 1 1 100 21
dwrite_q 48 0 0 1 269.28 0 0 0 1 -1 116084250
FwdServer 24 1580 38 99 100.96 0 1580 38 99 100 264002569
HttpReply 168 19673 3228 4688 171.92 0 19673 3228 4688 100 818677672
mem_node (no-zero) 4112 65669 263703 267509 91.90 4 65669 263703 267509 100 1919711465
StoreEntry 88 1229505 105661 129365 267.76 2 1229505 105661 129365 100 309494647
MemObject 272 18678 4962 7313 171.93 0 18678 4962 7313 100 422654029
netdbEntry 104 963 98 102 269.13 0 963 98 102 100 2944683
net_db_name 32 27208 851 1066 112.36 0 27208 851 1066 100 4706026
request_t 1360 313885 416879 416964 0.01 7 313885 416879 416964 100 391100141
ClientInfo 352 3120 1073 1091 67.45 0 3120 1073 1091 100 24588
storeSwapLogData 72 0 0 1 269.28 0 0 0 1 -1 116084250
buf_t 80 0 0 1 2.48 0 0 0 1 -1 1357228015
AUFS IO State data 48 321 16 40 260.39 0 321 16 40 100 181397983
AUFS Queued read data 64 1 1 6 19.63 0 1 1 6 100 124502703
AUFS Queued write data 56 0 0 146 115.10 0 0 0 146 -1 280635361
aio_ctrl 104 1 1 41 44.38 0 1 1 41 100 1160745460
wordlist 16 8 1 1 269.28 0 8 1 1 100 11
cbdata acl_address (1001) 48 1 1 1 269.28 0 1 1 1 100 1
intlist 16 1 1 1 269.28 0 1 1 1 100 1
cbdata acl_access (1002) 56 17 1 1 269.28 0 17 1 1 100 17
cbdata http_port_list (1003) 136 3 1 1 269.28 0 3 1 1 100 3
LRU policy node 24 1231857 28872 35634 197.72 0 1231857 28872 35634 100 140936958
cbdata RemovalPolicy (1004) 104 2 1 1 269.28 0 2 1 1 100 2
cbdata body_size (1005) 64 3 1 1 269.28 0 3 1 1 100 3
ipcache_entry 128 1124 141 199 198.59 0 1124 141 199 100 13854896
fqdncache_entry 160 3 1 1 269.28 0 3 1 1 100 6
cbdata idns_query (1006) 8680 0 0 9715 198.59 0 0 0 9715 -1 13854893
HttpHeaderEntry 40 2512704 98153 100169 1.19 2 2512704 98153 100169 100 14693352533
HttpHdrRangeSpec 16 9 1 5 49.05 0 9 1 5 100 15867412
HttpHdrRange 16 9 1 3 48.95 0 9 1 3 100 15387325
HttpHdrContRange 24 96 3 9 261.15 0 96 3 9 100 27848871
HttpHdrCc 40 49500 1934 2196 4.07 0 49500 1934 2196 100 843478615
MD5 digest 16 1229505 19212 23521 267.76 0 1229505 19212 23521 100 402241344
aio_thread 40 16 1 1 269.28 0 16 1 1 100 16
aio_request 96 1 1 38 44.38 0 1 1 38 100 1160745460
cbdata RebuildState (1010) 112 0 0 1 269.28 0 0 0 1 -1 1
pconn_data 32 3001 94 173 43.50 0 3001 94 173 100 115690712
pconn_fds 32 2998 94 173 43.50 0 2998 94 173 100 115690712
cbdata ConnStateData (1012) 336 12775 4192 7504 187.85 0 12775 4192 7504 100 166136683
cbdata RemovalPurgeWalker (1013) 72 0 0 1 269.28 0 0 0 1 -1 1842519
cbdata clientHttpRequest (1014) 1136 4536862 5033082 5033132 0.00 80 4536862 5033082 5033132 100 391098464
cbdata aclCheck_t (1015) 352 1 1 1 269.16 0 1 1 1 100 1566905793
cbdata ErrorState (1016) 160 157023 24535 24536 0.00 0 157023 24535 24536 100 8983102
cbdata store_client (1017) 152 1791 266 649 100.96 0 1791 266 649 100 473871420
cbdata storeIOState (1018) 136 321 43 112 260.39 0 321 43 112 100 181397983
cbdata FwdState (1019) 112 1580 173 460 100.96 0 1580 173 460 100 263780534
cbdata ps_state (1020) 200 0 0 1 269.16 0 0 0 1 -1 264001127
cbdata ConnectStateData (1021) 96 215 21 296 100.96 0 215 21 296 100 144145152
cbdata generic_cbdata (1022) 32 208 7 42 198.59 0 208 7 42 100 216378428
cbdata HttpStateData (1023) 136 157355 20899 20907 0.04 0 157355 20899 20907 100 263574171
cbdata LocateVaryState (1024) 144 0 0 2 2.27 0 0 0 2 -1 14017539
VaryData 32 412 13 25 22.34 0 412 13 25 100 14017539
cbdata AddVaryState (1025) 160 0 0 2 52.71 0 0 0 2 -1 5341040
cbdata SslStateData (1026) 120 0 0 3 188.58 0 0 0 3 -1 220593
cbdata Logfile (1027) 4192 0 0 5 268.62 0 0 0 5 -1 1
cbdata clientAsyncRefreshRequest (1028) 88 0 0 1 266.52 0 0 0 1 -1 22
cbdata RemovalPolicyWalker (1029) 56 0 0 1 250.86 0 0 0 1 -1 11
Total 14511812 6280906 6290533 0.05 100 14511812 6280906 6290533 100 51985853497
Cumulative allocated volume: 16.87 TB
Current overhead: 13748 bytes (0.000%)
Idle pool limit: 5.00 MB
memPoolAlloc calls: 446245945
memPoolFree calls: 431734132
String Pool Impact
(%strings) (%volume)
Short Strings 81 36
Medium Strings 13 21
Long Strings 5 29
Other Strings 1 14
Large buffers: 0 (0 KB)
Original comment by renato.o...@gmail.com
on 20 Aug 2010 at 7:34
Do you think release 14756 addresses the issue?
Original comment by renato.o...@gmail.com
on 20 Aug 2010 at 7:37
Hm! You could try it. I doubt it though.
Original comment by adrian.c...@gmail.com
on 21 Aug 2010 at 1:37
I'll run it for a few days and let you know if the memory usage keeps growing
without limits.
Original comment by renato.o...@gmail.com
on 21 Aug 2010 at 3:25
Same memory usage with the latest release:
proxy:~ # squidclient mgr:mem
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD-r14756
Date: Tue, 24 Aug 2010 00:17:52 GMT
Content-Type: text/plain
Expires: Tue, 24 Aug 2010 00:17:52 GMT
Connection: close
Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB) high (KB) (%num) (number)
2K Buffer (no-zero) 2048 1 2 4928 1.35 0 1 2 4928 100 332442356
4K Buffer (no-zero) 4096 2231 8924 98908 0.83 0 2231 8924 98908 100 51582727
8K Buffer (no-zero) 8192 39 312 6248 0.77 0 39 312 6248 100 23167970
16K Buffer (no-zero) 16384 0 0 240 33.30 0 0 0 240 -1 9912
32K Buffer (no-zero) 32768 0 0 64 75.86 0 0 0 64 -1 23
64K Buffer (no-zero) 65536 0 0 64 62.62 0 0 0 64 -1 2
Short Strings (no-zero) 36 783576 27548 31150 1.35 1 783576 27548 31150 100 4169421974
Medium Strings (no-zero) 128 108355 13545 15279 1.35 1 108355 13545 15279 100 681282320
Long Strings (no-zero) 512 39791 19896 22207 0.84 1 39791 19896 22207 100 135124259
event 48 9 1 44 1.35 0 9 1 44 100 6156643
close_handler 24 5365 126 900 0.84 0 5365 126 900 100 211224143
acl 64 11 1 1 77.22 0 11 1 1 100 11
acl_ip_data 24 14 1 1 77.22 0 14 1 1 100 14
acl_list 24 21 1 1 77.22 0 21 1 1 100 21
dwrite_q 48 0 0 1 77.22 0 0 0 1 -1 38056236
FwdServer 24 1632 39 177 1.35 0 1632 39 177 100 65779756
HttpReply 168 23565 3867 5673 1.35 0 23565 3867 5673 100 198694295
mem_node (no-zero) 4112 64433 258739 275613 1.19 13 64433 258739 275613 100 458617517
StoreEntry 88 1357472 116658 129847 5.81 6 1357472 116658 129847 100 78654572
MemObject 272 22563 5994 8502 1.35 0 22563 5994 8502 100 102372890
netdbEntry 104 951 97 102 77.19 0 951 97 102 100 694653
net_db_name 32 26148 818 841 44.86 0 26148 818 841 100 1121893
request_t 1360 75816 100694 106448 0.84 5 75816 100694 106448 100 95012705
ClientInfo 352 3151 1084 1090 1.01 0 3151 1084 1090 100 11066
storeSwapLogData 72 0 0 1 77.22 0 0 0 1 -1 38056236
buf_t 80 0 0 1 25.69 0 0 0 1 -1 245118380
AUFS IO State data 48 424 20 36 24.03 0 424 20 36 100 44921640
AUFS Queued read data 64 0 0 6 9.00 0 0 0 6 -1 28561271
AUFS Queued write data 56 0 0 128 26.04 0 0 0 128 -1 81410752
aio_ctrl 104 0 0 49 1.22 0 0 0 49 -1 290157214
wordlist 16 8 1 1 77.22 0 8 1 1 100 11
cbdata acl_address (1001) 48 1 1 1 77.22 0 1 1 1 100 1
intlist 16 1 1 1 77.22 0 1 1 1 100 1
cbdata acl_access (1002) 56 17 1 1 77.22 0 17 1 1 100 17
cbdata http_port_list (1003) 136 3 1 1 77.22 0 3 1 1 100 3
LRU policy node 24 1372926 32178 35777 5.86 2 1372926 32178 35777 100 40369990
cbdata RemovalPolicy (1004) 104 2 1 1 77.22 0 2 1 1 100 2
cbdata body_size (1005) 64 3 1 1 77.22 0 3 1 1 100 3
ipcache_entry 128 920 115 1240 1.35 0 920 115 1240 100 4044254
fqdncache_entry 160 3 1 1 77.22 0 3 1 1 100 6
cbdata idns_query (1006) 8680 0 0 80087 1.35 0 0 0 80087 -1 4044251
HttpHeaderEntry 40 799734 31240 35372 1.35 2 799734 31240 35372 100 3587427031
HttpHdrRangeSpec 16 14 1 4 6.94 0 14 1 4 100 3216791
HttpHdrRange 16 14 1 2 0.79 0 14 1 2 100 3127758
HttpHdrContRange 24 135 4 8 0.78 0 135 4 8 100 5787568
HttpHdrCc 40 28394 1110 1314 1.35 0 28394 1110 1314 100 208000280
MD5 digest 16 1357472 21211 23609 5.81 1 1357472 21211 23609 100 104509435
aio_thread 40 16 1 1 77.22 0 16 1 1 100 16
aio_request 96 0 0 45 1.22 0 0 0 45 -1 290157214
cbdata RebuildState (1010) 112 0 0 1 77.22 0 0 0 1 -1 1
pconn_data 32 0 0 173 24.01 0 0 0 173 -1 28697852
pconn_fds 32 0 0 172 24.01 0 0 0 172 -1 28697852
cbdata ConnStateData (1012) 336 2101 690 7990 0.84 0 2101 690 7990 100 41452858
cbdata clientHttpRequest (1013) 1144 1183414 1322096 1323136 0.07 67 1183414 1322096 1323136 100 95012242
cbdata aclCheck_t (1014) 352 1 1 1 77.22 0 1 1 1 100 382344428
cbdata store_client (1015) 152 1975 294 1294 1.35 0 1975 294 1294 100 114552408
cbdata FwdState (1016) 112 1632 179 824 1.35 0 1632 179 824 100 65778585
cbdata ps_state (1017) 200 0 0 1 77.22 0 0 0 1 -1 65779586
cbdata ConnectStateData (1018) 96 0 0 912 1.35 0 0 0 912 -1 37485292
cbdata generic_cbdata (1019) 32 312 10 304 1.35 0 312 10 304 100 38421796
cbdata ErrorState (1020) 160 40306 6298 6622 1.35 0 40306 6298 6622 100 2499117
cbdata HttpStateData (1021) 136 35815 4757 5008 0.83 0 35815 4757 5008 100 65662447
cbdata storeIOState (1022) 136 424 57 101 24.03 0 424 57 101 100 44921640
cbdata AddVaryState (1023) 160 0 0 5 1.73 0 0 0 5 -1 1289754
cbdata LocateVaryState (1024) 144 0 0 4 8.38 0 0 0 4 -1 3281690
VaryData 32 120 4 6 1.29 0 120 4 6 100 3281690
cbdata RemovalPurgeWalker (1025) 72 0 0 1 77.18 0 0 0 1 -1 523706
cbdata SslStateData (1026) 120 0 0 2 30.15 0 0 0 2 -1 1001
cbdata Logfile (1027) 4192 0 0 5 76.55 0 0 0 5 -1 1
cbdata clientAsyncRefreshRequest (1028) 88 0 0 1 75.42 0 0 0 1 -1 9
cbdata RemovalPolicyWalker (1029) 56 0 0 1 64.30 0 0 0 1 -1 3
Total 7341331 1978602 2157721 0.84 100 7341331 1978602 2157721 100 12648022042
Cumulative allocated volume: 4.08 TB
Current overhead: 13748 bytes (0.001%)
Idle pool limit: 5.00 MB
memPoolAlloc calls: -236879846
memPoolFree calls: -244221178
String Pool Impact
(%strings) (%volume)
Short Strings 83 37
Medium Strings 11 18
Long Strings 4 27
Other Strings 1 17
Large buffers: 0 (0 KB)
Original comment by renato.o...@gmail.com
on 24 Aug 2010 at 12:58
Can you please try this:
* start up lusca
* pass traffic through it; see the usage blow out
* then without shutting lusca down, stop passing new connections to it and let
it time out the current connections
* -then- compare memory usage
Thanks,
Adrian
Original comment by adrian.c...@gmail.com
on 24 Aug 2010 at 1:20
I'm going to make the hashed cbdata and cbdata debugging code work again so we
can get a list of what's locked/unlocked each of those objects. I have a
feeling something's just being done subtly wrong in your particular use case. I
certainly don't see memory leaks on my currently live proxies (or, honestly, I
haven't yet noticed them.)
Original comment by adrian.c...@gmail.com
on 24 Aug 2010 at 1:59
Try this patch against LUSCA_HEAD, and compile with:
env CFLAGS="-g -O -DHASHED_CBDATA -DCBDATA_DEBUG" ./configure ...
Then check "cbdata" cachemgr page.
Please note this will make your proxy run quite a bit slower, so be careful.
Original comment by adrian.c...@gmail.com
on 24 Aug 2010 at 2:38
Attachments:
I've patched a system which exhibits the problem here, and this is an example
that I've seen from one of the leaking proxies:
2010/08/24 18:39:32| cbdataAlloc: 0x80a449900 client_side_request_parse.c:377
parseHttpRequest() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: acl.c:2508
aclNBCheck() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: acl.c:2508
aclNBCheck() 2
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: comm.c:1485
comm_write() 3
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 acl.c:2355
aclCheckCallback() 2
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 acl.c:2355
aclCheckCallback() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16
clientEatRequestBodyHandler() 2
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 comm.c:144
commWriteStateCallbackAndFree() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 client_side_body.c:90
clientProcessBody() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16
clientEatRequestBodyHandler() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 client_side_body.c:90
clientProcessBody() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16
clientEatRequestBodyHandler() 1
2010/08/24 18:39:32| cbdataFree: 0x80a449900
2010/08/24 18:39:32| cbdataFree: 0x80a449900 has 1 locks, not freeing
1! aiee!
Original comment by adrian.c...@gmail.com
on 24 Aug 2010 at 3:00
.. so the point here is, something enters the clientEatRequestBody() path via
client_side.c and it results in that locked up situation.
I've been staring at the code and it seems like clientEatRequestBody() calls
clientEatRequestBodyHandler(), which is locking the clientHttpRequest (ie,
'http'), creating a blank buffer, setting the body callback to itself, and
calling clientProcessBody(). clientProcessBody() then eats some data, calls
cbdataUnlock() above, then calls the callback - which is
clientEatRequestBodyHandler() again.
The thing is, that last clientEatRequestBodyHandler() should've found the http
pointer fine; but I wonder if conn->in.offset is 0 at this point (ie, there's
no further data in the incoming socket buffer to read) and it's thus not
getting a chance to call the callback to indicate as much.
This code doesn't look like it's changed in a while. I wonder if I can isolate
the specific case where this is happening - but I also wonder whether it's been
a long-standing problem in Squid-2.x, and something new/unique about some site
is triggering the leak more often.
A close inspection of this code makes me more unhappy. Line 56 in
client_side_body.c (inside clientProcessBody()) makes me think there's another
leak there - if the data isn't valid, comm_close() (and the comm read handlers)
aren't going to undo this - eg, it won't be undone by the client http free
path. So in that case, a clientHttpRequest ref would also leak.
Anyway. More to come tomorrow.
Original comment by adrian.c...@gmail.com
on 24 Aug 2010 at 3:12
Adrian,
I'm running 3 Luscas right now; 2 of them leak, and both are 64-bit.
I'll try it on the busiest cache (about 850 req/s) to see if we can track
the request that leaks.
Original comment by renato.o...@gmail.com
on 24 Aug 2010 at 3:46
Try this. Then check cache.log. Oh, and see if Lusca crashes, leaks memory, or
behaves correctly. :)
Index: client_side_body.c
===================================================================
--- client_side_body.c (revision 14762)
+++ client_side_body.c (working copy)
@@ -50,6 +50,14 @@
     request_t *request = conn->body.request;
     /* Note: request is null while eating "aborted" transfers */
     debug(33, 2) ("clientProcessBody: start fd=%d body_size=%lu in.offset=%ld cb=%p req=%p\n", conn->fd, (unsigned long int) conn->body.size_left, (long int) conn->in.offset, callback, request);
+    if (conn->in.offset == 0) {
+	/* This typically will only occur when some recursive call through the body eating path has occured -adrian */
+	/* XXX so no need atm to call the callback handler; the original code didn't! -adrian */
+	debug(33, 1) ("clientProcessBody: cbdata %p: would've leaked; conn->in.offset=0 here\n", cbdata);
+	cbdataUnlock(conn->body.cbdata);
+	conn->body.cbdata = conn->body.callback = NULL;
+	return;
+    }
     if (conn->in.offset) {
 	int valid = cbdataValid(conn->body.cbdata);
 	if (!valid) {
Original comment by adrian.c...@gmail.com
on 25 Aug 2010 at 12:29
It looks like it's caused by an aborted request body (ie, POST) which isn't
correctly being "eaten".
I wonder if I can craft a test case that reproduces it - I'd like to see
whether Squid-2.x / Squid-3 has the same issue.
Original comment by adrian.c...@gmail.com
on 25 Aug 2010 at 1:13
Please try the following patch against -HEAD.
Original comment by adrian.c...@gmail.com
on 25 Aug 2010 at 7:55
Attachments:
Should I try it with the debug flags?
Original comment by renato.o...@gmail.com
on 25 Aug 2010 at 3:55
No need. Just apply post-diff.1.diff to the latest lusca-head and run. You
don't need the cbdata debugging.
Original comment by adrian.c...@gmail.com
on 26 Aug 2010 at 3:26
Ok..
Running for about 12h and no memory increase so far..
I'll keep you updated.
Original comment by renato.o...@gmail.com
on 26 Aug 2010 at 7:35
Adrian,
The caches are fine! Thank you very much..
I get the following warnings on the log file:
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 5716: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 9309: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 13580: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 26855: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 6963: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 4904: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 8925: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 25527: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 3723: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 20651: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 33182: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 5155: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 16487: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 8485: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 2438: no more data left in socket; but request header says there should be; aborting for now
Regards,
Renato
Original comment by renato.o...@gmail.com
on 29 Aug 2010 at 11:18
Ok. So it does look like the bug is in eating request bodies from aborted
requests.
I need to drill down and make sure I'm handling all the use cases fine so FDs
are actually properly handled (both in the keepalive case and the
close-connection case.)
Would you mind trialling out another patch or two, whilst I slowly figure out
what is actually going down?
Thanks,
Adrian
Original comment by adrian.c...@gmail.com
on 30 Aug 2010 at 12:35
Adrian,
Just send me the patch!
Original comment by renato.o...@gmail.com
on 30 Aug 2010 at 12:57
Committed in r14805. Please test!
Original comment by adrian.c...@gmail.com
on 19 Oct 2010 at 1:55
renato,
I downloaded the latest snapshot, r14805, and compiled it. It STILL leaks
memory, and yes, I see the same "clientEatRequestBodyHandler: FD 20651: no more
data left in socket; but request header says there should be; aborting for now"
messages in cache.log. If you were able to fix it, could you please advise me
how?
Thanks,
Maher
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 11:47
After the fix, I see lots of the "clientEatRequestBodyHandler..." messages, but
the memory doesn't increase any more.
My biggest cache is running since:
Start Time: Wed, 25 Aug 2010 15:14:05 GMT
Current Time: Thu, 21 Oct 2010 11:14:26 GMT
I didn't try this new release, only HEAD (from late August) plus the patch on
this page.
Can you provide your config file / cachemgr mem usage stats?
Original comment by renato.o...@gmail.com
on 21 Oct 2010 at 12:12
[deleted comment]
try to run:
squidclient mgr:mem
on the squid machine
Original comment by renato.o...@gmail.com
on 21 Oct 2010 at 2:25
It's still leaking clientHttpRequest structs?
Original comment by adrian.c...@gmail.com
on 21 Oct 2010 at 2:30
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD
Date: Thu, 21 Oct 2010 14:35:41 GMT
Content-Type: text/plain
Expires: Thu, 21 Oct 2010 14:35:41 GMT
X-Cache: MISS from Lusca-Cache
X-Cache-Lookup: MISS from Lusca-Cache:3128
Connection: close
Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB) high (KB) (%num) (number)
2K Buffer (no-zero) 2048 582 1164 1684 0.10 0 582 1164 1684 100 7887767
4K Buffer (no-zero) 4096 3377 13508 15016 0.09 0 3377 13508 15016 100 895207
8K Buffer (no-zero) 8192 331 2648 2800 0.10 0 331 2648 2800 100 360086
16K Buffer (no-zero) 16384 0 0 64 1.99 0 0 0 64 -1 51
Short Strings (no-zero) 36 3702012 130149 130153 0.00 2 3702012 130149 130153 100 73255550
Medium Strings (no-zero) 128 79846 9981 9989 0.00 0 79846 9981 9989 100 6284515
Long Strings (no-zero) 512 20423 10212 10251 0.00 0 20423 10212 10251 100 2975204
event 48 11 1 2 0.10 0 11 1 2 100 145929
close_handler 24 5232 123 133 0.09 0 5232 123 133 100 3113882
acl 64 15 1 1 2.11 0 15 1 1 100 15
acl_ip_data 24 8 1 1 2.11 0 8 1 1 100 8
acl_list 24 31 1 1 2.11 0 31 1 1 100 31
relist 80 5 1 1 2.11 0 5 1 1 100 5
CacheDigest 32 1 1 1 2.11 0 1 1 1 100 1
dwrite_q 48 0 0 1 2.11 0 0 0 1 -1 6379542
FwdServer 24 768 18 22 0.10 0 768 18 22 100 1002487
HttpReply 168 393853 64617 64618 0.00 1 393853 64617 64618 100 3584638
mem_node (no-zero) 4112 1153338 4631373 4631373 0.00 81 1153338 4631373 4631373 100 6583568
StoreEntry 88 6327395 543761 543762 0.00 9 6327395 543761 543762 100 7141185
MemObject 272 393605 104552 104554 0.00 2 393605 104552 104554 100 1807330
request_t 1384 1479 1999 2674 0.10 0 1479 1999 2674 100 1869516
helper_request 64 0 0 2 0.10 0 0 0 2 -1 228897
ClientInfo 352 18 7 7 0.00 0 18 7 7 100 18
storeSwapLogData 72 0 0 1 2.11 0 0 0 1 -1 6379542
buf_t 80 0 0 1 0.69 0 0 0 1 -1 6152878
AUFS IO State data 48 112 6 15 2.06 0 112 6 15 100 1052642
AUFS Queued read data 64 0 0 7 2.06 0 0 0 7 -1 651062
AUFS Queued write data 56 0 0 247 2.07 0 0 0 247 -1 1586093
aio_ctrl 104 0 0 43 2.07 0 0 0 43 -1 7237523
wordlist 16 11 1 1 2.11 0 11 1 1 100 14
cbdata http_port_list (1001) 136 1 1 1 2.11 0 1 1 1 100 1
cbdata acl_access (1002) 56 28 2 2 2.11 0 28 2 2 100 28
cbdata RemovalPolicy (1003) 104 4 1 1 2.11 0 4 1 1 100 4
intlist 16 1 1 1 2.11 0 1 1 1 100 1
cbdata body_size (1004) 64 3 1 1 2.11 0 3 1 1 100 3
ipcache_entry 128 15858 1983 1983 0.00 0 15858 1983 1983 100 17168
fqdncache_entry 160 3 1 1 2.11 0 3 1 1 100 3
cbdata idns_query (1005) 8680 0 0 501 0.84 0 0 0 501 -1 17165
cbdata helper (1006) 136 2 1 1 2.11 0 2 1 1 100 2
cbdata helper_server (1007) 152 300 45 45 2.11 0 300 45 45 100 300
cbdata redirectStateData (1008) 72 0 0 2 0.10 0 0 0 2 -1 107792
cbdata storeurlStateData (1009) 72 0 0 1 1.82 0 0 0 1 -1 121105
HttpHeaderEntry 40 3075421 120134 120139 0.00 2 3075421 120134 120139 100 60327902
HttpHdrRangeSpec 16 4 1 1 0.32 0 4 1 1 100 107994
HttpHdrRange 16 4 1 1 0.32 0 4 1 1 100 107479
HttpHdrContRange 24 89 3 3 0.21 0 89 3 3 100 206139
HttpHdrCc 40 228858 8940 8940 0.00 0 228858 8940 8940 100 3311669
cbdata Logfile (1012) 4192 1 5 5 2.11 0 1 5 5 100 1
MD5 digest 16 6327395 98866 98866 0.00 2 6327395 98866 98866 100 7614332
aio_thread 40 32 2 2 2.11 0 32 2 2 100 32
aio_request 96 0 0 40 2.07 0 0 0 40 -1 7237523
cbdata RebuildState (1014) 112 0 0 1 2.11 0 0 0 1 -1 3
pconn_data 32 359 12 15 0.59 0 359 12 15 100 296768
pconn_fds 32 354 12 15 0.59 0 354 12 15 100 296768
cbdata generic_cbdata (1016) 32 72 3 6 0.15 0 72 3 6 100 433630
cbdata RemovalPurgeWalker (1017) 72 0 0 1 2.10 0 0 0 1 -1 22677
cbdata ConnStateData (1018) 336 3396 1115 1238 0.09 0 3396 1115 1238 100 730321
cbdata clientHttpRequest (1019) 1152 2566 2887 3333 0.10 0 2566 2887 3333 100 1787808
cbdata aclCheck_t (1020) 352 3 2 3 2.09 0 3 2 3 100 12319722
cbdata store_client (1021) 152 896 133 208 0.10 0 896 133 208 100 2224074
cbdata storeIOState (1022) 136 112 15 43 2.06 0 112 15 43 100 1052642
cbdata FwdState (1023) 112 768 84 102 0.10 0 768 84 102 100 1002487
cbdata ps_state (1024) 200 0 0 1 2.10 0 0 0 1 -1 1002487
cbdata ConnectStateData (1025) 96 71 7 16 0.10 0 71 7 16 100 356608
cbdata HttpStateData (1026) 136 1045 139 151 0.10 0 1045 139 151 100 1002339
cbdata ErrorState (1027) 160 282 45 46 0.09 0 282 45 46 100 35267
cbdata AddVaryState (1028) 160 0 0 3 2.08 0 0 0 3 -1 25183
cbdata LocateVaryState (1029) 144 0 0 3 2.10 0 0 0 3 -1 93919
VaryData 32 2 1 1 2.10 0 2 1 1 100 93919
Total 21740383 5748545 5748766 0.00 100 21740383 5748545 5748766 100 248528452
Cumulative allocated volume: 72.63 GB
Current overhead: 14153 bytes (0.000%)
Idle pool limit: 0.00 MB
memPoolAlloc calls: 248528452
memPoolFree calls: 226788068
String Pool Impact
(%strings) (%volume)
Short Strings 97 86
Medium Strings 2 7
Long Strings 1 7
Other Strings 0 1
Large buffers: 0 (0 KB)
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 2:36
Adrian,
in 14809 apparently yes???
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 2:37
mem_node is consuming 81% of your memory.
it's not the same problem I had.
Original comment by renato.o...@gmail.com
on 21 Oct 2010 at 2:47
renato, this is running 14371, not 14809... I think that's why it's not the
same???
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 2:49
I am running 14371, not 14809, in that dump; could that be why it's not the
same "problem"??
Any idea why mem_node would swallow all that RAM???
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 2:51
root@Cache-Lusca1:~# squidclient -h 172.16.99.101 mgr:mem
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD-r14809
Date: Thu, 21 Oct 2010 16:08:58 GMT
Content-Type: text/plain
Expires: Thu, 21 Oct 2010 16:08:58 GMT
X-Cache: MISS from Lusca-Cache
X-Cache-Lookup: MISS from Lusca-Cache:3128
Connection: close
Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB) high (KB) (%num) (number)
2K Buffer (no-zero) 2048 2 4 18 0.01 0 2 4 18 100 241042
4K Buffer (no-zero) 4096 1787 7148 7768 0.01 3 1787 7148 7768 100 23767
8K Buffer (no-zero) 8192 316 2528 2544 0.00 1 316 2528 2544 100 5611
16K Buffer (no-zero) 16384 0 0 16 0.02 0 0 0 16 -1 2
Short Strings (no-zero) 36 159045 5592 5593 0.00 2 159045 5592 5593 100 2085443
Medium Strings (no-zero) 128 3656 457 464 0.00 0 3656 457 464 100 173017
Long Strings (no-zero) 512 1078 539 553 0.00 0 1078 539 553 100 113680
event 48 12 1 1 0.01 0 12 1 1 100 5775
close_handler 24 2807 66 71 0.00 0 2807 66 71 100 94795
acl 64 15 1 1 0.16 0 15 1 1 100 15
acl_ip_data 24 8 1 1 0.16 0 8 1 1 100 8
acl_list 24 31 1 1 0.16 0 31 1 1 100 31
relist 80 5 1 1 0.16 0 5 1 1 100 5
CacheDigest 32 1 1 1 0.16 0 1 1 1 100 1
dwrite_q 48 0 0 1 0.16 0 0 0 1 -1 47015
FwdServer 24 353 9 10 0.01 0 353 9 10 100 30688
HttpReply 168 15578 2556 2556 0.00 1 15578 2556 2556 100 95806
mem_node (no-zero) 4112 49165 197429 197429 0.00 85 49165 197429 197429 100 124099
StoreEntry 88 43922 3775 3775 0.00 2 43922 3775 3775 100 62203
MemObject 272 15425 4098 4098 0.00 2 15425 4098 4098 100 41799
request_t 1384 447 605 661 0.00 0 447 605 661 100 55139
helper_request 64 0 0 1 0.05 0 0 0 1 -1 18504
ClientInfo 352 3 2 2 0.02 0 3 2 2 100 3
storeSwapLogData 72 0 0 1 0.16 0 0 0 1 -1 47015
buf_t 80 1 1 1 0.02 0 1 1 1 100 161220
AUFS IO State data 48 43 3 3 0.03 0 43 3 3 100 26606
AUFS Queued read data 64 0 0 1 0.03 0 0 0 1 -1 10174
AUFS Queued write data 56 0 0 7 0.05 0 0 0 7 -1 43926
aio_ctrl 104 0 0 2 0.01 0 0 0 2 -1 159654
wordlist 16 11 1 1 0.16 0 11 1 1 100 14
cbdata http_port_list (1001) 136 1 1 1 0.16 0 1 1 1 100 1
cbdata acl_access (1002) 56 28 2 2 0.16 0 28 2 2 100 28
cbdata RemovalPolicy (1003) 104 4 1 1 0.16 0 4 1 1 100 4
intlist 16 1 1 1 0.16 0 1 1 1 100 1
cbdata body_size (1004) 64 3 1 1 0.16 0 3 1 1 100 3
ipcache_entry 128 1579 198 198 0.00 0 1579 198 198 100 1687
fqdncache_entry 160 3 1 1 0.16 0 3 1 1 100 3
cbdata idns_query (1005) 8680 0 0 111 0.15 0 0 0 111 -1 1684
cbdata helper (1006) 136 2 1 1 0.16 0 2 1 1 100 2
cbdata helper_server (1007) 152 300 45 45 0.16 0 300 45 45 100 300
cbdata redirectStateData (1008) 72 0 0 1 0.15 0 0 0 1 -1 1250
cbdata storeurlStateData (1009) 72 0 0 1 0.05 0 0 0 1 -1 17254
HttpHeaderEntry 40 133600 5219 5220 0.00 2 133600 5219 5220 100 1733196
HttpHdrRangeSpec 16 7 1 1 0.00 0 7 1 1 100 1195
HttpHdrRange 16 7 1 1 0.00 0 7 1 1 100 1188
HttpHdrContRange 24 59 2 2 0.00 0 59 2 2 100 2287
HttpHdrCc 40 10386 406 406 0.00 0 10386 406 406 100 100852
cbdata Logfile (1012) 4192 1 5 5 0.16 0 1 5 5 100 1
MD5 digest 16 43922 687 687 0.00 0 43922 687 687 100 82283
aio_thread 40 32 2 2 0.16 0 32 2 2 100 32
aio_request 96 0 0 1 0.09 0 0 0 1 -1 159654
cbdata RebuildState (1014) 112 0 0 1 0.16 0 0 0 1 -1 3
pconn_data 32 294 10 10 0.00 0 294 10 10 100 8699
pconn_fds 32 289 10 10 0.00 0 289 10 10 100 8699
cbdata generic_cbdata (1016) 32 33 2 2 0.03 0 33 2 2 100 18109
cbdata ConnStateData (1017) 336 1801 591 641 0.01 0 1801 591 641 100 19691
cbdata RemovalPurgeWalker (1018) 72 0 0 1 0.16 0 0 0 1 -1 1767
cbdata clientHttpRequest (1019) 1160 437 496 544 0.00 0 437 496 544 100 55150
cbdata aclCheck_t (1020) 360 3 2 2 0.15 0 3 2 2 100 379543
cbdata store_client (1021) 152 454 68 75 0.00 0 454 68 75 100 64196
cbdata FwdState (1022) 112 353 39 43 0.01 0 353 39 43 100 30688
cbdata ps_state (1023) 200 0 0 1 0.16 0 0 0 1 -1 30688
cbdata ConnectStateData (1024) 96 75 8 11 0.02 0 75 8 11 100 12947
cbdata HttpStateData (1025) 136 282 38 43 0.00 0 282 38 43 100 29718
cbdata storeIOState (1026) 136 43 6 9 0.03 0 43 6 9 100 26606
cbdata AddVaryState (1027) 160 0 0 1 0.14 0 0 0 1 -1 1283
cbdata LocateVaryState (1028) 144 0 0 1 0.01 0 0 0 1 -1 2247
VaryData 32 4 1 1 0.01 0 4 1 1 100 2247
cbdata ErrorState (1029) 160 18 3 7 0.01 0 18 3 7 100 1855
Total 487732 232640 232834 0.00 100 487732 232640 232834 100 6464099
Cumulative allocated volume: 1.80 GB
Current overhead: 14153 bytes (0.006%)
Idle pool limit: 0.00 MB
memPoolAlloc calls: 6464099
memPoolFree calls: 5976366
String Pool Impact
(%strings) (%volume)
Short Strings 97 83
Medium Strings 2 7
Long Strings 1 8
Other Strings 0 2
Large buffers: 0 (0 KB)
------------
This is the dump on r14809. I am now once again getting the "2010/10/21
12:09:56| clientEatRequestBodyHandler: FD 1307: no more data left in socket;
but request header says there should be; aborting for now" message in cache.log.
Thanks,
Maher
Original comment by maher.ka...@gmail.com
on 21 Oct 2010 at 4:11
I have the same issue on:
uname -a
Linux localhost.localdomain 2.6.32.26-175.fc12.x86_64 #1 SMP Wed Dec 1 21:39:34
UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB) high (KB) (%num) (number)
2K Buffer (no-zero) 2048 2 4 586 210.06 0 2 4 586 100 107383617
4K Buffer (no-zero) 4096 411 1644 6936 165.40 0 411 1644 6936 100 28321410
8K Buffer (no-zero) 8192 17 136 1008 84.81 0 17 136 1008 100 9332453
16K Buffer (no-zero) 16384 0 0 96 11.12 0 0 0 96 -1 443
32K Buffer (no-zero) 32768 0 0 64 142.72 0 0 0 64 -1 151
64K Buffer (no-zero) 65536 0 0 192 142.72 0 0 0 192 -1 144
Short Strings (no-zero) 36 529616 18620 18657 0.36 1 529616 18620 18657 100 1203370620
Medium Strings (no-zero) 128 36170 4522 4591 1.27 0 36170 4522 4591 100 199360375
Long Strings (no-zero) 512 20150 10075 10098 0.01 1 20150 10075 10098 100 29131637
event 48 8 1 4 63.86 0 8 1 4 100 2429315
close_handler 24 805 19 110 165.40 0 805 19 110 100 87442652
acl 64 59 4 4 218.36 0 59 4 4 100 118
acl_ip_data 24 23 1 1 218.36 0 23 1 1 100 46
acl_list 24 189 5 5 218.36 0 189 5 5 100 378
relist 80 4832 378 378 218.36 0 4832 378 378 100 9664
dwrite_q 48 0 0 1 218.36 0 0 0 1 -1 23835966
FwdServer 24 194 5 35 165.40 0 194 5 35 100 20193563
HttpReply 168 27627 4533 4547 0.01 0 27627 4533 4547 100 54997311
mem_node (no-zero) 4112 31465 126352 128492 0.71 10 31465 126352 128492 100 149972212
StoreEntry 88 10082748 866487 988390 218.34 68 10082748 866487 988390 100 33709630
MemObject 272 27503 7306 7322 0.01 1 27503 7306 7322 100 27770986
request_t 1384 31742 42902 42961 0.01 3 31742 42902 42961 100 26306369
helper_request 64 0 0 5 73.29 0 0 0 5 -1 11626510
storeSwapLogData 72 0 0 1 218.36 0 0 0 1 -1 23835966
buf_t 80 0 0 2 1.26 0 0 0 2 -1 120131943
AUFS IO State data 48 33 2 7 84.82 0 33 2 7 100 11089475
AUFS Queued read data 64 0 0 2 12.67 0 0 0 2 -1 5617799
AUFS Queued write data 56 0 0 7 69.39 0 0 0 7 -1 33736706
aio_ctrl 104 1 1 18 218.34 0 1 1 18 100 115263020
wordlist 16 20 1 1 218.36 0 20 1 1 100 48
cbdata http_port_list (1001) 136 2 1 1 116.21 0 2 1 1 100 4
cbdata RemovalPolicy (1002) 104 4 1 1 218.36 0 4 1 1 100 4
intlist 16 2 1 1 218.36 0 2 1 1 100 4
cbdata acl_access (1003) 56 181 10 10 218.36 0 181 10 10 100 362
cbdata body_size (1004) 64 3 1 1 218.36 0 3 1 1 100 6
ipcache_entry 128 7372 922 952 163.48 0 7372 922 952 100 525024
fqdncache_entry 160 1776 278 278 0.11 0 1776 278 278 100 11242
cbdata idns_query (1005) 8680 0 0 424 12.67 0 0 0 424 -1 542765
cbdata helper (1006) 136 1 1 1 218.36 0 1 1 1 100 1
cbdata helper_server (1007) 152 5 1 1 218.36 0 5 1 1 100 25
cbdata storeurlStateData (1008) 72 0 0 6 73.29 0 0 0 6 -1 11626510
HttpHeaderEntry 40 454078 17738 17773 0.01 1 454078 17738 17773 100 1012522342
HttpHdrRangeSpec 16 1 1 2 84.82 0 1 1 2 100 710826
HttpHdrRange 16 1 1 2 84.82 0 1 1 2 100 702925
HttpHdrContRange 24 156 4 37 165.29 0 156 4 37 100 930582
HttpHdrCc 40 23521 919 923 0.36 0 23521 919 923 100 75354286
cbdata Logfile (1011) 4192 1 5 5 218.36 0 1 5 5 100 2
MD5 digest 16 10082748 157543 179708 218.34 12 10082748 157543 179708 100 79439640
aio_thread 40 64 3 3 218.36 0 64 3 3 100 64
aio_request 96 1 1 17 218.34 0 1 1 17 100 115263020
cbdata RebuildState (1013) 112 0 0 1 218.36 0 0 0 1 -1 3
cbdata RemovalPurgeWalker (1015) 72 0 0 1 218.34 0 0 0 1 -1 2942370
cbdata ConnStateData (1016) 336 412 136 572 165.40 0 412 136 572 100 26772342
cbdata clientHttpRequest (1017) 1144 214 240 1683 165.40 0 214 240 1683 100 26300165
cbdata aclCheck_t (1018) 344 2 1 9 12.67 0 2 1 9 100 190065341
cbdata store_client (1019) 152 73429 10900 10907 0.01 1 73429 10900 10907 100 27175284
cbdata generic_cbdata (1020) 32 18 1 4 92.12 0 18 1 4 100 6007521
cbdata FwdState (1021) 112 194 22 161 165.40 0 194 22 161 100 20165074
cbdata ps_state (1022) 200 0 0 1 218.21 0 0 0 1 -1 20193563
cbdata ConnectStateData (1023) 96 11 2 92 143.07 0 11 2 92 100 20269438
cbdata storeIOState (1024) 136 33 5 20 84.82 0 33 5 20 100 11089475
cbdata HttpStateData (1025) 136 4448 591 623 92.36 0 4448 591 623 100 20102920
cbdata AddVaryState (1026) 168 0 0 1 46.53 0 0 0 1 -1 880081
cbdata ErrorState (1027) 160 2 1 106 143.07 0 2 1 106 100 1214268
cbdata LocateVaryState (1028) 144 0 0 1 208.06 0 0 0 1 -1 1057
VaryData 32 1 1 1 164.31 0 1 1 1 100 1057
cbdata SslStateData (1029) 120 0 0 2 194.56 0 0 0 2 -1 28489
cbdata RemovalPolicyWalker (1030) 56 0 0 1 199.46 0 0 0 1 -1 9
Total 21442296 1272306 1274605 0.71 100 21442296 1272306 1274605 100 3995708619
Cumulative allocated volume: 1.38 TB
Current overhead: 14338 bytes (0.001%)
Idle pool limit: 0.00 MB
memPoolAlloc calls: -299258677
memPoolFree calls: -320700974
String Pool Impact
(%strings) (%volume)
Short Strings 90 55
Medium Strings 6 13
Long Strings 3 30
Other Strings 0 2
Large buffers: 0 (0 KB)
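For anyone interpreting this dump: the leak signature is the StoreEntry and MD5 digest pools, each holding over 10 million live objects after 218 hours of uptime, together accounting for roughly 1 GB. A quick sanity check of the per-pool accounting (Python; the round-up-to-whole-KB convention is my assumption about how Lusca formats the "Allocated (KB)" column):

```python
# Sanity-check the dump's per-pool accounting: "Allocated (KB)" should be
# count * object size, rounded up to whole KB.  Using the StoreEntry row
# (10,082,748 objects of 88 bytes) and the MD5 digest row (16 bytes each):
store_entry_kb = (10082748 * 88 + 1023) // 1024  # round up to KB
md5_digest_kb = (10082748 * 16 + 1023) // 1024
print(store_entry_kb)  # 866487, the dump's StoreEntry "Allocated (KB)"
print(md5_digest_kb)   # 157543, the dump's MD5 digest "Allocated (KB)"
```

Both results match the dump exactly, so the columns are internally consistent; the problem is the object counts themselves, which never shrink.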
Why are the memPoolAlloc calls and memPoolFree calls values negative?
Original comment by hedy.joe@gmail.com
on 19 Dec 2011 at 4:13
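On the negative counters: the cumulative totals in this dump (3,995,708,619 pool operations in the "Hit Rate (number)" total) exceed 2^31 - 1, so a counter kept in a signed 32-bit integer wraps negative. I have not checked the Lusca source, so the storage type is an assumption, but reinterpreting the dump's own cumulative total as an int32 reproduces the printed memPoolAlloc value exactly:

```python
# Assumption: the memPoolAlloc/memPoolFree counters are signed 32-bit
# integers.  Reinterpreting the dump's cumulative total (3,995,708,619,
# which exceeds 2^31 - 1) as int32 reproduces the negative value shown.
def as_int32(n):
    """Truncate an unsigned count to a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

print(as_int32(3995708619))  # -299258677, the memPoolAlloc value in the dump
```

So the counts are not corrupt, just overflowed; the true totals are the printed values plus 2^32.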
Original issue reported on code.google.com by
renato.o...@gmail.com
on 22 Jun 2010 at 1:02