Closed: david415 closed this issue 9 years ago
(pprof) top 10
2959.56kB of 3004.41kB total (98.51%)
Dropped 84 nodes (cum <= 15.02kB)
Showing top 10 nodes out of 24 (cum >= 590.09kB)
flat flat% sum% cum cum%
2000kB 66.57% 66.57% 2000kB 66.57% github.com/david415/HoneyBadger.(*pageCache).grow
586.06kB 19.51% 86.08% 586.06kB 19.51% github.com/google/gopacket/pcap._Cfunc_GoBytes
196.77kB 6.55% 92.62% 196.77kB 6.55% github.com/david415/HoneyBadger/types.NewRing
80.37kB 2.67% 95.30% 80.37kB 2.67% fmt.Sprintf
44.09kB 1.47% 96.77% 44.09kB 1.47% github.com/google/gopacket/layers.errorFunc
20.20kB 0.67% 97.44% 20.20kB 0.67% github.com/david415/HoneyBadger.(*Connection).stateDataTransfer
12.14kB 0.4% 97.84% 208.91kB 6.95% github.com/david415/HoneyBadger.(*DefaultConnFactory).Build
9.25kB 0.31% 98.15% 2009.25kB 66.88% github.com/david415/HoneyBadger.newPageCache
6.66kB 0.22% 98.37% 235.74kB 7.85% github.com/david415/HoneyBadger.(*Dispatcher).setupNewConnection
4.03kB 0.13% 98.51% 590.09kB 19.64% github.com/david415/HoneyBadger.func·001
(pprof)
Ha, yeah. The page cache implementation we use (borrowed from google/gopacket's tcpassembly) has yet another accounting bug: when we hit the page cache's maximum threshold we flush only a single page, whereas we need to keep flushing until we are back below the threshold. Fixed in https://github.com/david415/HoneyBadger/commit/751664a03f592e412cddc9720a543904be2bec8f
We currently suspect a memory leak. Let's find out! We are going to follow Mischief's recommendation and use Dave Cheney's Go profiling package:
https://github.com/davecheney/profile