Open matthewmrichter opened 6 years ago

We're using zetcd (running in a container, from quay.io/repository/coreos/zetcd, tag v0.0.5) as middleware between etcd and Mesos. We're seeing the memory usage of the zetcd process climb gradually but without bound. It was overflowing a 4 GB RAM instance very quickly, so we moved it to a host with 8 GB, but the zetcd container still keeps growing in memory usage.

I'd be interested in helping solve this. Is there anything I can provide to help expose the memory leak? Is there any automatic garbage collection or anything like that that can be implemented? Are there any docker container launch parameters to contain its hunger for memory?
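For what it's worth, we could cap the container's memory ourselves with something like the following (a rough sketch; the pull path and limits are illustrative), but that only gets zetcd OOM-killed sooner rather than fixing the growth:

# hard-cap the container at 4 GB of RAM and disallow extra swap
docker run -d --memory=4g --memory-swap=4g quay.io/coreos/zetcd:v0.0.5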
Hmm, do you see the same behavior with v0.0.4?
ref. https://github.com/coreos/zetcd/compare/v0.0.4...v0.0.5
Yep, same behavior with v0.0.4.
It would be best if you could provide reproducible steps. Also try heap-profiling zetcd.
I'm new to Go; could you provide some guidance on enabling heap profiling?
@matthewmrichter Please enable profiling via the zetcd --pprof-addr flag, and then do something like:

go tool pprof -seconds=30 http://zetcd-endpoint/debug/pprof/heap
go tool pprof ~/go/src/github.com/coreos/etcd/bin/etcd ./pprof/pprof.localhost\:2379.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz
go tool pprof -pdf ~/go/src/github.com/coreos/etcd/bin/etcd ./pprof/pprof.localhost\:2379.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz > ~/a.pdf

where you need to replace the */bin/etcd binary with the zetcd binary.
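Concretely, the workflow would look roughly like this (a sketch only; the ports, the --zkaddr/--endpoints values, and the saved profile file name are illustrative assumptions, so adjust them to your deployment):

# start zetcd with the pprof HTTP endpoint enabled
zetcd --zkaddr 0.0.0.0:2181 --endpoints etcd-host:2379 --pprof-addr localhost:6060

# sample the live heap for 30 seconds (saves a profile under ~/pprof/ by default)
go tool pprof -seconds=30 http://localhost:6060/debug/pprof/heap

# render a saved profile to PDF (requires graphviz), symbolizing against the zetcd binary
go tool pprof -pdf $(which zetcd) ~/pprof/pprof.localhost\:6060.001.pb.gz > ~/zetcd-heap.pdf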
I would first try to reproduce without containerization.
Great, I'll put some time into that. Thanks so far
Ok, I think the main offender here may actually be Marathon (https://mesosphere.github.io/marathon/), not Mesos. The memory usage really shoots up when Marathon starts.
I converted zetcd to run as a service rather than in a container and took a heap profile shortly after startup. Memory already blasts up to 5 GB at that point. I'll keep an eye on htop for a while to see whether it approaches 7+ GB as well, and I'll provide another profile.
Steps to reproduce -
I gave it a while, and according to htop the process had grown to 6 GB. Here's a second profile taken at this point; it looks mostly the same:
Here's a question, based on the bottleneck being in that ReadPacket method: currently I have etcd running on server A and Marathon/zetcd on server B. Would it make more sense for zetcd and etcd to live together on server A, rather than having zetcd reach out to etcd across the LAN?