Thanks for the report!
Hey @buckie I've just started looking at this, and maybe I'm being dense: aren't you just allocating a bunch of bytestrings during the upward slopes and then consuming them during the (very steep) downward slopes? The residency of the bytestrings you allocate in createAndWriteMsgs seems to be what's expected at first glance for -hm and -hc; I'm not sure how to interpret retainer profiles yet.
I think that your interpretation is correct. Though I have run into space leaks with unagi-chan (or, more specifically, I had a leak that I thought it caused, and the leak went away when I replaced it with Chan), this is not a proper replication of the problem.
I added a prime sieve to mark the end of various parts of the replication code and got the asymmetric triangle (gradual slope up, steep slope down with a step halfway through) that I was expecting to see. Do you know what the x-axis's "seconds" means for profiling (or can you point me at docs for it)? It's certainly not real-time seconds, otherwise the threadDelay would have helped clear this up (as I thought it would).
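For reference, this is roughly the kind of phase marker I mean; a minimal sketch, not the actual code from my fork:

```haskell
-- A naive prime "sieve" used purely as a CPU-bound marker: running it
-- between stages makes the boundaries between phases easy to spot in
-- the heap profile.
phaseMarker :: Int -> IO ()
phaseMarker n = print (length (primesTo n))
  where
    primesTo m = sieve [2 .. m]
    sieve []       = []
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
```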
Sorry for the inconvenience and my failure to replicate the issue I saw before.
> Do you know what the x-axis's "seconds" means for profiling?
Huh, I guess I always assumed it was wall-clock time!
> Sorry for the inconvenience and my failure to replicate the issue I saw before.
No problem at all. I really appreciate the effort you put into repro-ing this. And if you have time to open another ticket when you can track down the leak you were observing, that would be greatly appreciated!
I've used unagi-chan for around a year on a couple of projects. I had always noticed generally climbing memory usage but never needed to track down why. A few weeks ago that changed and I needed to track down our space leak. Sadly, all signs pointed to unagi-chan as the culprit, and once it was purged from the app the leak disappeared. I've replicated the issue we were seeing in a fork that adds profiling flags to the base library, as I was hoping that would give better insight into what is going on: https://github.com/buckie/unagi-chan/tree/bug/spaceleak
This, sadly, didn't work. It seems that there's too much inlining without SCC annotations to get a good look at the root cause.
The replication creates 10k large-ish bytestrings, puts them on a channel, reads half (summing their lengths), performs a major GC, reads the rest, then performs another major GC. This is done first with one thread and then with two. I've included the .ps files in the repo plus a couple of scripts to replicate the environment and make running the tests easier.
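To make the shape of the test concrete, here's a minimal sketch of the single-threaded variant. It's my approximation rather than the code in the fork, assuming unagi-chan's Control.Concurrent.Chan.Unagi API and System.Mem.performMajorGC; the element count and bytestring size are placeholders:

```haskell
import qualified Data.ByteString as BS
import qualified Control.Concurrent.Chan.Unagi as U
import Control.Monad (foldM, forM_)
import System.Mem (performMajorGC)

main :: IO ()
main = do
  (inCh, outCh) <- U.newChan
  -- write 10k large-ish bytestrings onto the channel
  forM_ [1 .. 10000 :: Int] $ \i ->
    U.writeChan inCh (BS.replicate (64 * 1024) (fromIntegral i))
  -- read the first half, summing lengths so the reads can't be elided
  s1 <- drainSum outCh 5000
  performMajorGC
  -- read the second half, then GC again
  s2 <- drainSum outCh 5000
  performMajorGC
  print (s1 + s2)
  where
    drainSum out k =
      foldM (\acc _ -> (+ acc) . BS.length <$> U.readChan out) (0 :: Int) [1 .. k :: Int]
```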
I've done some other tests as well, where the reader thread outruns the writer thread, and those show the expected flat (or pyramid-ish, because I drained half, ran a 1 s threadDelay, then drained the rest) memory usage; see the sketch below. My interpretation of the results is that memory isn't allowed to be GC'd unless the channel is fully empty.
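And a sketch of the "reader outruns the writer" variant, again an approximation of the test's shape rather than the code in the fork (the throttling delay is made up):

```haskell
import qualified Data.ByteString as BS
import qualified Control.Concurrent.Chan.Unagi as U
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (foldM, forM_)

readerOutrunsWriter :: IO ()
readerOutrunsWriter = do
  (inCh, outCh) <- U.newChan
  -- throttled writer in its own thread so the reader stays ahead of it
  _ <- forkIO $ forM_ [1 .. 10000 :: Int] $ \i -> do
         U.writeChan inCh (BS.replicate (64 * 1024) (fromIntegral i))
         threadDelay 100
  s1 <- drainSum outCh 5000
  threadDelay 1000000  -- the 1 s pause that gives the pyramid-ish shape
  s2 <- drainSum outCh 5000
  print (s1 + s2)
  where
    drainSum out k =
      foldM (\acc _ -> (+ acc) . BS.length <$> U.readChan out) (0 :: Int) [1 .. k :: Int]
```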