sebheitzmann opened 2 weeks ago
It's the same problem without compaction ... To be continued
Thanks for checking! I assume it's the behavior of the super sstable and its memory-mapped file. You can run a memprofile and see where all the memory goes :)
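For reference, a minimal sketch of capturing such a heap profile directly from a Go workload (the file name and the placement of the dump are illustrative assumptions, not part of the library):

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// dumpHeapProfile writes the current heap profile to path, forcing a GC
// first so the profile reflects live memory rather than stale statistics.
func dumpHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	runtime.GC()
	return pprof.WriteHeapProfile(f)
}

func main() {
	// ... run the sstable workload here ...
	if err := dumpHeapProfile("heap.pprof"); err != nil {
		log.Fatal(err)
	}
	// Inspect with: go tool pprof -top heap.pprof
}
```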
The step up is from the memstore flush, not the compaction. I'm still trying to find out why; pprof doesn't give me useful information.
I assume that, because of the async flushing through the channel, the memory temporarily increases. If you can't write fast enough, the memory fills up quickly again with a second memstore, which makes it appear that the usage doubles.
The second memstore flush then waits for the channel to free up again, so this provides some backpressure in this situation.
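Here is a minimal sketch of the pattern being described (not the library's actual code; the channel capacity, the memstore type, and the timings are assumptions for illustration). While the flusher works on one memstore and a second one sits in the channel, both remain reachable, which is why memory can briefly look doubled:

```go
package main

import (
	"fmt"
	"time"
)

type memstore struct{ data []byte }

func main() {
	// Capacity 1: at most one memstore can wait to be flushed. A further
	// send blocks the writer until the flusher drains the channel --
	// that blocking is the backpressure described above.
	flushCh := make(chan *memstore, 1)
	done := make(chan struct{})

	go func() { // async flusher
		for ms := range flushCh {
			time.Sleep(100 * time.Millisecond) // simulate a slow disk write
			fmt.Printf("flushed %d bytes\n", len(ms.data))
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		ms := &memstore{data: make([]byte, 1<<20)}
		flushCh <- ms // blocks while a previous flush is still pending
	}
	close(flushCh)
	<-done
}
```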
The big step is the flush, and I've stopped all activity on the database. The memory should drop back to its previous level after the flush.
```
Showing top 10 nodes out of 55
      flat  flat%   sum%        cum   cum%
24869.75kB 78.88% 78.88% 24869.75kB 78.88%  bytes.growSlice
 4097.37kB 13.00% 91.88%  4097.37kB 13.00%  github.com/thomasjungblut/go-sstables/recordio.BufferedIOFactory.CreateNewWriter
 1024.02kB  3.25% 95.13%  1024.02kB  3.25%  google.golang.org/protobuf/internal/impl.consumeBytesNoZero
  512.56kB  1.63% 96.75%   512.56kB  1.63%  runtime.makeProfStackFP
```
It seems that the writer doesn't release the memory.
guess it's the buffer pool? https://github.com/thomasjungblut/go-sstables/blob/main/recordio/file_writer.go#L60
Maybe, I'm investigating. This pool should be released after the close.
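To illustrate the hypothesis, here is a simplified stand-in for a buffer pool (the real implementation in recordio may differ; the types and method names here are assumptions). As long as the writer, or anything else, still holds a reference to the pool, every buffer in its free list stays reachable and the GC cannot reclaim it:

```go
package main

import "fmt"

// bufferPool is a simplified stand-in for the pool linked above.
type bufferPool struct{ free [][]byte }

func (p *bufferPool) get(size int) []byte {
	if n := len(p.free); n > 0 {
		b := p.free[n-1]
		p.free = p.free[:n-1]
		return b[:0]
	}
	return make([]byte, 0, size)
}

func (p *bufferPool) put(b []byte) { p.free = append(p.free, b) }

type writer struct{ pool *bufferPool }

// Close drops the pool reference; without this, the pooled buffers
// remain pinned for as long as the writer itself is reachable.
func (w *writer) Close() { w.pool = nil }

func main() {
	w := &writer{pool: &bufferPool{}}
	w.pool.put(w.pool.get(4 << 20)) // a 4 MiB buffer now lives in the pool
	w.Close()
	fmt.Println("pool released, buffers collectible")
}
```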
Yeah, the GC behavior depends a lot on the memory pressure of the machine. There's a great blog post that highlights this: https://tip.golang.org/doc/gc-guide
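Related to that: RSS staying high after a flush isn't necessarily a leak, since the Go runtime holds freed heap memory for a while before returning it to the OS. A small hedged example (the 64 MiB allocation stands in for a memstore) of telling live heap apart from runtime-held memory:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func printMem(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc is live memory; HeapIdle-HeapReleased is memory the
	// runtime still holds but hasn't returned to the OS, which can make
	// RSS look like a leak even after a flush completes.
	fmt.Printf("%s: alloc=%d MiB, held-not-released=%d MiB\n",
		label, m.HeapAlloc>>20, (m.HeapIdle-m.HeapReleased)>>20)
}

func main() {
	buf := make([]byte, 64<<20) // simulate a memstore
	buf[0] = 1
	printMem("after alloc")
	buf = nil
	_ = buf
	debug.FreeOSMemory() // force a GC and return freed memory to the OS
	printMem("after free")
}
```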
I think that the compaction process has a memory leak.
I will try to figure out why.