Closed romange closed 1 month ago
btw, maybe (5) vastly differs between experiments just because RSS peaks quickly and we sample it at random times in between, so it might reach 14.8GB with both setups 🤷🏼 (have not checked it)
Ok, I got totally confused. In `debug populate 10000 test 1000000`
the first integer is the count and the second is the value length. That is quite a big value, so it is expected to require a big margin, and the problem is not as serious as I thought. @ashotland
Still, understanding how we limit memory usage is important.
Closing - will be handled by #3668
the experiment is as follows:
step(3): used_memory_peak_rss:10080092160
step(5): used_memory_peak_rss:13399830528
increase of 32% is a lot!
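As a back-of-the-envelope check (a minimal sketch; the byte counts are copied verbatim from the numbers above), the extra peak RSS between steps (3) and (5) works out to roughly 3.1 GiB:

```python
# Peak RSS values reported above (bytes).
baseline = 10_080_092_160         # step (3): used_memory_peak_rss
during_snapshot = 13_399_830_528  # step (5): used_memory_peak_rss

increase = during_snapshot - baseline
pct = 100.0 * increase / baseline
print(f"extra RSS: {increase / 2**30:.2f} GiB ({pct:.1f}%)")
# prints: extra RSS: 3.09 GiB (32.9%)
```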
if we use
`debug populate 1000 test 10000000`
i.e. smaller values but with the same used memory, then
(3) used_memory_peak_rss:10051952640
(5) used_memory_peak_rss:14809550848
which is even more surprising. In short, we need to learn why we spend so much memory during snapshotting and devise mechanisms to limit it to a well-defined (expected) margin.
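The same arithmetic for the second configuration (again just a sketch over the peaks reported above) gives an even larger snapshotting margin, about 4.4 GiB, or ~47% over the baseline:

```python
# Peak RSS reported for the second run (bytes).
baseline = 10_051_952_640         # step (3): used_memory_peak_rss
during_snapshot = 14_809_550_848  # step (5): used_memory_peak_rss

overhead = during_snapshot - baseline
pct = 100.0 * overhead / baseline
print(f"snapshot overhead: {overhead / 2**30:.2f} GiB ({pct:.1f}%)")
# prints: snapshot overhead: 4.43 GiB (47.3%)
```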
cc @ashotland :)