Some questions that need to be answered as well, as outlined by @ruseinov:
- How much space a given object takes up when stored in ParityDB (potentially with a comparison against an uncompressed/compressed snapshot).
- How many keys are being deleted on each GC run; I'm working on that as we speak (see the sketch below).
- Whether or not the GC actually manages to run, or the node gets restarted somewhere along the way. That should be visible in the logs, but it will be definitive with my latest change.
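For the second point, a minimal sketch of what counting deletions per GC sweep could look like, assuming a simple key-value interface. `Store` and `gc_sweep` are hypothetical stand-ins for illustration, not Forest's actual types:

```rust
use std::collections::HashMap;

/// Stand-in for the on-disk store; in Forest this would be ParityDB.
struct Store {
    data: HashMap<Vec<u8>, Vec<u8>>,
}

impl Store {
    /// Returns true if the key existed and was removed.
    fn delete(&mut self, key: &[u8]) -> bool {
        self.data.remove(key).is_some()
    }
}

/// One GC sweep: delete every unreachable key and report how many went away.
fn gc_sweep(store: &mut Store, unreachable: &[Vec<u8>]) -> usize {
    let mut deleted = 0;
    for key in unreachable {
        if store.delete(key) {
            deleted += 1;
        }
    }
    // In the node this would go through the regular logging; here we print.
    println!("GC sweep finished: {deleted} keys deleted");
    deleted
}

fn main() {
    let mut store = Store {
        data: HashMap::from([(b"a".to_vec(), vec![0u8; 16]), (b"b".to_vec(), vec![0u8; 32])]),
    };
    // Only one of the two candidate keys actually exists, so one deletion.
    let removed = gc_sweep(&mut store, &[b"a".to_vec(), b"missing".to_vec()]);
    assert_eq!(removed, 1);
}
```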
Issue summary
It turns out we don't have a clear understanding of what to expect from a long-running node's disk usage, given that garbage collection should now be working correctly. Should it grow indefinitely? If so, at what rate?
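As a starting point for "at what rate", a back-of-envelope sketch under loudly stated assumptions: Filecoin produces one epoch every 30 seconds (2880 per day), and the retained-bytes-per-epoch figure below is a made-up placeholder to be replaced with measured numbers:

```rust
const EPOCHS_PER_DAY: u64 = 2880; // Filecoin: one epoch every 30 seconds
const RETAINED_BYTES_PER_EPOCH: u64 = 2 * 1024 * 1024; // assumption, for illustration only

/// If GC reclaims everything outside the retention window, steady-state
/// growth should track only the chain data that must be kept per epoch.
fn daily_growth_bytes() -> u64 {
    EPOCHS_PER_DAY * RETAINED_BYTES_PER_EPOCH
}

fn main() {
    let gib = daily_growth_bytes() as f64 / (1024.0 * 1024.0 * 1024.0);
    println!("Expected growth: ~{gib:.2} GiB/day under these assumptions");
}
```

Plugging in measured per-epoch numbers would turn this into a concrete answer to the sample case below.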
Sample case: given a 100 GiB Forest node, how long can it run (assuming no other issues with chain/upgrades/etc.) without intervention on:
If it cannot run indefinitely, why is that? What takes up space in the database that requires a restart and a fresh bootstrap? Can we potentially do better?
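To help answer "what takes up space", one rough approach is to break the on-disk size down by file in the database directory; ParityDB keeps its data in multiple files in a single directory, so this can hint at what dominates. The path and the per-file grouping are assumptions, not Forest's actual layout:

```rust
use std::collections::BTreeMap;
use std::fs;
use std::io;
use std::path::Path;

/// Sum the size of every regular file directly under `dir`, keyed by name.
fn sizes_by_file(dir: &Path) -> io::Result<BTreeMap<String, u64>> {
    let mut sizes = BTreeMap::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        if meta.is_file() {
            sizes.insert(entry.file_name().to_string_lossy().into_owned(), meta.len());
        }
    }
    Ok(sizes)
}

fn main() -> io::Result<()> {
    // Hypothetical ParityDB directory; adjust to the node's actual data dir.
    for (name, bytes) in sizes_by_file(Path::new("/var/lib/forest/paritydb"))? {
        println!("{name}: {:.2} MiB", bytes as f64 / (1024.0 * 1024.0));
    }
    Ok(())
}
```

Running this periodically (e.g. before and after each GC sweep) would show whether the reclaimed space matches the number of deleted keys reported above.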
Outcomes:
Other information and links