HelloKashif opened this issue 3 years ago
Hi @HelloKashif,
Thank you for raising this issue and flagging the misleading documentation. We are looking into this now and will get back to you asap. Thanks!
Hey @HelloKashif,
Thanks for flagging this. I was able to reproduce the insane memory consumption. From what I saw, the mainnet node consumes ~150GB of memory, so our memory guidance is definitely off.
This was more of a reference implementation to provide an example of Rosetta for UTXO chains, so I would hold off on any production use. That said, we're going to investigate what went wrong.
Describe the bug
A fully synced btc mainnet node consumes upward of 64GB of RAM to run (the recommended RAM size in the docs is 16GB). We have tried running on small-memory machines (<20GB or <40GB) by tweaking the Badger config and using swap memory instead, but giving the node any less RAM makes indexing/block adding extremely slow: new blocks constantly lag behind the raw node by 10+ minutes. Moreover, small-memory nodes constantly crash with "Error EOF" and are very slow to serve data via the Rosetta APIs.
Current Badger settings:
- MaxTableSize = PerformanceMaxTableSize (also tried the default size, but no effect)
- MaxLogSize = PerformanceLogSize (also tried the default size, but no effect)
- CacheSize = 6GB (this seems to have the biggest impact on reducing memory usage, but a smaller cache causes slow indexing)
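For anyone experimenting with this kind of tuning, the knobs above map roughly onto the Badger v2 options API as sketched below. This is only an illustration: the option names are from Badger v2, but the specific values are assumptions for a low-memory setup, not verified recommendations (and in rosetta-bitcoin these would have to be wired through its storage configuration rather than set directly).

```go
package main

import (
	badger "github.com/dgraph-io/badger/v2"
)

// lowMemoryOptions sketches a reduced-memory Badger configuration.
// All values here are illustrative guesses for a small machine.
func lowMemoryOptions(dir string) badger.Options {
	return badger.DefaultOptions(dir).
		// Smaller memtables flush to disk sooner, trading write
		// throughput for a lower resident set.
		WithMaxTableSize(16 << 20). // 16MB (assumption; default is larger)
		WithNumMemtables(2).
		// Cap the block cache; per the report above, cache size is the
		// biggest lever on memory usage, at the cost of slower indexing.
		WithBlockCacheSize(2 << 30). // 2GB instead of 6GB (assumption)
		// Don't pin level 0 tables in memory (v2-only option).
		WithKeepL0InMemory(false)
}
```

The trade-off described in the issue shows up directly here: shrinking the cache and memtables lowers peak RSS but slows indexing, which is consistent with the lag observed on <20GB machines.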
To Reproduce
Run a fully synced mainnet node with swap disabled.
Expected behavior
The node should be able to run on small machines.
Additional context
We should either update the docs to reflect the real memory usage or provide better-tuned settings to run this effectively at scale.
Some screenshots of memory usage (note: the sudden drops are sadly Rosetta crashes, even after giving it such a big VM):