bachase opened this issue 1 month ago
Some pain points off the top of my head:
- `--import` does no validation.
- No public full history dumps are available, not even ones for certain ledger ranges.

@MarkusTeufelberger we are considering refreshing the RocksDB dependency to a more recent version (it will require code changes). Would that help with the last point?
I'm not sure if it [RocksDB] is usable for full history servers; I ran into issues with it years ago, switched to NuDB, and never looked back, tbh. In general it would help somewhat, if it were indeed a viable option, yes.
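On the `--import` validation point above, one cheap post-import sanity check would be to verify the ledger hash chain in `ledger.db`. A minimal sketch, assuming rippled's standard `Ledgers` table layout (`LedgerSeq`, `LedgerHash`, `PrevHash`); the function name and reporting are illustrative, not anything `--import` actually does:

```python
# Sketch: sanity-check the ledger hash chain in rippled's SQLite ledger.db.
# Assumes the standard Ledgers table (LedgerSeq, LedgerHash, PrevHash).
import sqlite3

def check_hash_chain(ledger_db_path: str) -> None:
    con = sqlite3.connect(ledger_db_path)
    rows = con.execute(
        "SELECT LedgerSeq, LedgerHash, PrevHash FROM Ledgers ORDER BY LedgerSeq"
    )
    prev_seq, prev_hash = None, None
    for seq, ledger_hash, prev_ref in rows:
        if prev_seq is not None and seq == prev_seq + 1:
            # Each ledger must reference the hash of its predecessor.
            if prev_ref != prev_hash:
                print(f"broken chain at ledger {seq}: "
                      f"PrevHash {prev_ref} != {prev_hash}")
        elif prev_seq is not None:
            print(f"gap: ledgers {prev_seq + 1}..{seq - 1} missing")
        prev_seq, prev_hash = seq, ledger_hash
    con.close()
```

This only checks header continuity, not transaction or state content, but it would catch truncated or corrupted imports early.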
Since it is (too) easy to criticize, here are some potential solutions for the points I brought up:
- Split the `transaction.db` and `ledger.db` files by ledger height (e.g. every million or 10 million ledgers?) into their own files. SQLite can handle really big database files, but the current state already seems to be getting into edge-case territory. (A minimal sketch of this idea follows the list.)
- `node.db` files should similarly be split by ledger height (this is what shards did). There will be some duplication of rarely changing inner and leaf nodes across these files, but that's a rather trivial overhead imho.
- `rippled` clusters (probably connected through some middleware that does the actual request parsing and routing, like for xrplcluster). Might be more of a documentation and middleware issue and a case of "well, you just need to do it this way".
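To make the first two bullets concrete, here is a minimal sketch of routing by ledger height to per-range SQLite files. `PARTITION_SIZE`, the file-naming scheme, and the helper names are hypothetical choices for illustration, not anything rippled implements today:

```python
# Sketch of the "split by ledger height" idea: route each ledger to a
# per-range SQLite file instead of one monolithic transaction.db/ledger.db.
import sqlite3

PARTITION_SIZE = 1_000_000  # hypothetical: one file per million ledgers

def partition_path(ledger_seq: int, base: str = "ledger") -> str:
    # Map a ledger sequence to the file covering its range.
    lo = (ledger_seq // PARTITION_SIZE) * PARTITION_SIZE
    hi = lo + PARTITION_SIZE - 1
    return f"{base}.{lo}-{hi}.db"

def open_partition(ledger_seq: int) -> sqlite3.Connection:
    return sqlite3.connect(partition_path(ledger_seq))
```

With a partition size of one million, `partition_path(81_234_567)` resolves to `ledger.81000000-81999999.db`, so a reader (or routing middleware, as in the third bullet) can map a requested ledger height directly to the file that holds it.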
Coming in cold after a few years away from rippled development ... I still see History Sharding from this link. Is it the same thing that was removed in https://github.com/XRPLF/rippled/pull/5066 🤔?
Clio's full history node uses less than 7 TB of data storage and maintains more off-chain indexes. But FH Clio needs another full history rippled and months of time to set up 🥲.
Summary
Recent issues related to the size and maintenance of SQLite databases (#5102 and #5095) highlight infrastructure challenges for users who operate full history servers.
Solution
This issue does not propose a solution, but is meant as a spot for ongoing discussion of requirements and corresponding solutions, continuing the conversation started in #5102.