// When it starts up, it will sync from other database(s).
// It will look at the sync_from peers and establish a connection to each one.
// On first connecting, it will attempt to sync all records.
// It should do so against the current db directly if possible.
// Keeping the same structure of tables / key values looks important for that.
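// A minimal sketch of the startup sync pass above. sync_on_startup and
// fetch_all_records are hypothetical names, not an existing API; the fetch
// helper is injected so the sketch stays self-contained.

```python
def sync_on_startup(sync_from, fetch_all_records, db):
    """Pull every record from each sync_from peer into the local db.

    fetch_all_records(peer) is assumed to stream (key, value) pairs;
    db stands in for the local store.
    """
    for peer in sync_from:
        for key, value in fetch_all_records(peer):
            # Same table structure / key values on both sides, so a
            # straight copy covers the direct-sync case.
            db[key] = value
    return db
```

// With matching structures this is the whole direct-sync path; mismatched
// structures would need the value translation mentioned later.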
// More complex sharding:
// It will know which server to route any request to when the data is sharded.
// When receiving data, it will split it up across the relevant machines.
// Keeping grouped records together does not seem easy, or even possible.
// Flow control likely needs improving: when we receive keys, we can check for them in the db and request the ones we don't have.
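// The key-exchange side of that flow control could look like this sketch
// (missing_keys is a hypothetical name; db stands in for the local store):

```python
def missing_keys(offered_keys, db):
    """Given keys a peer has offered, return the ones we still need.

    A real implementation would check the storage engine rather than
    a dict, but the shape of the check is the same.
    """
    return [key for key in offered_keys if key not in db]
```

// The returned list is what we would then send off a fetch request for.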
// Also, a get_table_rows_hash would be useful for checking that some tables are the same.
// or table_model_hash
// or db_core_hash
// table_records_hash
// The table_model_hash would be nice if it included index rows too.
// These hashes should quickly tell whether syncing the core is possible.
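// One way table_records_hash could work: digest the rows in a canonical
// order, so two nodes compare a single hash instead of whole tables.
// A sketch, assuming the rows fit in memory:

```python
import hashlib

def table_records_hash(rows):
    """Digest a table's rows in sorted key order.

    rows is a dict of key -> value; sorting makes the digest
    independent of insertion order, so equal tables always agree.
    """
    h = hashlib.sha256()
    for key, value in sorted(rows.items()):
        h.update(repr((key, value)).encode("utf-8"))
    return h.hexdigest()
```

// table_model_hash and db_core_hash could be built the same way, over the
// schema rows (plus index rows) and over all tables respectively.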
// Direct sync is the fast path.
// Otherwise it would need to do some value translation / fetch data in denormalised form, which could be slower.
// Want good progress updates.
// Would need to sync upon start.
// Soon, want to work on a sharded db and have it running on 8 servers.
// Could try 4 shard processes on 1 node: that does not increase storage, but it would increase throughput and be a way to test the sharding.
// Separate machines would be better for the moment.
// Would definitely be nice to go wide for storage and speed.
// Could make it so that any server forwards storage operations to the relevant servers.
// It would use a formula to work out, from the binary key, which shard the record goes to.
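// That formula could be as simple as a stable hash of the binary key
// modulo the shard count. A sketch (shard_for_key is a hypothetical name):

```python
import hashlib

def shard_for_key(key, shard_count):
    """Map a binary key to a shard index.

    A cryptographic hash keeps the mapping stable across processes and
    restarts (unlike Python's builtin hash(), which is salted per run).
    """
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % shard_count
```

// Note that plain modulo remaps most keys when shard_count changes;
// switching sharding modes seamlessly would need something like
// consistent hashing or a rendezvous scheme instead.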
// System could seamlessly change between sharding modes.
// Different nodes could have different sharding rules.
// Would know which machine to send any data to.