Generating a genesis ledger with 300k accounts takes about 20s, but writing it to disk takes > 15 minutes (on my machine). If we want to test larger ledger sizes, this will quickly become unacceptably slow.
This can be reproduced by passing a config file to the daemon or to `runtime_genesis_ledger.exe`, for example:
```json
{
  "genesis": {
    "genesis_state_timestamp": "2021-06-03T00:00:00Z"
  },
  "proof": {
    "ledger_depth": 19
  },
  "ledger": {
    "name": "test-300k",
    "num_accounts": 300000,
    "accounts": [
      {
        "pk": "B62qmnkbvNpNvxJ9FkSkBy5W6VkquHbgN2MDHh1P8mRVX3FQ1eWtcxV",
        "balance": "6000000000.000000000",
        "delegate": "B62qmnkbvNpNvxJ9FkSkBy5W6VkquHbgN2MDHh1P8mRVX3FQ1eWtcxV",
        "sk": null,
        "timing": null,
        "_comment": "This is the demo key; you can find the private key in dockerfiles/auxiliary_entrypoints/01-run-demo.sh"
      }
    ]
  }
}
```
Proposed solution: use a batched RocksDB update to write all of the data at once, instead of issuing one write per account. We already have this capability exposed on the database ledger; we just need to hook it up here.
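The batching pattern being proposed can be sketched as follows. This is an illustrative stand-in, not the Mina code: it uses `sqlite3` from the Python standard library in place of RocksDB's write batch, since the point is the same in both cases — stage all writes in memory and flush them in a single atomic operation rather than paying per-write sync overhead for each of the 300k accounts.

```python
import sqlite3

# Hypothetical miniature "ledger": (public key, balance) pairs standing in
# for the generated genesis accounts.
accounts = [(f"pk_{i}", 6_000_000_000) for i in range(100_000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (pk TEXT PRIMARY KEY, balance INTEGER)")

# Batched write: every insert is staged inside one transaction and flushed
# with a single commit — analogous to accumulating puts in a RocksDB
# WriteBatch and applying it once, instead of committing per account.
with conn:
    conn.executemany("INSERT INTO ledger VALUES (?, ?)", accounts)

count = conn.execute("SELECT COUNT(*) FROM ledger").fetchone()[0]
print(count)
```

The slow path this replaces would be the equivalent of opening and committing a transaction per account; the batched form turns N synchronous flushes into one.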