Run the fuzzing run on a sufficiently long-running market (at least 24h of null-chain time and a reasonable average number of orders per second) under two scenarios that are identical except for the following (search, initial, release) scaling factors:
(search, initial, release) = (1.050, 1.100, 1.150)
(search, initial, release) = (1.001, 2.000, 4.000)
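The two triples above read as margin scaling factors (search level, initial margin, collateral release), which are normally required to satisfy 1 ≤ search ≤ initial ≤ release. A minimal sketch of representing and sanity-checking the two scenarios (the dictionary field names and scenario labels are illustrative assumptions, not taken from any particular config schema):

```python
# Sketch: label the two experiment scenarios and check the usual
# ordering constraint 1 <= search <= initial <= release.
# Field names and scenario names are illustrative assumptions.

SCENARIOS = {
    "tight": {"search": 1.050, "initial": 1.100, "release": 1.150},
    "wide":  {"search": 1.001, "initial": 2.000, "release": 4.000},
}

def is_valid(factors: dict) -> bool:
    """Scaling factors must be >= 1 and non-decreasing."""
    return 1.0 <= factors["search"] <= factors["initial"] <= factors["release"]

for name, factors in SCENARIOS.items():
    assert is_valid(factors), f"invalid scaling factors for scenario {name!r}"
```

Keeping everything else in the two runs identical means any difference in stored data volume can be attributed to how often these bounds trigger margin-driven events.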
The aim is to run this, settle the markets, make sure the data node has caught up, and then compare the size of the Postgres database in each case.
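For the final comparison step, the database size can be read with Postgres's built-in `pg_database_size` function and the two runs compared. A sketch of the comparison, assuming the sizes in bytes have already been measured (the database name and the example byte counts below are placeholders, not real results):

```python
# Sketch: compare Postgres database sizes from the two runs.
# The raw sizes would come from something like:
#   psql -At -c "SELECT pg_database_size('<datanode_db>');"
# where <datanode_db> is the data node's database name (deployment-specific).

def size_ratio(size_a: int, size_b: int) -> float:
    """Return size_a as a multiple of size_b."""
    return size_a / size_b

def human(nbytes: float) -> str:
    """Format a byte count roughly like Postgres's pg_size_pretty."""
    for unit in ("bytes", "kB", "MB", "GB", "TB"):
        if nbytes < 1024:
            return f"{nbytes:.0f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.0f} PB"

# Example with made-up measurements (12 GB vs 9 GB):
tight, wide = 12_884_901_888, 9_663_676_416
print(f"tight: {human(tight)}, wide: {human(wide)}, "
      f"ratio {size_ratio(tight, wide):.2f}x")
```

Measuring only after settlement and data-node catch-up matters here: an in-flight run would undercount whichever scenario still has events queued for ingestion.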
Run a separate experiment where you vary the mark price frequency: