I've already messaged @jamesturner246 about this, but I'll post here too for posterity.
I ran an HPC job that lasted about an hour (using the HLM_India example) and saved the "adjustments" broadcast from the baseline simulation to the intervention one in JSON format. In total the file size was ~1 MB, so even given that real simulations are likely to run for longer, this seems perfectly manageable.
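For context, the measurement above came from a dump along these lines. This is a reconstruction rather than the actual script; the `adjustments` structure and the filename are invented for illustration:

```python
import json
import os

# Hypothetical: one JSON-serialisable dict per timestep, collected
# from the baseline run's broadcast channel.
adjustments = [
    {"timestep": t, "adjustments": {"param_a": 0.1 * t}}
    for t in range(100)
]

with open("baseline_messages.json", "w") as f:
    json.dump(adjustments, f)

# Check how much disk the saved messages take.
print(f"{os.path.getsize('baseline_messages.json') / 1e6:.2f} MB")
```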
The current approach to computing baseline and intervention is to run both at the same time, communicating messages over a shared channel. This is wasteful, because we shouldn't need to recompute the baseline for each intervention.
We should profile how long it takes and how much data needs to be saved if we run the baseline simulation completely up-front, saving reusable communication messages to be loaded during intervention runs.
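As a concrete picture of what that could look like, here's a minimal record/replay sketch. The channel classes and the commented-out `run_baseline`/`run_intervention` calls are hypothetical stand-ins for whatever the framework actually exposes:

```python
import json


class RecordingChannel:
    """Collects every broadcast from the baseline run and saves it to disk."""

    def __init__(self, path):
        self.path = path
        self.messages = []

    def send(self, message):
        # Message assumed JSON-serialisable (as in the ~1 MB test above).
        self.messages.append(message)

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.messages, f)


class ReplayChannel:
    """Feeds the recorded baseline messages to an intervention run, in order."""

    def __init__(self, path):
        with open(path) as f:
            self.messages = iter(json.load(f))

    def receive(self):
        # Raises StopIteration if an intervention run asks for more
        # messages than the baseline produced.
        return next(self.messages)


# Hypothetical usage: record the baseline once, replay it per intervention.
# channel = RecordingChannel("baseline_messages.json")
# run_baseline(channel)
# channel.save()
# for intervention in interventions:
#     run_intervention(intervention, ReplayChannel("baseline_messages.json"))
```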
Some things we might consider:
- NB: we might be able to use some kind of incremental compression method in a library such as HDF5. We need to investigate if and how we can do this; there's a sketch of one option after this list.
- Serialised text messages vs raw binary messages, for speedier saving and loading; see the second sketch below.
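On the HDF5 point: h5py supports chunked, resizable datasets with per-chunk compression, which gives us incremental (append-as-you-go) writes without recompressing everything each time. A sketch, assuming each message can be flattened to a fixed-width float array (the width and the timestep loop are made up):

```python
import h5py
import numpy as np

MSG_WIDTH = 16  # hypothetical: floats per flattened message

with h5py.File("baseline_messages.h5", "w") as f:
    # Resizable, chunked dataset; gzip is applied chunk by chunk,
    # so appending during the run only compresses new data.
    ds = f.create_dataset(
        "adjustments",
        shape=(0, MSG_WIDTH),
        maxshape=(None, MSG_WIDTH),
        chunks=(256, MSG_WIDTH),
        compression="gzip",
        compression_opts=4,
    )
    for t in range(1000):  # stand-in for the baseline timestep loop
        message = np.random.rand(MSG_WIDTH)  # stand-in for a real broadcast
        ds.resize(ds.shape[0] + 1, axis=0)
        ds[-1] = message
```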
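And for the text-vs-binary question, a rough round-trip benchmark would tell us quickly whether it matters. Here pickle stands in for a generic binary format, and the message shape is invented:

```python
import json
import pickle
import time

messages = [
    {"timestep": t, "values": [0.1 * i for i in range(100)]}
    for t in range(10_000)
]

for name, dumps, loads in [
    ("json", lambda m: json.dumps(m).encode(), lambda b: json.loads(b)),
    ("pickle", pickle.dumps, pickle.loads),
]:
    start = time.perf_counter()
    blob = dumps(messages)
    roundtrip = loads(blob)
    elapsed = time.perf_counter() - start
    assert roundtrip == messages  # sanity check: lossless round trip
    print(f"{name}: {len(blob) / 1e6:.1f} MB, {elapsed:.3f} s round trip")
```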