tapios / risk-networks

Code for risk networks: a blend of compartmental models, graphs, data assimilation and semi-supervised learning

Batching Nodally defined model parameters #169

Open odunbar opened 3 years ago

odunbar commented 3 years ago

We have a batching procedure for observed states in the DataAssimilator. It batches only the states, not the model parameters, which is inefficient when dealing with TransitionRates model parameters that can be nodally defined.

Consider observing 100 states (1 state per node) in 4 batches of 25. Assume the TransitionRates clinical parameters consist of 3 constant parameters and 1 nodally defined parameter, in the order [const, const, nodal, const], so the total number of parameters is 1 + 1 + 100 + 1 = 103. In the current setup, each of the 4 EAKF batches uses 25 states and all 103 parameters. If we could identify which nodes correspond to the batched states, each batch could instead use 25 states and only 1 + 1 + 25 + 1 = 28 parameters.

The problem is to match which model parameters should go into which batch, since the clinical parameters, when loaded into the data assimilator, are all concatenated together into a single np.array.
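
A minimal sketch of one possible approach, assuming we carry a small layout description of the concatenated parameter array alongside it. The layout tags, the helper `batch_parameter_indices`, and the example numbers are illustrative assumptions, not the existing DataAssimilator API:

```python
import numpy as np

def batch_parameter_indices(layout, n_nodes, batch_nodes):
    """Return indices into the flat concatenated parameter array for one batch.

    layout      : list of 'const' / 'nodal' tags describing the concatenation
                  order, e.g. ['const', 'const', 'nodal', 'const'] (assumed)
    n_nodes     : total number of nodes (each nodal block has this length)
    batch_nodes : node indices observed in this batch
    """
    indices = []
    offset = 0
    for tag in layout:
        if tag == 'const':
            # constant parameters are shared by all batches, always keep them
            indices.append(np.array([offset]))
            offset += 1
        else:
            # nodal parameters: keep only the entries for the batched nodes
            indices.append(offset + np.asarray(batch_nodes))
            offset += n_nodes
    return np.concatenate(indices)

# Example matching the numbers above: 100 nodes, batch 0 observes nodes 0..24.
layout = ['const', 'const', 'nodal', 'const']
params = np.arange(103, dtype=float)      # stand-in for the concatenated clinical parameters
idx = batch_parameter_indices(layout, n_nodes=100, batch_nodes=np.arange(25))
batch_params = params[idx]                # shape (28,) = 1 + 1 + 25 + 1
print(batch_params.shape)
```

With something like this, each EAKF batch would only see the 28-entry slice of parameters relevant to its observed nodes, and the updated slice could be scattered back into the full array with the same index set.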

Coauthored with @jinlong83