Alterations:
Changed the way weights are calculated: the previous weight Functions are reused as storage for the next ones, which saves preallocation time.
Changed the weight tests to account for this, and updated the demos to pass in an extra argument.
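The reuse pattern above can be sketched as follows. This is a rough stand-in using numpy, not the actual code (which stores weights in Firedrake Functions); the `update_weights` helper and its arguments are hypothetical:

```python
import numpy as np

def update_weights(weights, likelihoods):
    """Update particle weights in place, reusing the existing buffer.

    Hypothetical sketch: the real implementation reuses the previous
    weight Functions as storage rather than numpy arrays.
    """
    # Multiply into the existing array instead of allocating a new one.
    np.multiply(weights, likelihoods, out=weights)
    # Normalise in place so the weights sum to one.
    weights /= weights.sum()
    return weights

w = np.array([0.25, 0.25, 0.25, 0.25])
buf = w
update_weights(w, np.array([2.0, 1.0, 1.0, 0.0]))
assert buf is w  # the same buffer holds the new weights
```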
_Made all ensembles within `ensemble_transform_update.py` and `seamless_coupling_update.py` vector `Function`s, which removes all preallocation costs on those ensembles._
Tests have been updated to reflect this change.
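The idea behind the ensemble change is that a single vector-valued object holds all members in one contiguous block, so no per-member (or per-update) allocation is needed. A minimal numpy stand-in, assuming the real code uses Firedrake vector `Function`s and an N x N transform matrix (the `transform` helper here is hypothetical):

```python
import numpy as np

n_members, n_dofs = 8, 64

# Old layout (sketch): one array allocated per ensemble member.
ensemble_list = [np.zeros(n_dofs) for _ in range(n_members)]

# New layout (sketch): one contiguous block holding the whole ensemble,
# analogous to a single vector Function with n_members components.
ensemble = np.zeros((n_members, n_dofs))

def transform(ensemble, T):
    """Apply an N x N ensemble transform, writing back into `ensemble`.

    The result is stored in the existing buffer, so consecutive
    transforms do not allocate a fresh ensemble each step.
    """
    ensemble[:] = T @ ensemble
    return ensemble
```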
Benchmark results have been positive: the new scheme matches the old one for speed where only one update is done and N is relatively small, and as N increases, or consecutive transforms are carried out, the preallocation costs decrease.
Requirements:
[ ] Look at how the same can be done for weights, treating them as vector `Function`s.
[x] Check whether the heavy cost of injecting / coarsening (localising) the vector Functions is worth moving into the `emd_kernel.py` cost tensor calculations to remove ALL preallocation.
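For context, the preallocation pattern that item refers to can be sketched generically. This is not the actual `emd_kernel.py` code; the function name, signature, and squared-distance cost are assumptions, shown only to illustrate computing a cost tensor into a caller-supplied buffer:

```python
import numpy as np

def cost_tensor(X, Y, out=None):
    """Pairwise squared-distance cost tensor C[i, j] = ||X[i] - Y[j]||^2.

    Hypothetical sketch of an EMD-style cost computation. Passing `out`
    lets the caller supply a preallocated buffer, so repeated calls
    (one per assimilation step) allocate nothing new.
    """
    n, m = X.shape[0], Y.shape[0]
    if out is None:
        out = np.empty((n, m))
    for i in range(n):
        # Write each row of costs directly into the output buffer.
        np.sum((X[i] - Y) ** 2, axis=1, out=out[i])
    return out
```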