Open thomasckng opened 3 months ago
Currently, the implemented autotune only tunes the step size before the flowMC training loop starts, and no tuning is done while training the NF. This can be problematic if the autotune targets a local acceptance of 0.3 -- 0.5, since the local acceptance can drop during sampling; see for instance this example run. Also, the optimal acceptance rate for MALA appears to be 0.574 according to sources such as [1] and [2], although these results are derived under theoretical assumptions and may therefore be of limited relevance for GW PE.
It might be good, if desired, to also tune during the sampling stage, e.g. after each training loop iteration or after a few cycles. An attempt at this, which seemed rather successful back then, can be found (in an older flowMC version) here. The idea was to multiply the MALA mass matrix by a factor gamma_T, which is adapted based on the current mean local acceptance and the desired/target local acceptance. However, this is a bit of a hacky way to do it, since the gamma_T multiplier has to be carried around throughout flowMC: simply making it an attribute of e.g. Sampler did not work for me due to JAX being JAX, but perhaps others with more experience (@kazewong) could fix that.
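To illustrate the adaptation idea, here is a minimal sketch of a multiplicative update rule for such a factor. This is not the flowMC API; the function name `adapt_gamma`, the parameter names, and the specific exponential update rule are my own assumptions for illustration, and the actual update used in the linked attempt may differ:

```python
import math

def adapt_gamma(gamma_t, mean_acceptance, target=0.574, rate=0.1):
    """Hypothetical multiplicative update of the mass-matrix scale gamma_T.

    If the current mean local acceptance exceeds the target, the step is
    grown (larger gamma_t); if it falls below, the step is shrunk.
    `rate` controls how aggressively gamma_t reacts to the mismatch.
    """
    # exp(rate * (acc - target)) > 1 when acceptance is above target,
    # < 1 when it is below, and exactly 1 at the target, so gamma_t
    # is left unchanged once the target acceptance is reached.
    return gamma_t * math.exp(rate * (mean_acceptance - target))
```

In a sampling loop, one would call this after each training iteration (or every few cycles) with the mean local acceptance of the most recent chains, and rescale the MALA mass matrix by the updated factor. The difficulty noted above is purely about where to store `gamma_t`: under `jit` the state has to be threaded through the carry of the loop rather than stored as a mutable attribute.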
[1] https://www.jstor.org/stable/2985986
[2] https://www.sciencedirect.com/science/article/pii/S0304414907002177
Using autotune for MALA in RunManager could also allow the user to skip providing a mass matrix for the MALA local sampler, which would simplify the setup.