lmoffatt / macro_dr


find a better algorithm for dts that includes variance #211

Closed lmoffatt closed 3 months ago

lmoffatt commented 4 months ago

The temperature ladder is parameterized using the step sizes $S_i$

$$S_i \equiv \log(T_{i-1} - T_{i})$$

that means

$$
\begin{aligned}
T_{i} &= T_{i-1} - \exp(S_i) \\
\frac{1}{\beta_{i}} &= \frac{1}{\beta_{i-1}} - \exp(S_i) \\
\beta_{i} &= \frac{1}{\frac{1}{\beta_{i-1}} - \exp(S_i)} \\
\beta_{i} &= \frac{\beta_{i-1}}{1 - \beta_{i-1} \cdot \exp(S_i)}
\end{aligned}
$$
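A minimal sketch of how the $\beta$ ladder could be recovered from the step sizes, assuming Python/NumPy and an illustrative function name (`betas_from_steps` is not part of the codebase):

```python
import numpy as np

def betas_from_steps(beta_0, S):
    """Recover the inverse-temperature ladder from the log step sizes S_i.

    Uses T_i = T_{i-1} - exp(S_i), i.e.
    beta_i = beta_{i-1} / (1 - beta_{i-1} * exp(S_i)).
    """
    betas = [float(beta_0)]
    for s in S:
        prev = betas[-1]
        step = np.exp(s)
        # the ladder is only well defined while exp(S_i) < T_{i-1} = 1/beta_{i-1}
        assert prev * step < 1.0, "exp(S_i) exceeds the current temperature"
        betas.append(prev / (1.0 - prev * step))
    return np.array(betas)

# example (illustrative numbers): a hot rung at beta_0 = 0.1 and three steps
# betas_from_steps(0.1, [-1.0, -1.5, -2.0])
```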

The algorithm step is therefore:

$$S_i(t+1) = S_i(t) + \kappa(t) \cdot \left(A_i(t) - A_{i+1}(t)\right)$$

Values of A are measured at each iteration.
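A sketch of that adaptation step, assuming `A` carries one entry more than `S` so that every $S_i$ sees the pair $(A_i, A_{i+1})$; the boundary convention is not spelled out in the issue:

```python
import numpy as np

def update_steps(S, A, kappa_t):
    """One adaptation step: S_i(t+1) = S_i(t) + kappa(t) * (A_i(t) - A_{i+1}(t))."""
    S = np.asarray(S, dtype=float)
    A = np.asarray(A, dtype=float)
    assert A.size == S.size + 1, "need A_{i+1} for the last step as well"
    return S + kappa_t * (A[:-1] - A[1:])
```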


$$A_i(t) = \frac{\mathrm{Acc}_i(t)}{\mathrm{deltaEvidenceVariance}_i(t)}$$

$$\mathrm{deltaEvidenceVariance}_i(t) = \frac{(\beta_{i+1} - \beta_i)^2}{2} \cdot \left(\sigma^2_{\log L_i} + \sigma^2_{\log L_{i+1}}\right)$$
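Under these definitions, $A$ could be computed per adjacent pair of rungs roughly as follows (a sketch; `acc` is the measured swap acceptance per pair and `var_logL` the per-rung variance of the log-likelihood, both assumed names):

```python
import numpy as np

def stats_A(acc, betas, var_logL):
    """A_i = Acc_i / deltaEvidenceVariance_i, with
    deltaEvidenceVariance_i = (beta_{i+1} - beta_i)^2 / 2 * (var_logL_i + var_logL_{i+1}).
    """
    acc = np.asarray(acc, dtype=float)          # one entry per adjacent pair (i, i+1)
    betas = np.asarray(betas, dtype=float)      # one entry per rung
    var_logL = np.asarray(var_logL, dtype=float)
    assert acc.size == betas.size - 1 == var_logL.size - 1
    dvar = 0.5 * (betas[1:] - betas[:-1]) ** 2 * (var_logL[:-1] + var_logL[1:])
    return acc / dvar
```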

Now we need the derivative of deltaEvidenceVariance with respect to beta (worked out in the next comment).

lmoffatt commented 4 months ago
$$\frac{\partial \beta_{i}}{\partial S_i} = \frac{\beta_{i-1}^2 \cdot \exp(S_i)}{\left(1 - \beta_{i-1} \cdot \exp(S_i)\right)^2}$$

$$\frac{\partial\,\mathrm{deltaEvidenceVariance}_i}{\partial \beta_i} = (\beta_{i} - \beta_{i+1}) \cdot \left(\sigma^2_{\log L_i} + \sigma^2_{\log L_{i+1}}\right)$$

and therefore

$$\frac{\partial\,(\mathrm{deltaEvidenceVariance}_i)^{-1}}{\partial \beta_i} = -\frac{(\beta_{i} - \beta_{i+1}) \cdot \left(\sigma^2_{\log L_i} + \sigma^2_{\log L_{i+1}}\right)}{\left(\mathrm{deltaEvidenceVariance}_i\right)^{2}}$$
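Putting the two derivatives together by the chain rule gives the sensitivity to the step size itself; this is a sketch of the step left implicit above, with both factors as given in this comment:

$$\frac{\partial\,(\mathrm{deltaEvidenceVariance}_i)^{-1}}{\partial S_i} = \frac{\partial\,(\mathrm{deltaEvidenceVariance}_i)^{-1}}{\partial \beta_i} \cdot \frac{\partial \beta_{i}}{\partial S_i}$$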