On behalf of Yves Dirian:
Finally, in order to find the maximum of the likelihood distribution, we realized that the method of taking only a small jumping factor and restarting from the best fit obtained with the global run was not efficient. Doing so does decrease the -Loglike a bit, and iterating the process yields a small decrease each time, but in the end the method does not seem to converge quickly. This is caused by the probability for accepting a new point being too large, which leads to a kind of dispersive behavior such as the one you can find in the attached picture wo_temp.png.
We therefore decreased this probability by adding a "temperature" parameter T, such that the acceptance probability is now given by exp((Loglike(n) - Loglike(n-1))/T), in analogy with the Boltzmann factor, and the method seems much better suited to minimization procedures, as you can see in w_temp.png. For our runs we chose T = 10^-2, keeping 0.1 for the jumping factor, in order to have a good acceptance rate.
We thought you could maybe implement such an option in MontePython, since it appears to be really efficient and does not seem very time-consuming. Of course this method has its limits, but it seems to me that if you start from the best fit of a complete global run and your distribution is well peaked, there should be no problem in using it.