I thank reviewer 3 for raising this issue (rephrased by AHL):
scipy.optimize.curve_fit is used to carry out most of the minimisations.
Why isn't a more general minimiser used, e.g. scipy.optimize.minimize? Furthermore, why not use a global minimiser (e.g. scipy.optimize.differential_evolution), which is more powerful in real-life contexts because it is less likely to become trapped in local minima?
These general minimisers would give you the choice of minimising the negative log-likelihood or the negative log-posterior, instead of restricting you to a least-squares setting where every log-prior must be recast as a sum of squares. It is simpler to add the log-probabilities of all the prior distributions directly, and doing so does not restrict you to Gaussian priors: any probability distribution could be used.
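To illustrate the reviewer's point, here is a minimal sketch of minimising a negative log-posterior directly with scipy.optimize.minimize. The exponential model, the synthetic data, and the gamma prior on the decay parameter are all illustrative assumptions, not taken from the paper; the gamma prior is chosen precisely because it would be awkward to recast as a squared residual for curve_fit.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def model(x, a, b):
    # Illustrative model: a simple exponential decay
    return a * np.exp(-b * x)

# Synthetic data with known parameters (a=2.0, b=0.7) and Gaussian noise
x = np.linspace(0.0, 5.0, 50)
sigma = 0.05
y = model(x, 2.0, 0.7) + rng.normal(scale=sigma, size=x.size)

def neg_log_posterior(theta):
    a, b = theta
    # Gaussian likelihood -> weighted sum of squared residuals
    nll = 0.5 * np.sum(((y - model(x, a, b)) / sigma) ** 2)
    # Non-Gaussian prior on b (gamma distribution) added as a
    # log-probability -- no need to express it as a sum of squares
    log_prior = stats.gamma.logpdf(b, a=2.0, scale=0.5)
    return nll - log_prior

# Any general-purpose minimiser works on this objective
res = minimize(neg_log_posterior, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)
```

The same objective function could be handed unchanged to a global minimiser such as scipy.optimize.differential_evolution (with bounds on the parameters), which is the flexibility the reviewer describes.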