lmfit / lmfit-py

Non-Linear Least Squares Minimization, with flexible Parameter settings, based on scipy.optimize, and with many additional classes and methods for curve fitting.
https://lmfit.github.io/lmfit-py/

leastsq keyword passing #24

Closed rsuhada closed 11 years ago

rsuhada commented 12 years ago

Hi!

Thank you very much for lmfit. I discovered it just last week, but it seems it will be extremely useful!

I have, however, trouble passing keywords to leastsq. I hope I didn't misread something, but these are my attempts, none of which works (the rest of the parameters pass correctly and fitting works, but for debugging I'd like it to stop after 2 evaluations):

leastsq_kws = {'xtol': 1.0e-7, 'ftol': 1.0e-7, 'maxfev': 2}

# attempt 1
result = lm.minimize(myfunction,
                     pars,
                     args=nonfit_args,
                     **leastsq_kws)

# attempt 2
result = lm.Minimizer(myfunction,
                      pars,
                      fcn_args=nonfit_args,
                      **leastsq_kws)
result.leastsq()

# attempt 3
result = lm.Minimizer(myfunction,
                      pars,
                      fcn_args=nonfit_args)
result.leastsq(**leastsq_kws)

Could you please clarify how to correctly pass maxfev etc.?

Thank you very much!

newville commented 12 years ago

The passing of keywords to scipy.optimize.leastsq() appears to be OK for me. For maxfev, setting this will restrict the number of evaluations of the objective function, but the actual number of evaluations may be as high as nvariables + maxfev. I believe it does nvariables + 1 evaluations before even checking (assuming you're using numerical derivatives, though I think the same may hold for analytic derivatives). That is, if you set maxfev as low as 2, you're likely to see more evaluations than that, but still far fewer than if you do not set maxfev.
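The evaluation floor Matt describes can be checked directly against scipy.optimize.leastsq with a counting residual (a minimal sketch with a made-up linear model; the exact call count depends on the scipy version):

```python
import numpy as np
from scipy.optimize import leastsq

calls = {"n": 0}

def residual(params, x, y):
    """Hypothetical linear model; counts how often it is evaluated."""
    calls["n"] += 1
    a, b = params
    return y - (a * x + b)

x = np.linspace(0, 10, 50)
y = 3.0 * x + 1.0

# Ask for at most 2 evaluations.  The MINPACK code still needs the
# initial residual plus nvariables forward-difference steps for the
# numerical Jacobian before the maxfev check, so the real count is
# higher than 2, yet far lower than an unrestricted fit.
best, ier = leastsq(residual, [1.0, 0.0], args=(x, y), maxfev=2)
print(calls["n"])
```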

rsuhada commented 12 years ago

Dear Matt,

thank you very much for the quick answer and clarification. Before I wrote to you I also tested the xtol option, and it also seemed not to work. But as you say, there seems to be an imposed minimum number of iterations no matter what the parameters are.

Thank you very much! Have a nice day! Robert


newville commented 12 years ago


What didn't work when setting xtol? Again, setting all of these options seems to work fine for me. Can you give a more detailed report of what you did and how you concluded that it did not work?

--Matt

rsuhada commented 12 years ago

Hi Matt,

sorry for the huge delay, but I was travelling and working on other projects.

Concerning xtol/ftol from the last email: I only meant that those also seem to require a certain minimal number of function calls, just like maxfev. That is, even if I set ftol to 1.0e+77, the fitter will carry out a few iterations even though the criterion is already satisfied by the first iteration (for the function I'm testing now the minimal number of iterations is 14; I didn't explore whether this is universal). So there is no bug; I was just surprised that the fitter does not stop immediately.

I have one more question: is there a simple way to plot 2D confidence contours? I can get the confidence intervals with conf_interval, or map the probability surface with conf_interval2d, but what if I just want to see the 1/2/3-sigma ellipses rather than the full surface? Is the probability value corresponding to a specific confidence level stored somewhere?

Thanks very much! Cheers, Robert


newville commented 12 years ago

Hi Robert,

I have one more question - is there a simple way to plot 2d confidence contours? I can get the confidence intervals with conf_interval or map the probability surface with conf_interval2d, but what if I just want to see the 1/2/3 sigma ellipses not the full surface?

This isn't implemented exactly, though you can take the confidence map returned by conf_interval2d and view it as a contour map with matplotlib. I don't recall how easy it is to set the contour levels to be displayed, but it seems like it's mostly a display issue. Still, I agree that making this easier would be a nice addition.
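If the minimum is close to quadratic, another common shortcut is to draw the ellipses directly from the 2x2 covariance sub-matrix of the two parameters, bypassing the full conf_interval2d map. A sketch with made-up numbers (sigma_ellipse is not an lmfit function, and this Gaussian approximation can differ from the true confidence region for strongly non-linear fits):

```python
import numpy as np

def sigma_ellipse(cov, center, nsigma=1, npts=200):
    """Points on the nsigma confidence ellipse of a 2x2 covariance matrix."""
    vals, vecs = np.linalg.eigh(cov)           # principal axes of the ellipse
    t = np.linspace(0, 2 * np.pi, npts)
    circle = np.stack([np.cos(t), np.sin(t)])  # unit circle
    # scale by nsigma * sqrt(eigenvalue) along each axis, then rotate
    pts = vecs @ (nsigma * np.sqrt(vals)[:, None] * circle)
    return pts[0] + center[0], pts[1] + center[1]

# hypothetical covariance of two fitted parameters, centered at best fit
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
x, y = sigma_ellipse(cov, center=(1.0, 2.0), nsigma=2)
```

The returned x, y arrays can be passed straight to matplotlib's plot() to draw the 1/2/3-sigma outlines.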

Is the probability value corresponding to the specific confidence levels stored somewhere?

You mean that sigma=1 corresponds to 68.3% and sigma=3 to 99.73%? That comes from erf(sigma/sqrt(2)). It might be helpful if the map returned from conf_interval2d were in values of sigma, or even chi-square.
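That mapping is a one-liner with the standard error function (stdlib only; sigma_to_prob is just a name for this sketch):

```python
import math

def sigma_to_prob(sigma):
    """Confidence level enclosed within +/- sigma of a 1-D Gaussian."""
    return math.erf(sigma / math.sqrt(2))

for s in (1, 2, 3):
    print(f"{s} sigma -> {100 * sigma_to_prob(s):.2f}%")
# 1 sigma -> 68.27%
# 2 sigma -> 95.45%
# 3 sigma -> 99.73%
```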

rsuhada commented 12 years ago

Is the probability value corresponding to the specific confidence levels stored somewhere?

You mean that sigma=1 corresponds to 68.3% and sigma=3 to 99.73%? That comes from erf(sigma/sqrt(2)). It might be helpful if the map returned from conf_interval2d were in values of sigma, or even chi-square.

Thank you! Just to be completely sure: what conf_interval2d returns is 1 - erf(sigma/sqrt(2)), correct?

And thanks for the great support! Rather unusual for scientific software!