lmfit / lmfit-py

Non-Linear Least Squares Minimization, with flexible Parameter settings, based on scipy.optimize, and with many additional classes and methods for curve fitting.
https://lmfit.github.io/lmfit-py/

Sketch for a stateless Model #107

Closed danielballan closed 10 years ago

danielballan commented 10 years ago

I think removing any state or reference to specific params from Model would actually be only a small change. Here's a sketch of how the API would shake out, expanding on remarks in #106.

result = model.fit(data, params)  # or keyword args, as now

# As now, the result stores certain useful copies of the params.
# These can be examined by the user or passed back into model.fit() later.
result.params
result.init_params

# Add this:
result.eval(x=x)  # any x, perhaps denser for prettier curve

model.eval()  # NOT VALID -- supplanted by `result.eval()`
model.eval(some_params)  # valid -- useful for exploring the model

The only downside of this, as mentioned in #106, is less succinct auto-guessing. I kind of like this, though -- it makes it clear what the model is using and makes it easy for the user to examine the auto-guessed parameters as needed.

guess = model.guess_starting_values(data)   # rename this method guess() for brevity?
model.fit(data, params=guess)

I agree we should take our time with this and collect input. I'm just jotting this down while it's on my mind.

newville commented 10 years ago

Yes, those are all really good points, and I agree that this is how it should work:

result.eval(x=x)       # works, using result.params
model.eval(x=x)      # fails,  no params 
model.eval(params, x=x)   # works -- explicit

The result object from a model might need to be a more properly subclassed Minimizer() object, perhaps

 class ModelResult(Minimizer):

or maybe something else. Again, this needs more thought. I think that 0.8.0 is on indefinite hold until we figure out how this should work.

tacaswell commented 10 years ago

cc @licode

tritemio commented 10 years ago

Can't the model hold only a copy of the initialization parameters (init values and constraints) and never modify it during the fit?

In this way we have a truly stateless model, with the bonus that we don't need to pass around the parameters. In this definition, a model really is defined by the composing functions and their constraints (which can be pretty complex and really define what a model is).

tritemio commented 10 years ago

Elaborating on these lines with an example:

model = GaussianModel()
model.set_param('center', 1, min=0, max=3)
...
model.init_params   # contains the initial parameters, with bounds and expressions

result1 = model.fit(data1, x=xdata)
result2 = model.fit(data2, x=xdata)

result1.params   # contains the parameters fitted on data1
result2.params   # contains the parameters fitted on data2

This way we are explicit and there is no conceptual confusion.

Also, implementing this would be trivial: just rename params to init_params and do a deep copy of init_params in model.fit().
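A minimal sketch of that change (hypothetical code, not the actual lmfit source; this Model class is just a schematic stand-in):

from copy import deepcopy

class Model:
    def __init__(self, func):
        self.func = func
        self.init_params = {}    # initial values, bounds, constraints

    def fit(self, data, **kwargs):
        params = deepcopy(self.init_params)  # local copy: fit never mutates init_params
        # ... run the minimization, varying `params` ...
        # ... build and return a result holding `params` ...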

Finally I agree with @newville that

model.eval(x=x)      # fails,  no params

should fail. We should use instead:

result1.eval()       # uses same x as passed to model.fit
result1.eval(x=x)    # denser/wider x axis
model.eval(x=x, params=some_params)
tritemio commented 10 years ago

@danielballan , in your example:

model.eval(some_params)

where should the independent variable be taken from? I would rather forbid this (when no independent variable is passed).

danielballan commented 10 years ago

It sounds like we all agree that result.eval(x=x) is a useful addition and that model.eval(x=x) should fail. @tritemio, in my example I omitted x=x for simplicity. It is possible (but somewhat unusual) to define a Model with no independent variable, for which that code would and should be valid.
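For reference, such a no-independent-variable model might look like this (a sketch; assuming the Model constructor accepts independent_vars=[]):

import numpy as np
from lmfit import Model

def offset(c=0.0):
    # no independent variable: the model depends only on its parameter
    return c

mod = Model(offset, independent_vars=[])
result = mod.fit(np.array([0.9, 1.1, 1.0]), c=1.0)   # fit a constant level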

You raise a very good point about bounds. I would say they are sometimes but not always part of the model. I use bounds in two ways:

  1. to rule out physically impossible or meaningless values, such as a negative Gaussian width
  2. to guide a fit that is not converging properly, placing ad hoc bounds that serve a particular data set

In mode (1) the bounds are part of the model. In mode (2) I would argue they are not. The other aspect of params -- the initial value -- varies from data set to data set.

Given all this, what would you think of making bounds part of Model, but not making the full init_params part of Model? Users could always pass parameters with stricter bounds to guide a difficult fit, but the model would enforce whatever constraints are part of its definition.

I don't have an API in mind for this. But, for the sake of argument, it might look something like the way guess_starting_values is handled currently.

newville commented 10 years ago

I've started playing with the removal of params from Model to make a Model stateless, and have made enough progress to say that I do think this is generally a good idea. I'll be pushing commits for this to the model_noparams branch.

This approach does have some implications for the implementation, but it seems to be giving better code. For example, there were changes I hadn't considered for composite models, but the creation of a composite model is now better (and, if we end up deciding to stick with a Stateful Model, I think these changes can be backported). It also gives a chance to fix up the monkey-patching of the fit result from Model.fit() into a real FitResult class -- again, this is almost certainly for the better. So it seems to lead to better code, but that's sort of independent of whether the idea is preferred.

For bounds and constraints that are implied by the model: I think @danielballan's distinction is right. But I'll point out that it is also possible to have bounds or constraints defined inside the objective function (gaussian() could simply disallow sigma <= 0).
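For example, a model function could guard against non-physical values itself; a sketch of that option (not lmfit's actual gaussian()):

import numpy as np

def gaussian(x, amplitude=1.0, center=0.0, sigma=1.0):
    sigma = max(sigma, 1.e-13)   # disallow sigma <= 0 inside the function itself
    return (amplitude / (sigma * np.sqrt(2 * np.pi))) * np.exp(-(x - center)**2 / (2 * sigma**2))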

A strength of the Stateless Model is that it makes the creation of Parameters in user code explicit. One cannot rely on a model.params owned by the model; one must create and name a Parameters object. The Model will generally be something that is opaque to the user -- one makes a Model and uses it. But the Parameters belong to the user. Yes, they'll be able to set and alter bounds anyway, but that's part of the point, I think.

If a model absolutely requires a bound or constraint, it could enforce that in the wrapped model function. Or, as with guess(), it could suggest sensible defaults for values and bounds (and constraints if applicable). The user would be able to lift these, but that action would suggest that they knew (or thought they knew) what they were doing.

By the same token, it makes more sense to me to put init_params in the fit result, not in the Model. A model doesn't need to have any sense of the proper scale or range for the parameter values. It really just needs to coordinate the model function and parameter names, and provide eval() and fit() methods that compute the model given input parameters (and possibly data).

The approach I'm leaning toward would look like this:

peak = GaussianModel(prefix='peak_')
bkg = LinearModel(prefix='line_')

# make params for LinearModel
params =  bkg.make_params(intercept=y.min(), slope=0)

# guess params for GaussianModel from data and *add* to params
params += peak.guess(y, x=x, center=2.5) 

model = peak + bkg
out = model.fit(y, params, x=x) 
print(out.fit_report())

With out being a ModelFit -- a subclass of Minimizer() that has params, init_params, init_fit, best_fit, and methods eval(), fit(), and fit_report(). I hope to be able to post some code tomorrow.

Tillsten commented 10 years ago

Hmm, I still prefer the version with the state. In the stateless model, there is almost no difference between the pure function and the model. While one can think of model and results as different objects, the combined view is also valid, in my opinion: e.g. a model is the combination of the function, the parameters and the values. The stateless model seems more clever than necessary; I also don't think that reusing models is such a common use case. Even then, copying or instancing new parameters is cheap and easy. So from me a -1 for the stateless model, but either way, I'll be happy to see models in lmfit.

edit: spelling.

Tillsten commented 10 years ago

Just to make my point more clear: we have to think about what makes a model practical and useful:

For me, the creation of the parameters belongs to the model creation process; making it explicit defeats the purpose. This can be seen in the example: note that in the stateless model the only difference between the instance and the class is the prefix, so why even use classes at all if we don't encapsulate state?

tritemio commented 10 years ago

I agree with @Tillsten: the stateless implementation seems like a step back from an API point of view compared to the current implementation (provided that model.params is renamed to model.init_params and a deep copy is always made in model.fit()).

Just a note: when I say "constraints" I mean both bounds and the constraints that we set with the expr syntax. Both are very important to define a model.

Let me answer @newville and @danielballan in 3 points. The following discussion refers to domain-specific models (model functions + domain-specific knowledge), but it can also be applied to the Gaussian model (i.e. it is intrinsic that sigma must be > 0).

1. What is a "model"?

In my view, a domain-specific model should include generic initialization (if it makes sense), the largest bounds physically possible (or no bounds), and the needed constraints (or no constraints). In the specific case where the model does not need bounds or constraints it can be stateless, but not in general. A stateless implementation would require splitting (and carrying around) this logical model in two objects (model and parameters). I don't see the advantages. The only flaw in the current implementation is having model.params instead of model.init_params.

2. Why a model logically needs constraints: an example

I recently implemented a composite model that has 2 Gaussian peaks + a bridge. The bridge is a kind of plateau that is > 0 between the 2 Gaussian peaks and has smooth transitions. By definition the bridge has some parameters that are equal (expr) to the corresponding Gaussian parameters. It turns out that using lmfit composite models with gaussian functions, "bridge functions" and constraints makes it trivially easy and elegant to create such a model (implementation). Without the expr constraints the model does not make sense at all.

3. The advantage of a model including "constraints"

Let me bring another example. I'm fitting multi-channel data, so I need to initialize the model once and then apply it to several similar datasets. I want to wrap lmfit in my software and make it convenient to use for this specific application, but I want to rely as much as possible on lmfit's own API.

Right now a fitting session can be done like this:

# Create an lmfit Model with initialization, bounds and constraints
model = mfit.factory_two_gaussians(add_bridge=True) 

# Optionally tweak some parameters (unneeded most of the time)
model.set_param('p1_center', 0.2, min=-0.1, max=1.1)
model.set_param('p2_center', 0.8, min=-0.1, max=1.1)

# Fit a series of datasets
fitter.fit_histogram(model)  # calls model.fit many times

With a stateless implementation, what should I do? Probably this:

# Create an lmfit Model with initialization, bounds and constraints
model, params = mfit.factory_two_gaussians(add_bridge=True)  

# Optionally tweak some parameters (unneeded most of the time)
params.set_param('p1_center', 0.2, min=-0.1, max=1.1)
params.set_param('p2_center', 0.8, min=-0.1, max=1.1)

# Fit a series of datasets
fitter.fit_histogram(model, params)  # calls model.fit many times

I fail to see the benefit of this added verbosity. I think the same point holds for @newville's own example with the Gaussian + Linear model. In that case we use a model method to generate "factory" parameters with constraints (sigma > 0), so we are saying that the constraints are logically part of the model. But then we require the user to deal with two separate objects.

tacaswell commented 10 years ago

I think there is some talking past each other going on here.

The state we are trying to avoid is the output from fitting. The model objects should not know whether they have been run or not. I do not think that constraints are the type of 'state' being discussed.

That said, I think a case could be made for a composite high-level object which wraps up model, raw data, and fit results into one object. But that should be built on top of the stateless API and just stage-manage the other objects.
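A rough sketch of that kind of stage-managing wrapper (hypothetical class, not proposed lmfit API):

class FitSession:
    def __init__(self, model, params, data):
        self.model, self.params, self.data = model, params, data
        self.result = None

    def run(self, **kwargs):
        # delegate to the stateless pieces; only this wrapper holds them together
        self.result = self.model.fit(self.data, self.params, **kwargs)
        return self.result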

tritemio commented 10 years ago

@tacaswell, maybe I misread what @Tillsten said. My point is that initial values, bounds and constraints should be saved in the model instance, so that when you call model.fit() you don't need to pass an additional object that defines them. I wasn't talking about putting fit results in the model.

newville commented 10 years ago

Hi All, and thanks for all the really good discussion. I keep thinking we should be using a mailing list instead of Github Issues -- anyone else have an opinion on that?

Sorry for this being so long. The TL;DR is: I'm not 100% set on this, but I do think a parameter-less Model class is a good idea. BUT -- the result from Model.fit() should be an object that contains the full (and independent) state of the fit, including the ability to alter and re-run fits, and re-evaluate with modified inputs. I'm not sure I have a definitive resolution that can satisfy everyone.

I think there are 3 separate topics here:

a) why do we want Model, and what do we expect it to do?
b) how much state should a Model contain?
c) should a Model encapsulate constraints and bounds? If so, how?

I'll refer to "current" as the current master branch and "non-params" as the current version of the model_noparams branch.

Why do we want Model?

Basically, to make curve fitting easier and object oriented, using Parameters instead of plain variables.

This may seem obvious, but I think it helps clarify that we're aiming for easy to use. The target audience is "scipy users", and possibly the novice side of that. Being much more complicated than a basic use of curve_fit() is not good, but being slightly more verbose is OK if it serves to enhance functionality and make harder things easier.

Also, Model has to be

I think both "current" and "non-params" are suitable for these. I think that "non-params" is a little more explicit and able to avoid the confusion that originally started this conversation.

How much State should a Model have?

To do a fit, one needs data, a parametrized model for that data, and a set of actual parameters to be varied. It's good to separate these so that we can do things like:

1. apply the same model to different data sets.
2. apply different models to the same data set.
3. copy a set of parameters to a modified version, and re-do a fit, then compare the results.

In short, a fit needs a Model, a Parameters, and a set of data. The data is simply held in arrays (both the "y" data and the independent variables). Right now we expect that the user always passes these in (and keeps them organized!), and we're not using some sort of Dataset class. The Parameters is pretty clear -- it's the ordered dict of Parameter objects.

The Model and its relation to the fit and the other components is a bit more murky. Some (overlapping) points:

  1. A fit has data, a model doesn't.
  2. A fit has parameters. A Model should know the names of the parameters, but not their values. It is reasonable to think that it knows something about the bounds and constraints on Parameters, but those can be set with Parameters too.... hmm....

So, a Model could have some state, but it cannot really hold the entire state -- you want to be able to update parameters, and apply the model to multiple datasets, for example. OTOH, a fit result can hold the entire state, and we definitely want this: a fit was done and you need to access the results and further evaluate this model. We need a ModelFitResult class that has the entire state, but can be used for re-evaluation and re-fitting.

If a Model doesn't hold all the state (of the fit), is there much point in having any? As Till points out, having no state makes the Model little more than a wrapper around the model function. But a Model also knows the names of the parameters and independent variables (and the prefix), and has methods to generate Parameters for it. And it knows how to be part of a composite model. So, it may not seem like much, but there is quite a bit to the Model class.

In an abstract sense, a model like "Gaussian + Line" doesn't need or really have initial values. More concretely, even with params in Model, the parameters are not initialized. The user must intervene to provide initial values for the parameters. In the current version, this can be as easy as:

mod = GaussianModel()
out = mod.fit(y, x=x, amplitude=5, center=2.5, sigma=1)

which compares very nicely to curve_fit(), though it only works if *all* parameters are initialized. In the non-params approach, this usage could be retained, with a Parameters created by fit() if not provided, though the more verbose

mod = GaussianModel()
pars = mod.make_params(amplitude=5, center=2.5, sigma=1)
# or:
pars = mod.guess(y, x=x)

out = mod.fit(y, pars, x=x)

might be preferred. In fact, it's possible to support both uses with the non-params version: fit() has a default of params=None for the second arg, which causes

  params = self.make_params()

In the non-params version, the result params are held only in the result.
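That is, fit() in such a version might look like this (a sketch, not the actual branch code; independent variables are glossed over here):

def fit(self, data, params=None, **kwargs):
    # a Model method: build Parameters on the fly when none are passed
    if params is None:
        params = self.make_params(**kwargs)
    # ... run the minimization; the fitted params live only in the returned result ...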

So, I guess I don't see why params (or init_params, for that matter) should be in Model. I'm not sure there's really an ease-of-use case for having it -- the easiest way possible can still work. Once you want to try anything non-trivial, you need to interact with the parameters. I don't see a reason to prefer

 mod.set_param('sigma', min=0)
 out = mod.fit(y, x=x)

over

 pars.set('sigma', min=0)
 out = mod.fit(y, pars, x=x)

I lean toward Model not having a params member, and having a richer fit result that does have params and init_params.

Constraints / Bounds for Model Parameters

A particular model can definitely imply some parameter bounds or algebraic constraints on parameters. This could be accomplished several ways.

First, the model function could do this.... Then it's not available for the user to change.

Second, it could be lumped in with the user-specified constraints. This is certainly easy because the mechanism is in place. The guess() function can (and in some cases does) place bounds or constraints. The user has the ability to alter these, of course.

Third, they could somehow be set up in the Model class itself. This raises the question: If a model sets a bound on a parameter, can a user change that bound? Can a user set a more stringent bound, but not a more flexible one? I'm not sure how to implement this idea. You'd probably want to allow setting these at initialization time:

 def mypeak(x, amplitude=1, center=0, sigma=1):
     .....

mod = Model(mypeak,   amplitude=Parameter(10, min=0), ....)

or so.... seems awkward.

So, conceptually, I agree that it's a little murky whether some bounds belong to Model and others to Parameters. Practically, there are some outstanding issues to work out.... and I'm not sure how to do it in a way that is clear and easy to use.

Any suggestions on this? Is it OK to leave all this in Parameters -- and under the full control of the user?

newville commented 10 years ago

As I look at this more, perhaps passing param_name=Parameter() to Model() isn't so awful. Here's a bit more on this idea. Does this seem like a reasonable way to set bounds and constraints for a model? The idea would be to support:

def mypeak(x, amplitude, center, sigma):
     .....
mod = Model(mypeak,  amplitude=Parameter(10),
                     center=Parameter(50, min=0, max=100), 
                     sigma=Parameter(1, min=0),
                     fwhm=Parameter(expr='2.35*sigma') )

isn't so bad to provide initial, default parameter values and set bounds/constraints, and even add derived parameters.

In my view, this would still not generate a params object or even an init_params, just know to apply these attributes to any generated parameters with make_params(). For the sub-classed models like GaussianModel(), I would interpret

 mod = GaussianModel( amplitude=Parameter(-20, max=0),
                     center=Parameter(min=0, max=100), 
                     sigma=Parameter(0.1, min=0.005),
                     fwhm=Parameter(expr='2.35*sigma') )

as setting default values and model-wide bounds and constraints. That is, mod.guess() might update the initial values, but wouldn't alter any non-None bounds -- accepting that those were set at definition time and shouldn't be overwritten. It could modify unspecified (or infinite) bounds. This is sort of only a convention for how guess() would work, not enforced by Model.

In either case, the user would still be able to change the bounds/constraints on the Parameters. I'm not sure that we really want to prevent that. Is this reasonable?

FWIW, Parameter() needs a slight tweak so that setting a bound without a value works:

x = Parameter(min=0.002)
y = Parameter(max=1000)
z = Parameter(expr='(x+y)/2')

Currently, setting only max fails -- but this will be fixed anyway!

tritemio commented 10 years ago

Yes, we need a mailing list.

Answering @newville:

> This is sort of only a convention for how guess() would work, not enforced by Model.

+1

> In either case, the user would still be able to change the bounds/constraints on the Parameters.

Yes, let's follow the rule of consenting adults.

After thinking more about it, I now see the advantage of a completely stateless Model: it would allow having a single reference to a model in the results that does not change even if init_params are changed.

That's the workflow that I would like to have:

model = Model(mypeak)  # initial values are optional

# 1. Get default parameters
init_params = model.default_parameters()  # Return a Parameters object

# 2. Or guess from data (getting constrains from default_parameters)
init_params = model.guess(data1)

# 3. Optionally tweak init_params
init_params.set_param('center', 0.5, min=0, max=1)

result1 = model.fit(data1, init_params)

# Optionally redo steps 1, 2 and/or 3 (or none of them)
...

result2 = model.fit(data2, init_params)

result1.model is result2.model  # same object
result1.init_params is not result2.init_params  # different objects

The default_parameters() method can return, depending on the model, a completely non-initialized set of parameters, partially initialized (some bounds/constrains) or completely initialized (i.e. for an application-specific model where everything is known).

A composite model can "compose" the default parameters returned by its children. But we need a way to change the default parameters too -- for example, a set_default_parameters() method? In this way we can build a completely new model by composing base models.

Finally, in the special case of models returning completely initialized default parameters, we could allow the shortcut:

model = ModelForDataX()
result = model.fit(data)

where under the hood fit() uses the parameters returned by default_parameters() and saves them in result.init_params.

If some parameters are "initialized" in the model constructor (@newville's example), this "initialization" is returned by default_parameters(). So, the default parameters can be changed either in the model constructor (used for base models) or with a special method (like set_default_parameters), typically used by composite models.

tritemio commented 10 years ago

Answering @newville again:

> So, a Model could have some state, but it cannot really hold the entire state -- you want to be able to update parameters, and apply the model to multiple datasets, for example.

I agree. In my view from the previous comment, the "state" held by the model is just the knowledge of how to generate a default/factory set of parameters. These will not change when fitting many datasets; what can change are the actual init_params fed to fit(). If the user changes the default/factory parameters (either with a special method or from the constructor), [s]he is effectively creating a new model.

tritemio commented 10 years ago

@newville wrote:

> a fit result can hold the entire state, and we definitely want this: a fit was done and you need to access the results and further evaluate this model. We need a ModelFitResult class that has the entire state, but can be used for re-evaluation and re-fitting.

If the result object (returned by .fit()) contains:

  1. result.model: can be shared, contains no state (just the ability to generate defaults)
  2. result.init_params: the initial parameters (always a local copy)
  3. result.params: the fitted parameters

we have the full picture: we can re-evaluate, re-fit, etc.

newville commented 10 years ago

OK, to (try to) address the issue of a Model having parameter bounds or constraints, I've added (to the model_noparams branch) the ability to set "parameter hints", either at model creation time, or after a model is created. These will affect the parameters generated by Model.make_params().

For example, with:

def mypeak(amplitude=1, center=0, sigma=1):
      ....

one can do:

mod = Model(mypeak, sigma=Parameter(min=0, max=100))

to set min/max/expr or value. To do this for an existing model, one can do:

mod.set_param_hint('sigma', min=0, max=100)

And one can add new, derived params:

mod.set_param_hint('fullwidth', expr='2*sigma')

These parameter hints are held as a nested dictionary in model.param_hints, which is used by mod.make_params(). In more detail, mod.make_params() will do the following steps:

  1. create a parameter for each name in model.param_names
  2. apply default values from model function definition
  3. apply any value, min, max, expr that were defined in model.param_hints
  4. apply values passed into keyword arguments of model.make_params
  5. create parameters for the rest of the parameters in model.param_hints (such as fullwidth above)
  6. merge in the parameters created by "other models" for composite models.

I think this gives many places in a Model to set default values, bounds, and constraints. The user can always change these after the parameters are created.
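Schematically, that sequence might look like this (a rough sketch, not the branch code; func_defaults stands in for however the function-signature defaults are stored):

from lmfit import Parameters

def make_params(self, **kwargs):
    # a Model method
    params = Parameters()
    for name in self.param_names:                        # step 1
        hints = dict(self.param_hints.get(name, {}))     # step 3
        if name in kwargs:                               # step 4
            hints['value'] = kwargs[name]
        elif 'value' not in hints:
            hints['value'] = self.func_defaults.get(name)    # step 2
        params.add(self.prefix + name, **hints)
    for name, hints in self.param_hints.items():         # step 5: hint-only params
        if self.prefix + name not in params:
            params.add(self.prefix + name, **hints)
    for other in getattr(self, '_others', []):           # step 6: composite merge
        params += other.make_params()
    return params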

An important caveat (one that might be too challenging to be easily addressed, and so left as a documented difficulty) is that a Model prefix needs to be used in constraint expressions. That is, with

 gmod = GaussianModel(prefix='g1_')

trying to set a constraint expression is slightly tricky:

  gmod.set_param_hint('fullwidth', expr='5*sigma')  # probably not what was meant

is acceptable as a parameter hint, but it uses 'sigma' where 'g1_sigma' was probably intended. These both work:

  gmod.set_param_hint('fullwidth', expr='5*g1_sigma')   #  works
  gmod.set_param_hint('g1_fullwidth', expr='5*g1_sigma') # also works

The last two are both OK because it's easy to detect whether the parameter startswith() the prefix. For the first, it's not so easy for the constraint expression (one could walk the AST, but it's not clear that 'sigma' wouldn't be defined elsewhere....)

The subclassed models use this mechanism, for example to set a "fwhm" parameter where appropriate.

So, I think this should be acceptable to all. I need to update the docs for all the changes in model_noparams.

Does anyone have other concerns or comments about this? I'll try to complete docs for this by early next week, and call that 0.8.0-rc4, which I hope will be ready for release shortly after that.

danielballan commented 10 years ago

I like this solution very much. Good discussion. Sorry for disappearing for a while there.

There is one bug I found.

In [47]: m.guess(data, x)
Out[47]: Parameters([('sigma', <Parameter 'sigma', 49.5, bounds=[0.0:None]>), ('center', <Parameter 'center', 49.019607843137258, bounds=[None:None]>), ('amplitude', <Parameter 'amplitude', 707.5251523723291, bounds=[None:None]>), ('fwhm', <Parameter 'fwhm', None, bounds=[None:None], expr='2.3548200*sigma'>)])

In [48]: m.guess(data)
Out[48]: (1.0, 0.0, 1.0)

The problem is here where guess_from_peak handles the special case x is None.

I only happened to stumble on this, and I worry that there are more bugs lurking. If you can wait about a week, I can write tests to cover these new ways of using Model. If everything checks out, I think we can release once those changes (and that pain point with prefix) are documented.

tritemio commented 10 years ago

Just reading your post, it all seems very reasonable; I like it.

I just have one comment: I would move point 6 for composite models before (at least) points 3, 4, 5. So a composite model will first "inherit" the parameter hints from its base models and then optionally update some value/bound/expr according to the user input.

newville commented 10 years ago

@danielballan -- thanks, and no problem disappearing... This is all fairly fresh and I'm sure there are some rough edges on all of this; writing docs, tests, and examples will help here. I'll try to get rc4 out Monday or Tuesday. Maybe we should aim for a release of 0.8.0 around Oct 1?

@tritemio -- hmm, I hadn't really thought about propagating parameter hints through a composite model. I sort of think keeping it simple and explicit is better, but I'll try to think about how inheriting parameter hints might work.

tritemio commented 10 years ago

@newville, even if you don't want to inherit/propagate parameter hints, I would make sure that merging the base-model parameters does not overwrite hints set explicitly after the composite model has been created.

But if you do want to propagate hints, (naively) it should not be difficult. It's just make_params() that first calls make_params() for each base model and merges the resulting parameters. Finally, it applies any local hint directly set by the user on the composite model.

tritemio commented 10 years ago

@newville: Why, in the new implementation, did you remove components for composite models? I liked the components attribute: it made it easy and expressive to plot both the composite model and the single components. Plus, now .func points to the "first" base-model function, and .name has only the name of the first base model.

newville commented 10 years ago


Yes, components was replaced by _others (perhaps a worse name?). The version in master has sort of a different nastiness, in that the composite model can't access the single components easily.

>>> from lmfit.models import GaussianModel, LinearModel, LorentzianModel
>>> loren = LorentzianModel(prefix='lor_')
>>> bkg = LinearModel(prefix='bkg_')
>>> gauss = GaussianModel(prefix='gau_')
>>> model = gauss + loren + bkg
>>> model
<lmfit.Model: gaussian(prefix='gau_')+lorentzian(prefix='lor_')+linear(prefix='bkg_')>
>>> model.name
'gaussian'
>>> model._others
[<lmfit.Model: lorentzian(prefix='lor_')>, <lmfit.Model: linear(prefix='bkg_')>]
>>> model is gauss
False

So, now the composite model is a copy of gauss, and has the added models as _others. The Model.eval method and Model.make_params know to call eval and make_params for the other models and combine the results. This sort of fell out of the removal of params from Model, and makes the __add__(self, other) method much, much simpler than the version in master.

I think this is workable (doc needed, no doubt), but I'm open to other suggestions.

danielballan commented 10 years ago

I have no strong opinions on this, but I'll propose this way forward for 0.8.0: keep ._others private for now. Advanced users will be able to access it, but it won't be a part of the API that we are obliged to support going forward. Then, plan on reinstating .components in 0.8.1 once we resolve how to provide access to the components in a clear way.

If you disagree, though, I won't argue.


newville commented 10 years ago

@danielballan : That seems reasonable. I'm also OK waiting to get this to a point where we're all happy with the design. But I also feel like the conversation and number of suggestions might expand after wider release. So, putting something out soonish (rather than waiting several more months) seems like a good idea.

For "components" (and depending on what one is expecting), it might be that

components = [self] + self._others

is enough.

The hard part is calculating individual components from a composite. In the approach in the model_noparams branch, model holds the total, while the items in model._others are the other individual components. Thus you have to do a subtraction to get the isolated first component. Probably not ideal. I suppose model.eval() could add a 'do not include others' option (easy to do).
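That option might be as small as this (hypothetical sketch; make_funcargs stands in for however the function arguments are actually assembled):

def eval(self, params=None, include_others=True, **kwargs):
    # a Model method: evaluate this component, then optionally add the others
    result = self.func(**self.make_funcargs(params, kwargs))
    if include_others:
        for other in self._others:
            result += other.eval(params=params, **kwargs)
    return result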

Basically, it seems like going beyond the current "copy-and-add-others" approach to making a composite would need a separate CompositeModel class that does little except hold and evaluate the components. I thought that wasn't needed, but that may have been mostly prideful laziness (oh cool, this trick works -- I don't need another class!). But maybe a separate CompositeModel would be more obvious.

tacaswell commented 10 years ago

One of the applications we need to support is fitting fluorescence spectra, which (to my limited understanding) involves many peaks that are essentially Gaussian. Having the ability to build composite models that scale to large numbers of components is key.

@licode can you comment (and/or correct me).

tritemio commented 10 years ago

I hacked the components back into the Model class. Check it out:

https://github.com/tritemio/lmfit-py/tree/model_noparams

It's just a proof of concept.

What it does:

From the little testing I did, it seems to work, but there may be corner cases...

If you like the idea I can polish it a little and send a PR.

newville commented 10 years ago

@tritemio -- that looks good. Yes on a PR when you think it's ready. I'll pull into model_noparams and we can go from there.

I'm not entirely sure on the propagation of "parameter hints" from composite to components. If one does composite.set_param_hint('sigma', min=0)

does that always propagate to all components? Should it?

Does the composite start with the parameter hints of component[0]? Would those then propagate to the other components? I'm not sure what the behavior should be. Not having (or ignoring any) "parameter hints" for a composite might be clearer.

But, maybe you've worked through all this and it's all OK. I'm not opposed to the idea, just cautious.

newville commented 10 years ago

@tacaswell @licode -- this is probably a separate topic (or, perhaps we can move this to the newly minted mailing list: http://groups.google.com/group/lmfit-py)

Gaussian peaks are indeed often used in a fit to X-ray fluorescence data, and constraining the peak widths and amplitudes can be useful (and physically meaningful). I do this often. In fact, lmfit is sort of a side project of synchrotron X-ray spectroscopy analysis tools: http://xraypy.github.io/xraylarch/ Currently, this can do some XRF analysis work (the docs are particularly sketchy here, whereas the code and docs for XAFS are more extensive), and more will happen soon.

The improved Model class for lmfit will be migrated to XrayLarch soon-ish, especially for XRF analysis. If you're interested, I also have related Epics-specific live XRF display programs.

tritemio commented 10 years ago

@newville The logic is the following. A composite model delegates the creation of parameters to the sub-models, so they apply whatever hints they have. When the composite model has completed merging the parameters from each sub-model, it applies the additional hints (if any) set on the composite model itself (the dict param_hints in the composite model only stores its own hints, not the sub-models' ones). So a composite model can optionally add/modify parameters as needed (and override some of the sub-model defaults).

So, the hints on composite models are not propagated back to base models. There are no global hints, each model (either base or composite) only stores its own hints.

newville commented 10 years ago

Oh, OK. You say:

> So, the hints on composite models are not propagated back to base models. There are no global hints; each model (either base or composite) only stores its own hints.

I thought you said earlier that param_hints on the composite could override the hints for the components.

But, let's just try it....

tritemio commented 10 years ago

Sorry for being unclear. Let me try again.

From a logical point of view: when calling make_params on a composite model, first the hints of any sub-model are applied, then the hints set on the composite model with set_param_hint are applied too (potentially overriding some sub-model hints).

From an implementation point of view: each sub-model stores only its own hints in the dict param_hints. A composite model (model) stores in model.param_hints only the hints explicitly set with set_param_hint on the composite model. When calling make_params, the composite model first calls make_params for each sub-model and then applies the additional hints in its own model.param_hints dict.

To avoid confusion, I'm thinking of making .param_hints private (._param_hints), since it should never be touched directly.

> I'm not entirely sure on the propagation of "parameter hints" from composite to components. If one does composite.set_param_hint('sigma', min=0), does that always propagate to all components? Should it?

If you have a parameter called sigma in the composite model, then the hint is applied. But the hint is not applied to other parameters having sigma as a basename (for example p1_sigma). You have to use the full name p1_sigma to apply hints on composite models.

The current set_param_hint allows adding parameters, so if you have p1_sigma (but not sigma) and you call composite.set_param_hint('sigma', min=0), then a new parameter sigma is created. This was the behavior in model_noparams before my patch and I didn't change it.

newville commented 10 years ago

> From a logical point of view: when calling make_params on a composite model, first the hints of any sub-model are applied, then the hints set on the composite model with set_param_hint are applied too (potentially overriding some sub-model hints).

Oh, sorry I thought you were saying approximately the opposite. I think it would be much better to be concrete about what does happen. Like what happens if someone does

 m1 = SomeModel(prefix='m1_')
 m1.set_param_hint('thing', min=1)

 m2 = AnotherModel(prefix='m2_')
 m2.set_param_hint('other', min=-1, max=1)

 m3 = GaussianModel(prefix='m3_')
 m3.set_param_hint('sigma', min=0.003)

 m4 = GaussianModel()  # no prefix
 m4.set_param_hint('sigma', min=0.3)

 m = m1 + m2 + m3 + m4
 m.set_param_hint('sigma', min=2.0)

 pars = m.make_params()

Like, suppose there is a sigma in m2 but not in m1, in m1 but not in m2, or in both m1 and m2? Is sigma updated for m4? For m3? What *should* this do? Why?

I think my preference (not strong) may be to ignore the parameter hints set for the composite m and insist that parameter hints be set for the components.

> From an implementation point of view: each sub-model stores only its own hints in the dict param_hints. A composite model stores in model.param_hints only the hints explicitly set with set_param_hint on the composite model. When calling make_params, the composite model first calls make_params for each sub-model and then applies the additional hints in its own model.param_hints dict.
>
> In order to avoid confusion, I'm thinking of making .param_hints private (._param_hints), since it should never be touched directly.

Hmm, I'm not sure that changes the potential for confusion. I don't think that's the source of confusion.

> I'm not entirely sure on the propagation of "parameter hints" from composite to components. If one does composite.set_param_hint('sigma', min=0), does that always propagate to all components? Should it?
>
> If you have a parameter called sigma in the composite model then the hint is applied.

Hmm, and if the composite doesn't have a sigma parameter, but one of the components does?

> But the hint is not applied to other parameters having sigma as a basename (for example p1_sigma). You have to use the full name p1_sigma to apply hints on composite models.

Is that by design? Does it mean propagation depends on whether a model has a prefix or not?

> The current set_param_hint allows adding parameters, so if you have p1_sigma (but not sigma) and you call composite.set_param_hint('sigma', min=0), then a new parameter sigma is created. This was the behavior in model_noparams before my patch and I didn't change it.

Sometimes a model wants to define a new parameter such as 'fwhm' or 'midpoint'.

Anyway, I think we should just try this out and see if it is workable...

licode commented 10 years ago

Thanks for the recommendation on fluorescence analysis. I think the task is actually straightforward: we have multiple Gaussian functions plus others, such as Compton and elastic peaks and so on. So a composite model is definitely a good option for the model construction.

Per the discussion about the stateless model here, is it possible for us to do the following?

m = m1 + m2 + m3 + m4
m.set_param_hint(m1, 'sigma', min=2.0)

So it is clear which sub-model we want to change. Or we could do multiple changes as

m.set_param_hint(m1, m2, 'sigma', min=2.0)

Tillsten commented 10 years ago

OT, fitting fluorescence: While the Gaussian is the most commonly used line-shape for fitting fluorescence spectra, it is the wrong model in a lot of cases, e.g. the red edge effect. For fitting complicated spectra that are a sum of components -- F(x, \psi) = \sum_i c_i f_i(x, \psi) -- it can be helpful to fit only the non-linear parameters with Levenberg-Marquardt: with those you build a normalized basis and then solve the linear least-squares problem to calculate the coefficients. These kinds of problems are often called separable nonlinear least squares, or varpro. There are formulas for calculating the errors of the coefficients using the Jacobian. It is quite easy to do with python; if someone is interested I can share the code.
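For the curious, the core of that separable approach fits in a few lines (a generic sketch; names are illustrative):

import numpy as np

def residual(psi, x, y, basis_funcs):
    # build the basis matrix from the current nonlinear parameters psi
    A = np.column_stack([f(x, psi) for f in basis_funcs])
    # inner linear least-squares solve for the coefficients c_i
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    # residual handed to the outer (Levenberg-Marquardt) loop
    return y - A @ coefs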

tacaswell commented 10 years ago

@Tillsten We at @NSLS-II are definitely interested. We can move this discussion someplace more sensible if you want.

licode commented 10 years ago

@Tillsten, thanks for your suggestions. The Gaussian function is only a very simple example, and we did implement functions to deal with much more realistic cases, such as considering different tail functions for K beta vs K alpha lines. I will add you to another new issue so as not to deflect from the main topic here. Thanks!

tritemio commented 10 years ago

@newville I don't see the confusion. Taking your example:

 m1 = SomeModel(prefix='m1_')
 m1.set_param_hint('thing', min=1)

 m2 = AnotherModel(prefix='m2_')
 m2.set_param_hint('other', min=-1, max=1)

 m3 = GaussianModel(prefix='m3_')
 m3.set_param_hint('sigma', min=0.003)

 m4 = GaussianModel()  # no prefix
 m4.set_param_hint('sigma', min=0.3)

 m = m1 + m2 + m3 + m4
 m.set_param_hint('sigma', min=2.0)

 pars = m.make_params()

Forget the hints for a second. What parameters does m have?

m1_thing, m2_other, m3_sigma, sigma

Before the call m.set_param_hint('sigma', min=2.0), the hints for sigma are inherited from the sub-model m4, so min=0.3. After calling m.set_param_hint('sigma', min=2.0), the hints for sigma become min=2.0, as requested. But, if that was your concern, the object m4 is not modified.

The bottom line is: in composite models, prefixes are never neglected. If you say sigma you mean sigma, not all the params ending in sigma.

This could be a piece of docs:

When calling the set_param_hint method, it is in general advisable to use the full name of the parameter (including the eventual prefix); however, in base models, it is also possible to drop the prefix for brevity. In composite models, set_param_hint must always be called with the full parameter name (with prefix, if present).

tritemio commented 10 years ago

@licode ATM your example:

m = m1 + m2 + m3 + m4
m.set_param_hint(m1, 'sigma', min=2.0)

will not work. Since you have many components with the same sigma parameter, you already need to set prefixes to differentiate the sub-components. Let's say the prefixes are p1_, p2_, p3_ for m1, m2, m3, and no prefix for m4. Then your example spells like this:

m.set_param_hint('p1_sigma', min=2.0)

and to set m4's sigma:

m.set_param_hint('sigma', min=2.0)

Currently set_param_hint does not allow any sort of broadcasting; that would be confusing, I think. Also consider that hints are only a "default" starting point. When, during a fit, you need to adjust bounds or initial values, you should do it by operating on the Parameters object returned by make_params(), not by setting hints. That said, you can create a "custom" base model with specific hints and use it multiple times in a composite model. To do that you either subclass Model or write a "factory" function (that creates the model and adds your hints). See here for an example of a factory function.
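Such a factory function could be as simple as this (hypothetical names):

from lmfit.models import GaussianModel

def make_peak(prefix):
    peak = GaussianModel(prefix=prefix)
    peak.set_param_hint('center', value=0.5, min=0, max=1)   # applied to <prefix>center
    return peak

model = make_peak('p1_') + make_peak('p2_') + make_peak('p3_')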

newville commented 10 years ago

Sorry, my confusion was about whether (or how) parameter hints for a composite were propagated to component models. It's clear now that they are not automatically propagated, but that any parameter hint can be set from the composite with a "full parameter name". I think this is fine; it just needs to be clear and documented.

FWIW, I made a slight change to your version of Model.__add__() so that there is less "copying". This means that now:

>>> m1 = SomeModel()
>>> m2 = AnotherModel()
>>> total = m1 + m2
>>> total.components[0] is m1, total.components[1] is m2
(True, True)

so that one can adjust the component models either as "m1" and "m2" or as "total.components[i]".


newville commented 10 years ago

@tacaswell @licode @tillsten -- for XRF analysis: yes, Gaussians are not perfect for XRF but work reasonably well at high count rates with modern detectors. And, assuming that matrix effects can be accounted for well enough, the problem can be linearized. GeoPIXE (in IDL, restricted license) and PyMCA (https://github.com/vasole/pymca) have many of the subtleties worked out.

tritemio commented 10 years ago

@newville


I saw it and I think it's OK. But we probably now need to add a Model.copy() method (that was the point that started the discussion). Let's say we create a base model with some hints and we want to use it multiple times in a composite model. ATM we need a helper function (factory function) or to subclass Model to achieve that. But we could envision doing something like:

mypeak = GaussianModel(prefix='p1_')
mypeak.set_param_hint('center', value=0.5, min=0, max=1)

manypeaks = mypeak + mypeak.copy().set_prefix('p2_') + mypeak.copy().set_prefix('p3_')
newville commented 10 years ago

@tritemio: Good point. It is true that one cannot currently copy a Model and change its prefix. In fact, one cannot easily change a prefix after a model is created (one has to run _parse_params(), which was not meant to be part of the public API).

>>> p1  = GaussianModel(prefix='p1_')
>>> p2 = copy(p1)
>>> p2.prefix = 'a_new_prefix'
>>> p2.param_names
set(['p1_amplitude', 'p1_sigma', 'p1_center'])

So, I'd say there is definitely a need for a set_prefix() method that resets the prefix and re-runs _parse_params().
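Something like this, as a possible Model method (a sketch):

def set_prefix(self, prefix):
    # change the model prefix and regenerate the parameter names
    self.prefix = prefix
    self._parse_params()   # rebuild param_names with the new prefix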

I'm not sure we need a copy() method. Copying a composite with deepcopy does make copies of all components:

>>> from copy import deepcopy
>>> from lmfit.models import GaussianModel, LinearModel
>>> g1 = GaussianModel()
>>> l1 = LinearModel()
>>> m = g1 + l1
>>> w = deepcopy(m)
>>> [id(x) for x in (g1, l1)]
[4388293456, 4388293520]
>>> [id(x) for x in m.components]
[4388293456, 4388293520]
>>> [id(x) for x in w.components]
[4388293776, 4388293840]

If we add set_prefix(), is that good enough? Do you see a need for Model.copy() beyond fixing the prefix issue?

tritemio commented 10 years ago

Now I know that calling deepcopy is "the" way to copy a model. But providing a simple method that just returns deepcopy(self) would avoid any confusion for a new user, and it is a clean, idiomatic API.

Using deepcopy and the prefix setter, my previous (3-line) example becomes:

from copy import deepcopy

mypeak = GaussianModel(prefix='p1_')
mypeak.set_param_hint('center', value=0.5, min=0, max=1)

peak2 = deepcopy(mypeak)
peak2.prefix = 'p2_'

peak3 = deepcopy(mypeak)
peak3.prefix = 'p3_'

manypeaks = mypeak + peak2 + peak3

Now that I think about it, copying and changing the prefix are actions often performed together. Why don't we provide a copy method that can also change the prefix? The same example becomes:

mypeak = GaussianModel(prefix='p1_')
mypeak.set_param_hint('center', value=0.5, min=0, max=1)

peak2 = mypeak.copy(prefix='p2_')
peak3 = mypeak.copy(prefix='p3_')
manypeaks = mypeak + peak2 + peak3

Concise and expressive. What do you think?
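As a sketch (assuming the set_prefix() discussed above exists), this copy() could be:

from copy import deepcopy

def copy(self, prefix=None):
    # a possible Model method: deep-copy, optionally with a new prefix
    new = deepcopy(self)
    if prefix is not None:
        new.set_prefix(prefix)
    return new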

newville commented 10 years ago

Hmm,


I'm not sure how often this will be used. Given the choice of "make another Gaussian peak" or "make a copy of peakX with all its parameter hints", which is the more obvious and common approach? I would expect "make another Gaussian" to be more common, but I could be wrong. Is it OK to delay this until it's clearly shown to be necessary?

tritemio commented 10 years ago

It saves the time of applying the hints, which can be many. But OK, let's wait and see if somebody else would use it.

danielballan commented 10 years ago

FWIW, I am +1 on adding a copy method even if it is just a thin wrapper around deepcopy. Along the lines of what @tritemio said, if an object has its own copy method then I know I can trust it. No confusion.


newville commented 10 years ago

OK, I'm willing to be out-voted on the Model.copy() method. If either of you wants to do this, I'm OK with it.

Related: I started to re-work the docs for 0.8.0 this evening. @danielballan and @tritemio should be listed as authors. I see Daniel's affiliation as Johns Hopkins, but I don't know Antonino's. Can you send this to me? I'd like to get a zenodo DOI link for 0.8.0 when it's ready. I know it can be tough getting credit for work on software, and I'd like to make sure you two get most of the credit for this work.

danielballan commented 10 years ago

+1 for a zenodo DOI. Thanks!