dokester / BayesicFitting

Bayesian fitting package
GNU General Public License v3.0

Is a 4 parameter logistic model possible? #22

Closed genhayes closed 1 month ago

genhayes commented 7 months ago

Hello! For my research I'm trying to use a Bayesian approach to fit my dataset with a sigmoid model, and I have had success using your 3-parameter logistic function (LogisticModel()). I'd like to add a parameter to allow for an asymmetric sigmoid by raising the denominator to the power of p_4:

y = p_1 / ( 1 + exp( ( p_2 - x )/p_3 ) )**p_4

Is there a way to alter the 3-parameter LogisticModel() to incorporate this or otherwise create a new model class that has a parameter for asymmetry?

Thank you for your help!

dokester commented 7 months ago

Dear Genevieve,

You can do that by piping the LogisticModel through a PowerLawModel. A PowerLawModel contains the function f(x:p) = p_0 * ( x - p_1 ) ** p_2. It has 3 parameters, but you can fix p_0 at 1.0 and p_1 at 0.0 by defining

m1 = PowerLawModel( fixed={0:1.0, 1:0.0} )

Now we have the function f(x:p) = x**p_0. If we now replace x with the result of the LogisticModel, we have what we want. So we combine the 2 models using a pipe, |, which feeds the output of the left-hand model into the input of the right-hand model.

m2 = LogisticModel()
model = m2 | m1
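
In full, a minimal sketch of the construction (just the imports and the lines above, plus a print to check the result):

from BayesicFitting import LogisticModel, PowerLawModel

m1 = PowerLawModel( fixed={0:1.0, 1:0.0} )     # reduces to f(x:p) = x**p_0
m2 = LogisticModel()                           # 3-parameter logistic
model = m2 | m1                                # logistic output fed into the power law

print( model )                                 # inspect the combined 4-parameter model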

I think that will work. Otherwise come back to me.

Success,
Do Kester.

genhayes commented 7 months ago

Brilliant thank you, this worked for me!

One other quick question: when defining a Gaussian prior with a mean and std, plus min/max limits on the parameter, am I correct in thinking that this is defined by model.setPrior( parameter_number, prior=GaussPrior( center=mean, scale=std, limits=[min, max] ) )? I just want to make sure that I'm interpreting the center and scale correctly.

Thank you so much for your help and fast response.

dokester commented 7 months ago

Yes, your definition of a GaussPrior is OK. One remark, however, on GaussPriors: they don't need limits, in theory. In practice, a GaussPrior does not function outside [-8,+8] std. For computational reasons, the result of the prior is a hard zero there, so your parameters cannot move outside of [-8,+8] std.
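
For example (the parameter index and the numbers are just placeholders for your own values):

from BayesicFitting import GaussPrior

# Gaussian prior with mean 5.0 and standard deviation 1.0 on parameter 2;
# the limits are optional, but in practice the prior is zero beyond ~8 std anyway
model.setPrior( 2, prior=GaussPrior( center=5.0, scale=1.0, limits=[0.0, 10.0] ) )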

Do

genhayes commented 7 months ago

Okay perfect, I think I've got it working now! I've used both the Levenberg-Marquardt fitter and the NestedSampler method, which return similar parameter mean values, but the L-M method returns very high standard deviations. Do you know why that might be the case?

These are what my outputs look like from both fitters:

[screenshot: output from both fitters]

Thank you again for your help!

dokester commented 7 months ago

Hi Genevieve,

It surprises me. Looking at the results I cannot say what is happening, but I would trust the NestedSampler results more.

Did you look at the fit in a plot? E.g. by adding the keyword plot=True to both the fit method of LMF and the sample method of NS:

pars = lmf.fit( ydata, plot=True )
logE = ns.sample( plot=True )

Convince yourself the fit is what you want it to be.
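
In full, assuming xdata and ydata hold your data and the priors are already set on the model, the two runs could look something like this sketch:

from BayesicFitting import LevenbergMarquardtFitter, NestedSampler

# Levenberg-Marquardt fit
lmf = LevenbergMarquardtFitter( xdata, model )
pars = lmf.fit( ydata, plot=True )
print( pars, lmf.stdevs )

# NestedSampler run on the same model and data
ns = NestedSampler( xdata, model, ydata )
logE = ns.sample( plot=True )
print( ns.parameters, ns.stdevs )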

You can also take a look at the documentation file: BayesicFitting/docs/troubles.md where I collected common gotchas, restrictions, errors and mistakes, mostly made by myself.

I was also wondering what p4 would be. p0, p1 and p2 are from the LogisticModel and p3 from the PowerLawModel, so what is p4? As you can read in the troubles.md file, degeneracy of the model is a big issue, especially for LMF, which relies on matrix inversion. NS is less affected.

If that all does not help, I would have to take a look at your data set to see what is going on. Don't worry about my time; I am pensioned so I have enough of it.

Do

genhayes commented 7 months ago

Okay, interestingly fitter.monteCarloError() is also returning NaN for the Levenberg-Marquardt method; however, I think the NestedSampler is generally working a little better across my different datasets. They give a similar fit though:

[screenshot: LM fitter result]

[screenshot: NestedSampler result]

I think I will stick with the NestedSampler method for now. Is there a chi-squared or confidence region to compare model fits, or would I be best to stick with residuals?

Thank you so much, this toolbox is so helpful for implementing Bayesian fitting!

dokester commented 7 months ago

Hi Genevieve,

Looking at your data I am not much surprised that LMF and NS yield different sets of parameters while the actual plot is very much the same. It signals that your model parameters are quite correlated under this data set. There is not much curvature in the data to define the parameters better.

Unless you have a very good (theoretical) reason to fit this 5-parameter model, I would try something simpler, e.g. without the power law and/or without the polynomial. See what the evidence says about a simpler model.
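
As a sketch (your data arrays stand in here), comparing the evidence of the full model with that of a plain logistic could look like:

from BayesicFitting import LogisticModel, PowerLawModel, NestedSampler

full = LogisticModel() | PowerLawModel( fixed={0:1.0, 1:0.0} )   # 4-parameter asymmetric logistic
simple = LogisticModel()                                         # plain 3-parameter logistic

# give every parameter of both models a proper prior, as above, before sampling
logE_full = NestedSampler( xdata, full, ydata ).sample()
logE_simple = NestedSampler( xdata, simple, ydata ).sample()

# the model with the larger log-evidence is the one preferred by the data
print( logE_full, logE_simple )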

You can get 1-sigma confidence regions from the SampleList, obtained after running NestedSampler:

logE = ns.sample( plot=True )
samplelist = ns.samples
confregion = samplelist.monteCarloError( xdata )

for any array of inputs in xdata.
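
The confidence region can then be drawn as a band around the fit, for instance (a sketch, assuming ns and samplelist from above and matplotlib for the plotting):

import matplotlib.pyplot as plt

yfit = model.result( xdata, ns.parameters )     # model curve at the fitted parameters
err = samplelist.monteCarloError( xdata )       # 1-sigma spread of the posterior samples

plt.plot( xdata, ydata, 'k.' )                  # data
plt.plot( xdata, yfit, 'g-' )                   # fit
plt.fill_between( xdata, yfit - err, yfit + err, color='g', alpha=0.3 )
plt.show()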

The red dots in your NS plot are random draws from the posterior. The width of the band of dots is also a good indication of the quality of the fit.

Thanks for using my toolbox and spread the news.

Do.

genhayes commented 7 months ago

Hi Do,

I apologize for my slow response, we are collecting more data now and I'm hoping to apply these methods to that as well.

These are physiological measures of blood flow, so we have good theoretical backing for the shape of the model, but it is unfortunate that the data might not be covering a large enough dynamic range. The good news is that the Bayesian fitting with priors is performing much better than a least-squares regression fit of the model. We are hoping the new data will be more representative; I will keep you updated on how the model fitting goes.

Thanks again, my whole lab is hearing all about BayesicFitting these days!

dokester commented 7 months ago

I would like to hear and see about your new data and about the model fitting.

From experience I can say that it is always better to use a theoretically sound model than something invented ad hoc.

I will keep this issue open for now, although there is not much for me to do.

Do Kester

dokester commented 1 month ago

I think we can close this discussion.