mljs / levenberg-marquardt

Curve fitting method in JavaScript
MIT License

Uncertainties / weights request. #21

Closed ricleal closed 6 years ago

ricleal commented 6 years ago

It would be nice to have uncertainties (e.g. the error of each measurement) or weights used as part of the fitting.

Lmfit (https://lmfit.github.io/lmfit-py/model.html) takes them as weights: when you fit your data you pass weights=.... I usually use the error of the measurement to compute the weight, weights = 1/y_err.

For example, the data could look like this, where e is the error associated with the measurement y:

var data = {
  x : [
    0.005, 0.01,  0.015, 0.02,  0.025, 0.03,  0.035, 0.04,  0.045,
    0.05,  0.055, 0.06,  0.065, 0.07,  0.075, 0.08,  0.085, 0.09,
    0.095, 0.1,   0.105, 0.11,  0.115, 0.12,  0.125, 0.13,  0.135,
    0.14,  0.145, 0.15,  0.155, 0.16,  0.165, 0.17,  0.175
  ],
  y : [
    1.059053, 0.820483, 0.889606, 0.695512, 0.578699, 0.527058, 0.541055,
    0.585198, 0.484689, 0.513663, 0.412635, 0.380731, 0.351293, 0.324131,
    0.29907,  0.271001, 0.25461,  0.234924, 0.21676,  0.216298, 0.184536,
    0.177404, 0.145856, 0.124078, 0.121299, 0.123407, 0.126478, 0.110142,
    0.096938, 0.091407, 0.086616, 0.076146, 0.072392, 0.071292, 0.060748
  ],
  e : [
    1.02910301, 0.90580517, 0.94318927, 0.83397362, 0.76072268,
    0.7259876,  0.73556441, 0.76498235, 0.69619609, 0.71670287,
    0.64236672, 0.61703403, 0.59269976, 0.56932504, 0.54687293,
    0.52057756, 0.50458894, 0.48468959, 0.46557491, 0.46507849,
    0.42957654, 0.42119354, 0.38191098, 0.35224707, 0.34828006,
    0.35129332, 0.35563746, 0.33187648, 0.31134868, 0.30233591,
    0.29430596, 0.27594565, 0.26905761, 0.26700562, 0.24647109
  ]
};

let result = levenbergMarquardt(data, <fit function>, <options>);
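
Just to make the idea concrete, here is a rough sketch of how I imagine it could look. The weights option below is hypothetical (it does not exist in the current API), and decayFunction is only a made-up example model in the library's parameterized-function style.

// Hypothetical sketch only: `weights` is an imagined option, mirroring lmfit's
// `weights=` argument; it is NOT part of the current levenbergMarquardt API.
const weights = data.e.map((err) => 1 / err); // weight each point by 1 / error

// Made-up example model: params -> function of x.
const decayFunction = ([a, b]) => (x) => a * Math.exp(-b * x);

const options = {
  damping: 1.5,
  initialValues: [1, 10],
  weights, // hypothetical: residuals would be scaled by these before squaring
};

let result = levenbergMarquardt(data, decayFunction, options);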

Just an idea for the future.

targos commented 6 years ago

Seems to make sense. /cc @jacobq @m93a how does this relate to the other open issues/PRs?

ricleal commented 6 years ago

I think this relates to #17. I did not see that issue before.

jacobq commented 6 years ago

Am I understanding correctly that the goal of this proposed feature is to allow some portions of the data to be prioritized over others? For example, if the data being used for fitting was sampled/measured and has some available estimate of uncertainty/error (e.g. Standard Error) then the algorithm should place more emphasis on fitting the points which have higher precision. In other words, we are mainly wanting the fitted result to be within the "error bars" associated with each value.
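
To make that concrete, here is a minimal sketch of the kind of objective this would change (assuming the errors are stored in data.e as in the example above); this is not the library's actual objective, just an illustration of the weighting idea:

// Minimal sketch: dividing each residual by its error means precise points
// (small e) dominate the sum being minimized, so the fit favors them.
function weightedSumOfSquares(data, model) {
  let sum = 0;
  for (let i = 0; i < data.x.length; i++) {
    const residual = (data.y[i] - model(data.x[i])) / data.e[i];
    sum += residual * residual;
  }
  return sum;
}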

Edit: If this is the case, then I am pretty sure this is a duplicate, as mentioned above. (Sorry for suggesting that you open a new issue -- I didn't realize what you were asking.)

ricleal commented 6 years ago

@jacobq That's it! My fault, I should have checked the existing issues first. I'm closing this one then.