Closed DavAug closed 4 years ago
Which situations?
The question to answer here is: "is this general enough to go in PINTS?" The idea has always been that users can create their own likelihood functions.
(If you were really brave, you could make a likelihood that adds two other likelihoods together, or even implement the plus, minus etc. operators for likelihoods, and then work out what that means for the S1 etc... :D) <-- maybe not!
I would argue that, for time series problems (with no correlated noise, or with well-separated measurements), a combined Gaussian error is general enough. In almost all measurement processes there is a lower limit on the uncertainty, which is why a base-level noise term would be good to have. But many measurement devices also have an uncertainty that scales with the output, which can be captured by a multiplicative Gaussian error.
I know that such a combined error model is used quite extensively in the PKPD community.
Cool! Go for it!
In some situations it's beneficial to be able to specify an error model that combines a base-level Gaussian noise with a Gaussian noise that scales with the model output:
y_pred = y + (a + b * y^eta) * epsilon,

where epsilon is a standard Gaussian random variable and y denotes the noise-free model predictions.
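To make the error model concrete, here is a minimal simulation sketch. The parameter values for a, b and eta below are purely illustrative and not taken from the discussion above:

```python
import numpy as np

# Sketch of sampling from the combined error model
#   y_pred = y + (a + b * y**eta) * epsilon,  epsilon ~ N(0, 1).
# Parameter values are illustrative assumptions, not from the issue.
rng = np.random.default_rng(1)

y = np.linspace(1.0, 10.0, 50)   # noise-free model predictions
a, b, eta = 0.5, 0.1, 1.0        # base-level noise, scaling, and exponent

sigma = a + b * y**eta           # output-dependent standard deviation
y_pred = y + sigma * rng.standard_normal(y.shape)
```

The base-level term a keeps sigma bounded away from zero even where the model output is small, while b * y^eta grows with the output, matching the two noise sources described above.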
In analogy to the existing GaussianLogLikelihood and the MultiplicativeGaussianLogLikelihood, we could introduce a CombinedGaussianLogLikelihood that implements the described error model.
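The log-likelihood for this error model could be sketched as below. This is just the maths, not the PINTS API; the function name and signature are hypothetical (in PINTS it would presumably be wrapped in a class like the existing Gaussian likelihoods):

```python
import numpy as np

def combined_gaussian_log_likelihood(y, y_obs, a, b, eta):
    """Log-likelihood of observations y_obs under the combined error model

        y_obs = y + (a + b * y**eta) * epsilon,  epsilon ~ N(0, 1),

    i.e. each observation is Gaussian with mean y and standard deviation
    a + b * y**eta. A sketch only; names and signature are hypothetical.
    """
    y = np.asarray(y, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    sigma = a + b * y**eta
    residuals = y_obs - y
    # Sum of independent Gaussian log-densities
    return np.sum(
        -0.5 * np.log(2 * np.pi)
        - np.log(sigma)
        - 0.5 * (residuals / sigma)**2
    )
```

With b = 0 this reduces to the constant-noise Gaussian log-likelihood, and with a = 0 to the purely multiplicative one, which is what makes the combined model a natural superset of the two existing classes.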