mstksg / uncertain

Manipulating numbers with inherent measurement/experimental uncertainty.
https://hackage.haskell.org/package/uncertain
BSD 3-Clause "New" or "Revised" License

Ignores Correlations #1

Closed: barak closed this issue 8 years ago

barak commented 10 years ago

This is a sort of subtle bug. Consider the following two expressions.

*Data.Uncertain> 10 `withPrecision` 2 - 10 `withPrecision` 2
0.0 +/- 1.
*Data.Uncertain> let x = 10 `withPrecision` 2 in x - x
0.0 +/- 1.

The first of these is giving the right answer, because the uncertainties of the two 10s are independent. But the second should give 0 +/- 0: regardless of your uncertainty about its true value, x minus x is zero.

Tricky, eh?

This trick can also be used in the other direction, by the way, to get the system to underestimate rather than overestimate uncertainties. E.g., x*x*x*x*x, where the five factors get treated as independent even though they are perfectly correlated.
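To put numbers on both directions (standard first-order error propagation): for independent X and Y, each with standard deviation \sigma,

    \operatorname{Var}(X - Y) = \operatorname{Var} X + \operatorname{Var} Y = 2\sigma^2, \qquad \operatorname{Var}(X - X) = 0,

so treating the two occurrences of x as independent overestimates the spread of x - x. For products it cuts the other way: treating the n factors of x \cdot x \cdots x as independent gives roughly \sqrt{n}\, x^{n-1} \sigma, while the correlated answer is n\, x^{n-1} \sigma, an underestimate by a factor of \sqrt{n}.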

mstksg commented 10 years ago

Thank you @barak

I do mention the "problem with x*x*x*x" in the documentation for Data.Uncertain; the semantics are meant to imply that every occurrence of x is an i.i.d. sample, which I think makes the most sense given referential transparency:

let x = 10 `withPrecision` 2 in x - x

has to, by RT, be equivalent to

10 `withPrecision` 2 - 10 `withPrecision` 2

The "solution" I proposed was to use (*) and (**) for the case of correlated samples, and (*~) and (^) for uncorrelated samples.

It's a crude solution, I'll admit, as it only addresses things like x*x*x*x (which is x^4) differing from x ** 4 (correlated, singly-sampled x). It can't quite handle x - x, unfortunately.
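For instance (first-order numbers worked by hand, not actual library output), with x = 10 +/- 1:

    x ** 4  -- correlated:       sigma = 4 * 10^3 * 1 = 4000,      so 10000 +/- 4000
    x ^ 4   -- i.i.d. x*x*x*x:   sigma = sqrt 4 * 10^3 * 1 = 2000, so 10000 +/- 2000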

That being said, I'm pretty sure these semantics (that x represents not a sampled value but a source of samples) are not directly stated or implied anywhere in the documentation, and might actually be contradicted. That's because I'm sort of just realizing them now :)

I'd welcome any suggestions that would allow one to properly express something like x - x where x is "sampled once", while preserving referential transparency :)

barak commented 10 years ago

I'd welcome any suggestions that would allow one to properly express something like x - x where x is "sampled once", while preserving referential transparency :)

Yeah, that's a tough one. There's a whole field, called Probabilistic Programming, that deals with exactly this; it's currently all the rage in Artificial Intelligence and Machine Learning.

In the context of Haskell (http://www.haskell.org/haskellwiki/Probabilistic_Functional_Programming), a natural thing to do is to have probability distributions live in a monad, in which case it's the monad's job to take care of propagating them. Here is an implementation in Scala: https://github.com/jliszka/probability-monad. There are many ways to do that, all approximations. A pretty simple one is sampling: you represent a distribution not by a set of summary statistics but by a set of samples. This makes it easy to keep correlations; on the other hand, you might need a lot of samples to get a good bound on the accuracy. Getting results of sufficient accuracy with a reasonable amount of computation for complicated high-dimensional distributions is a very difficult problem. It's NP-hard in general, so you might think it hopeless, yet our brains manage to do a good job for many distributions of interest; we don't know how.
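Just to make the sampling idea concrete, here is a toy sketch; the Dist type, the uniform noise in sample, and the 10,000-sample cutoff are all made up for illustration and have nothing to do with this library's API. Combining sample lists pointwise keeps correlations, so x - x really comes out with zero spread:

import Data.List (genericLength)
import System.Random (mkStdGen, randomRs)

-- A distribution represented by a list of samples.  Combining
-- distributions pointwise means index i of every Dist belongs to
-- the same joint draw, so correlations come along for free.
newtype Dist = Dist [Double]

instance Num Dist where
  Dist xs + Dist ys = Dist (zipWith (+) xs ys)
  Dist xs - Dist ys = Dist (zipWith (-) xs ys)
  Dist xs * Dist ys = Dist (zipWith (*) xs ys)
  abs    (Dist xs)  = Dist (map abs xs)
  signum (Dist xs)  = Dist (map signum xs)
  fromInteger n     = Dist (repeat (fromInteger n))

mean :: Dist -> Double
mean (Dist xs) = sum xs / genericLength xs

stdev :: Dist -> Double
stdev d@(Dist xs) = sqrt (sum [(x - m) ^ (2 :: Int) | x <- xs] / genericLength xs)
  where m = mean d

-- Roughly mean mu and standard deviation sigma (scaled uniform noise:
-- a uniform on (0,1) has variance 1/12, hence the sqrt 12 factor).
sample :: Int -> Double -> Double -> Dist
sample seed mu sigma = Dist (take 10000 noise)
  where noise = [ mu + sigma * sqrt 12 * (u - 0.5)
                | u <- randomRs (0, 1) (mkStdGen seed) ]

main :: IO ()
main = do
  let x = sample 1 10 1
      y = sample 2 10 1
  print (mean (x - y), stdev (x - y))  -- ~ (0, sqrt 2): independent draws
  print (mean (x - x), stdev (x - x))  -- exactly (0, 0): correlation kept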

mstksg commented 9 years ago

It's been almost a year, but I think I've begun to hit on a solution with 9453e7a7fec that's different from the probability-monad solutions: a monad that keeps track of values, variances, and covariances as State.

Example usage:

test :: Floating b => [Uncertain b]
test = getCorrelateds $ do
    x <- fromUncertain (3.24 +/- 0.01)
    (y, z) <- fromUncertain2 (11 +/- 4) (15 +/- 6) 0.9    -- 0.9 is the correlation coefficient
    return [x + y, x - y, x + z * y, (x * y) * z, x * (y * z), 2 * x, x + x]

The Correlated monad keeps track of a covariance matrix and updates all of the covariances whenever a new value is added or requested. The "uncertainty" of x + x is preserved as if it were 2 * x, not two independent samples from identical distributions.
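That behavior is just covariance bookkeeping: by bilinearity,

    \operatorname{Var}(aX + bY) = a^2 \operatorname{Var} X + b^2 \operatorname{Var} Y + 2ab \operatorname{Cov}(X, Y),

and taking Y = X with a = b = 1 gives \operatorname{Var}(X + X) = 4 \operatorname{Var} X, the same standard deviation as 2 * X; two independent copies would give only 2 \operatorname{Var} X.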

The example in the original post would be translated as:

test :: Floating b => [Uncertain b]
test = getCorrelateds $ do
    y <- fromUncertain (10 `withPrecision` 2)
    z <- fromUncertain (10 `withPrecision` 2)
    x <- fromUncertain (10 `withPrecision` 2)
    return [y - z, x - x]

The result of y - z should be 0 +/- 1, and the result of x - x should be 0 +/- 0, so you should get:

ghci> test
[0 +/- 1, 0 +/- 0]

As of now the implementation is not complete; the full back-propagation of the covariances and correlations still has to be worked out with pen and paper. But so far I think this shows a lot of promise.
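(For anyone following along, the standard first-order rule this propagation boils down to: when a new quantity u = f(x_1, \ldots, x_n) is introduced, its covariance against any existing variable v is

    \operatorname{Cov}(u, v) \approx \sum_i \frac{\partial f}{\partial x_i} \operatorname{Cov}(x_i, v)

with the partials evaluated at the means; that's the new row and column of the covariance matrix.)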

mstksg commented 8 years ago

With https://github.com/mstksg/uncertain/releases/tag/v0.2.0.0 , I implemented a "probability monad" style interface that keeps track of correlations, but instead of using Monte Carlo sampling, it uses ad to calculate uncertainties, taking correlations between values into account :) It's exported in the Data.Uncertain.Correlated module. I think, with the ad-based solution, I'd consider this issue addressed! :D
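For reference, the x - x example from this thread would look something like the sketch below against the released module; the function names (evalCorr, sampleUncert, resolveUncert) are per my reading of the v0.2 haddocks, so double-check them there:

import Data.Uncertain            ((+/-), Uncert)
import Data.Uncertain.Correlated (evalCorr, resolveUncert, sampleUncert)

-- Both occurrences of x refer to the same sampled variable, so the
-- tracked correlation should make the difference come out as 0 +/- 0.
xMinusX :: Uncert Double
xMinusX = evalCorr $ do
    x <- sampleUncert (10 +/- 1)
    resolveUncert (x - x)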

barak commented 8 years ago

Cute!