jonathf / chaospy

Chaospy - Toolbox for performing uncertainty quantification.
https://chaospy.readthedocs.io/
MIT License

problem with point collocation for log-normal distribution #110

Closed. samira-m94 closed this issue 5 years ago.

samira-m94 commented 6 years ago

I am running the following code to build an approximation of my function from different numbers of sample points and then compute the first four central moments of the output. The variables with index 1 are sampled directly from a Lognormal distribution; the variables with index 2 are sampled from a Uniform distribution and then transformed to Lognormal. When I use the Lognormal distribution directly, the third and fourth moments are wrong and grow as the number of sample points increases, whereas sampling from Uniform and transforming to Lognormal gives correct values. As far as I can tell something is wrong with the approximation, but I still do not know whether I am doing this correctly.

import chaospy as cp
from scipy import stats
import numpy as np
def kirby_function(x):
    b = [0.9756, 0.2811, 0.9424, 0.0772, 0.8096]
    y=(b[0]+b[1]*x+b[2]*x**2)/(1+b[3]*x+b[4]*x**2)
    return y
m = 1000
while m <= 1000000:
    d1 = cp.Lognormal(1.5, 0.37)
    d2 = cp.Uniform(0, 1)

    samples1 = d1.sample(m-1, rule="S")
    samples2 = d2.sample(m-1, rule="S")

    solves1 = [kirby_function(sample) for sample in samples1.T]
    solves2 = [kirby_function(stats.lognorm.ppf(stats.uniform.cdf(sample, 0, 1), 0.37, loc=0, scale=np.exp(1.5))) for sample in samples2.T]
    expansion1 = cp.orth_ttr(5, d1)
    expansion2 = cp.orth_ttr(5, d2)
    approx1 = cp.fit_regression(expansion1, samples1, solves1)
    approx2 = cp.fit_regression(expansion2, samples2, solves2)
    M1 = []
    M2 = []
    for n in range(4):
        B1 = cp.E(approx1 ** (n + 1), d1)
        M1.append(B1)
        B2 = cp.E(approx2 ** (n + 1), d2)
        M2.append(B2)

    miu = M1[0]
    std = abs(M1[1] - miu ** 2)**0.5
    skw = (M1[2] - 3 * miu * std ** 2 - miu ** 3)/std ** 3
    kurt = (M1[3] - 4 * miu * M1[2] + 6 * (miu ** 2) * M1[1] - 3 * miu ** 4)/std ** 4
    print(miu, std, skw, kurt)

    miu2 = M2[0]
    std2 = abs(M2[1] - miu2 ** 2)**0.5
    skw2 = (M2[2] - 3 * miu2 * std2 ** 2 - miu2 ** 3)/std2 ** 3
    kurt2 = (M2[3] - 4 * miu2 * M2[2] + 6 * (miu2 ** 2) * M2[1] - 3 * miu2 ** 4)/std2 ** 4
    print(miu2, std2, skw2, kurt2)
    m+=1000

By the way, when I do the same procedure for another simple function like x^2, the results are the other way around: the first approach gives correct moments compared to the second one. I am using Python 3.6.

jonathf commented 6 years ago

Using a Lognormal distribution as a basis for an expansion is not recommended. It is a troublemaker because of its heavy tail, so a direct approach is likely to be unstable.

The second problem is that the mapping between Uniform and Log-normal is non-smooth, meaning that you add a lot of noise by using Uniform as a proxy. The proper way is to use a distribution that results in a smooth map. In the case of Log-normal, the obvious choice is a Normal (Gaussian) distribution.
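As an illustration (a minimal sketch, assuming cp.Lognormal(mu, sigma) is parameterized by the underlying normal, as in your code), you can compare the two maps numerically; the Normal-to-Lognormal map reduces to a plain exponential, while the Uniform-to-Lognormal map changes extremely fast near the endpoints of [0, 1]:

import numpy as np
import chaospy as cp

d_logn = cp.Lognormal(1.5, 0.37)
d_norm = cp.Normal(1.5, 0.37)
d_unif = cp.Uniform(0, 1)

q = np.linspace(0.5, 2.5, 5)    # points in the Normal proxy space
u = np.linspace(0.01, 0.99, 5)  # points in the Uniform proxy space

# Normal -> Lognormal: smooth, should coincide with exp(q) for this parameterization
print(d_logn.inv(d_norm.fwd(q)))
print(np.exp(q))

# Uniform -> Lognormal: very steep near u -> 0 and u -> 1, which is what adds the noise
print(d_logn.inv(d_unif.fwd(u)))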

And lastly the third problem, which is more chaospy specific: the implementation of the Sobol sequence gets exhausted fairly quickly, making the samples less workable at the scales you are using them. I recommend using a method like Hammersley (rule="M") or Halton (rule="H"), which do not have this problem. (Their seeds reset each time you call .sample.)

I've taken the liberty to modify your code a bit:

import chaospy as cp
import numpy as np

def kirby_function(x):
    b = [0.9756, 0.2811, 0.9424, 0.0772, 0.8096]
    y = (b[0]+b[1]*x+b[2]*x**2)/(1+b[3]*x+b[4]*x**2)
    return y

d1 = cp.Lognormal(1.5, 0.37)
d2 = cp.Normal(1.5, 0.37)

m = 1000
while m <= 1000000:

    expansion = cp.orth_ttr(5, d2)
    samples = d2.sample(m-1, rule="H")
    samples_mapped = d1.inv(d2.fwd(samples))
    solves = [kirby_function(sample) for sample in samples_mapped.T]
    approx = cp.fit_regression(expansion, samples.T, solves)
    print(m)
    print(cp.E(approx, d2), cp.Std(approx, d2), cp.Skew(approx, d2), cp.Kurt(approx, d2, fisher=False))

    m += 1000

This should be the correct and stable way of solving your problem.

samira-m94 commented 6 years ago

Thanks for your complete response. Just one more thing: for functions like polynomials, x^n with n any integer, using the Lognormal directly gives much more accurate moments than using a Normal and transforming it to Lognormal. So even in these cases Lognormal is not recommended?

The other question: given what you mentioned about the seed resetting each time for Halton, does this mean that it is not sequential? I am using Halton as well and I need it to be sequential.

jonathf commented 6 years ago

I think that as long as you keep the polynomial order low enough and you sanity-check what you do, you should be fine.
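One way to do that sanity check (just a sketch, reusing kirby_function and the Lognormal distribution from the code above) is to compare the expansion's statistics against plain sample statistics of the same model evaluations:

import numpy as np
import chaospy as cp

d1 = cp.Lognormal(1.5, 0.37)
samples = d1.sample(10000, rule="H")
evals = np.array([kirby_function(sample) for sample in samples.T])

# low-order expansion fitted on the same evaluations
approx = cp.fit_regression(cp.orth_ttr(3, d1), samples, evals)

# the expansion's mean/std should roughly agree with the raw sample statistics
print(cp.E(approx, d1), evals.mean())
print(cp.Std(approx, d1), evals.std())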

If you want to use Sobol in the way you are using it, I would recommend creating a pre-computed samples array and slicing it at each loop iteration. Like:

samples = d2.sample(1000000, rule="S")
while m <= 1000000:

    samples_mapped = d1.inv(d2.fwd(samples[:m]))
    ...

This way the results from one iteration to the next are more comparable.
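Filled out, that could look something like this (a sketch only, reusing kirby_function, d1 and d2 from the code above):

expansion = cp.orth_ttr(5, d2)
samples = d2.sample(1000000, rule="S")   # draw the full Sobol sequence once

m = 1000
while m <= 1000000:
    # reuse the first m points so each iteration extends the previous one
    samples_mapped = d1.inv(d2.fwd(samples[:m]))
    solves = [kirby_function(sample) for sample in samples_mapped.T]
    approx = cp.fit_regression(expansion, samples[:m], solves)
    print(m, cp.E(approx, d2), cp.Std(approx, d2))
    m += 1000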

In the case of Halton and Hammersley, I haven't implemented a persistent seed for them. So if you run your variant of the code, or mine, you get the same results. In other words:

d1.sample(5, rule="M") == d1.sample(5, rule="M")
d1.sample(5, rule="S") != d1.sample(5, rule="S")

That is the difference.

samira-m94 commented 6 years ago

But the fact is that using the Lognormal directly gives better values at orders around 4 and 5 for simple functions like x^3. So I am wondering whether this whole issue with the Lognormal distribution is caused by the polynomial chaos expansion method itself, or whether it is this package that is not doing well in that area?

jonathf commented 6 years ago

It is a little bit of a give and take here.

On one hand, using the log-normal distribution is unstable, which means it likely works great for lower-order polynomials, but collapses as you go up.

On the other hand, using a proxy variable comes with the approximation error of having to model your problem in a transformed space. If you take the function x^3, then with the log-transformation you are essentially approximating x^3 with polynomials in log(x), which is the same as approximating exp(3q) with polynomials in the proxy variable q. As the basis is polynomial, it follows that the direct approximation should be much easier than the transformed one.

If you want to see the effect reversed, try approximating the function e^x instead.
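To make the comparison concrete, here is a rough sketch (not a definitive benchmark) that fits x^3 both directly on the Log-normal and through the Normal proxy, and checks the means against the closed-form value E[X^3] = exp(3*mu + 9*sigma^2/2); swapping the model for np.exp shows the reversed effect:

import numpy as np
import chaospy as cp

mu, sigma = 1.5, 0.37
d_logn = cp.Lognormal(mu, sigma)
d_norm = cp.Normal(mu, sigma)

model = lambda x: x ** 3                            # try np.exp instead to see the effect reversed
exact_mean = np.exp(3 * mu + 9 * sigma ** 2 / 2)    # E[X^3], only valid for the x**3 model

# direct approach: basis and samples in the Log-normal variable
samples_direct = d_logn.sample(4000, rule="H")
approx_direct = cp.fit_regression(
    cp.orth_ttr(5, d_logn), samples_direct, model(samples_direct))

# proxy approach: basis in the Normal variable, model evaluated on mapped samples
samples_proxy = d_norm.sample(4000, rule="H")
mapped = d_logn.inv(d_norm.fwd(samples_proxy))
approx_proxy = cp.fit_regression(
    cp.orth_ttr(5, d_norm), samples_proxy, model(mapped))

print(exact_mean)
print(cp.E(approx_direct, d_logn), cp.E(approx_proxy, d_norm))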

samira-m94 commented 6 years ago

Thanks for your helpful responses. Just one last question: according to your last response, is all of this happening because of the unstable nature of the log-normal distribution, and does it have nothing to do with the implementation of polynomial chaos expansion in chaospy?

jonathf commented 6 years ago

It is because of the Log-normal distribution. I think this will give you some insight: www.maths.dur.ac.uk/events/Meetings/LMS/2010/NAMP/Talks/ernst.pdf