odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

Some troubles with the use of OutOfSampleMonteCarlo #397

Closed sarpon closed 3 years ago

sarpon commented 3 years ago

Hi Odow!

I'm working on a minimization problem (risk neutral, with stagewise independence and one random variable) using your library, but I'm finding something weird.

After I train the policy with a bound-stalling or iteration-limit stopping rule, I run a simulation with an out-of-sample Monte Carlo sampling scheme. Then, with this set of simulations, I sum the stage objectives of each trajectory to estimate a 99% confidence interval and take its upper bound. (The code for this paragraph follows.)

using SDDP, Distributions, Statistics

SDDP.train(
    m,
    cut_deletion_minimum = 100,
    refine_at_similar_nodes = true,
    stopping_rules = [SDDP.BoundStalling(10, 1e-4)],
)

# Out-of-sample scheme: keep the in-sample (linear) transitions, but redraw the
# noise at each stage. The root node (0) has no noise, hence the `stage > 0` guard.
sampling_scheme = SDDP.OutOfSampleMonteCarlo(
    m,
    use_insample_transition = true,
) do node
    stage = node
    if stage > 0
        if stage <= 15
            support = rand(Normal(0, 0.068), 100) .+ 0.1
        else
            support = rand(Normal(0, 0.72), 100) .+ 0.2
        end
        probability = 1 / length(support)  # uniform weights
        return [SDDP.Noise(w, probability) for w in support]
    end
end

simulations = SDDP.simulate(m, 100, sampling_scheme = sampling_scheme)

# Total cost of each simulated trajectory.
UBE = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]

# 99% confidence interval on the mean simulated cost.
Alpha = 0.99
S_ube = sqrt(1 / (100 - 1) * sum((UBE .- mean(UBE)).^2))  # sample standard deviation
Ts_nU = quantile(TDist(100 - 1), 1 - (1 - Alpha) / 2)
eU = Ts_nU * S_ube / sqrt(100)

upper_bound = mean(UBE) + eU
lower_bound = SDDP.calculate_bound(m)
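
As a side note, SDDP.jl also provides a helper for this interval calculation. A minimal sketch, assuming a version that exports SDDP.confidence_interval, with the 99% z-score 2.576 standing in for the t-quantile:

# Equivalent CI computation using SDDP's built-in helper.
μ, ci = SDDP.confidence_interval(UBE, 2.576)  # returns (sample mean, CI half-width)
upper_bound = μ + ci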

What I find weird is that the variable "upper_bound" is always less than the variable "lower_bound". When I run an InSample simulation, I don't observe the same thing.

I have tried a few different things:

Sorry if my English is a bit broken... let me know if I wasn't clear in some part of my question.

Kind Regards

Sebastian Arpon

odow commented 3 years ago

Did you check that the simulations are the correct length and visit the right nodes? (Looks like your graph is linear?) For example, something like the sketch below.
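
A quick sanity check along these lines (a sketch, assuming the simulations object from your script; the 20-stage horizon is a placeholder):

# Every trajectory should have the same length (your horizon, e.g. 20).
println(unique(length.(simulations)))
# ...and together they should visit exactly the nodes of the linear graph.
println(unique([s[:node_index] for sim in simulations for s in sim]))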

I'd have to see the full model to say more.

Are the out-of-sample noise terms drawn from the same distribution? If not, then anything can happen. You shouldn't necessarily expect that the out-of-sample simulation does worse than the in-sample lower bound.
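
One way to check this is to compare the noise terms the model was trained with against what the out-of-sample generator draws. A debugging sketch that peeks at SDDP.jl internals (the noise_terms field of a node), so it may differ across versions:

using Statistics
node = m[1]                                     # first node of the linear graph
in_sample = [noise.term for noise in node.noise_terms]
println((mean(in_sample), std(in_sample)))      # compare with the out-of-sample draws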

Is there something weird about your training scenarios?

What happens if you do more out-of-sample simulations (e.g., 10,000)?
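
For example (a sketch reusing the sampling_scheme and imports from above; a larger run shrinks the confidence interval, so a real bound violation is easier to separate from sampling noise):

big_sims = SDDP.simulate(m, 10_000, sampling_scheme = sampling_scheme)
objectives = [sum(s[:stage_objective] for s in sim) for sim in big_sims]
gap = mean(objectives) - SDDP.calculate_bound(m)  # should be ≥ 0 for a minimization
println(gap)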

If your model is simple enough, you could email me the model.

sarpon commented 3 years ago

q: Did you check that the simulations are the correct length and visit the right nodes? a: Yes, my graph is linear; I have checked the length of the simulations and the nodes they visit, and they are fine.

q: Looks like your graph is linear? a: Yes, the graph is linear.

q: I'd have to see the full model to say more. If your model is simple enough, you could email me the model. a: I can share the model with you, but the problem is that it is not a simple model and has a lot of tiny details... I think it would be better to present the model to you in a meeting to make it clearer.

q: Are the out-of-sample noise terms drawn from the same distribution? If not, then anything can happen. You shouldn't necessarily expect that the out-of-sample simulation does worse than the in-sample lower bound. a: Yes, the in-sample and out-of-sample noise terms are drawn from the same distribution.

q: Is there something weird about your training scenarios? a: Hmm, I don't think so; they are drawn from normal distributions.

q: What happens if you do more out-of-sample simulations (e.g., 10,000)? a: I'm going to give it a try.

odow commented 3 years ago

Send me an email, or ask Bernardo for my WhatsApp and we can follow up offline.

odow commented 3 years ago

Closing because the out-of-sample data was not drawn from the same distribution as the training data.
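
For future readers, a minimal illustration of the failure mode (the numbers are taken from the snippet above and are purely illustrative): if training assumes one distribution and the out-of-sample generator draws from another, the mean simulated cost can legitimately fall below SDDP.calculate_bound(m) without indicating a bug.

using Distributions, Statistics
in_sample  = rand(Normal(0, 0.068), 10_000) .+ 0.1  # what training assumed (illustrative)
out_sample = rand(Normal(0, 0.72), 10_000) .+ 0.2   # what the simulation drew (illustrative)
println((mean(in_sample), std(in_sample)))
println((mean(out_sample), std(out_sample)))        # the mismatch explains the bound crossing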