odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

Ideas to improve multistage stochastic integer programs #246

Closed: odow closed this issue 1 year ago

odow commented 5 years ago

Motivation

The implementation of SDDiP in SDDP.jl is bad. If you use it, you will probably run into convergence issues! No need to open an issue; I'm aware of the problem. (If you have challenging multistage stochastic integer problems, please send them to me!)

At this point it isn't apparent whether SDDiP is a bad algorithm or whether it is just poorly implemented in SDDP.jl.

My guess is that it is a bad algorithm that almost results in complete enumeration for most problems, but that in some cases we can do okay if we implement the following algorithmic improvements.

Clever switching between duality handlers

This should mean it can be the default in all cases: LP sees no performance loss, MIP can only be better, and we don't waste time on the strengthened solve if the problem is naturally integer, or on the Lagrangian solve if it takes too long.
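
As a rough sketch of the kind of switching I have in mind (the Bool flags and the pick_handler function are made up for illustration, not a committed design):

using SDDP

# Hypothetical switching logic between the three duality handlers.
# The three flags are made-up inputs for illustration.
function pick_handler(has_integers::Bool, naturally_integer::Bool, lagrangian_too_slow::Bool)
    if !has_integers || naturally_integer
        # The LP dual is already as good as it gets; skip the extra work.
        return SDDP.ContinuousConicDuality()
    elseif lagrangian_too_slow
        # One extra MIP solve to strengthen the intercept, but no cutting-plane loop.
        return SDDP.StrengthenedConicDuality()
    else
        return SDDP.LagrangianDuality()
    end
end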

Bounded states

We need bounds on x.in to ensure the dual has a bounded solution. But it isn't obvious what bounds to use or how to impose them.
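
For what it's worth, a minimal sketch of declaring a bounded state (the bounds of 0 and 100 and the HiGHS solver are arbitrary choices for illustration; the bounds attach to x, i.e. x.out, and how to carry them over to x.in in the backward pass is exactly what isn't obvious):

using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # Explicit bounds on the state variable; these apply to x.out.
    @variable(sp, 0 <= x <= 100, SDDP.State, initial_value = 50)
    @variable(sp, 0 <= u <= 100)
    @constraint(sp, x.out == x.in - u)
    @stageobjective(sp, t * u)
end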

Advanced Lagrangian solves

The current implementation of the cutting plane solve is very naive. From Jim's slides: https://www.ima.umn.edu/materials/2015-2016/ND8.1-12.16/25411/Luedtke-mip-decomp.pdf

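For reference, a bare-bones sketch of the kind of cutting-plane (Kelley) loop I mean for maximizing the Lagrangian dual; solve_lagrangian (which should return the dual value and a supergradient), the multiplier bounds, and the HiGHS choice are all placeholders, not SDDP.jl internals:

using JuMP, HiGHS

function kelley(solve_lagrangian::Function, n::Int; iteration_limit = 100, tol = 1e-6)
    # Outer approximation of the concave dual function L(λ).
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, -1e6 <= λ[1:n] <= 1e6)  # crude bounds to keep the LP bounded
    @variable(model, t <= 1e9)
    @objective(model, Max, t)
    best = -Inf
    for _ in 1:iteration_limit
        optimize!(model)
        λ_k = value.(λ)
        L_k, g_k = solve_lagrangian(λ_k)  # dual value and supergradient at λ_k
        best = max(best, L_k)
        # The LP objective over-estimates max L(λ); stop when the gap closes.
        if objective_value(model) - best <= tol
            break
        end
        @constraint(model, t <= L_k + sum(g_k[i] * (λ[i] - λ_k[i]) for i in 1:n))
    end
    return best
end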

Other

Done

Write a theory tutorial

I should write a tutorial for the documentation.

I find the way SDDiP is presented in the literature confusing.

We have these subproblems:

[image: subproblem formulation]

and we're trying to find a feasible dual solution and corresponding dual objective value for the slope and intercept of the cut respectively.
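
A rough sketch of what I mean, in generic notation of my own (C is the stage cost, θ the cost-to-go approximation, X the feasible region; this is not the notation from the docs):

\begin{aligned}
Q(\bar{x}) &= \min\{\, C(z, u, x') + \theta(x') \;:\; z = \bar{x}\ [\lambda],\ (z, u, x') \in X \,\}, \\
L(\lambda)  &= \min\{\, C(z, u, x') + \theta(x') - \lambda^\top z \;:\; (z, u, x') \in X \,\}, \\
Q(x) &\ge \lambda^\top x + L(\lambda) \quad \text{for all } x \text{ and any } \lambda,
\end{aligned}

where the last line follows from weak duality and the fact that L(λ) does not depend on the incoming state. So any feasible dual solution λ gives the slope of the cut, and the corresponding dual objective value L(λ) gives the intercept.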

You should probably be able to find a feasible solution with the default ContinuousConicDuality option as well. I changed how we handle integer variables in the forward pass during training.

odow commented 3 years ago

@haoxiangyang89 #265 had some of the ideas we discussed the other day.

odow commented 3 years ago

@guyichen09 (cc @mortondp) a use-case for Bayesian Optimization/multi-armed bandit stuff:

At each iteration of SDDP.jl, we have three choices for our duality handler:

  • ContinuousConicDuality (a.k.a. continuous Benders)
  • StrengthenedConicDuality (a.k.a. strengthened Benders)
  • LagrangianDuality (a.k.a. SDDiP)

Each duality handler will lead to cuts that improve our lower bound by some amount, but take a different amount of time. You could normalize each iteration to a reward function of (change in objective / time taken). This is stochastic, and it changes over time.

I want to write a BanditDuality that intelligently picks which handler to run over the iterations. Seems like this is pretty do-able?

guyichen09 commented 3 years ago

Hi Oscar,

I have an idea for this BanditDuality. I think a multi-armed bandit algorithm would be a good method for this: we could use a modified version of Thompson Sampling, with the reward function adjusted to take into account both the change in objective and the time taken. What do you think?

Best, Guyi

odow commented 3 years ago

Yeah my suggestion for the reward would be

function reward(log::Vector{Log})
    # Improvement in the deterministic bound since the previous iteration.
    d_bound = abs(log[end].bound - log[end-1].bound)
    # Wall-clock time spent on the most recent iteration.
    dt = log[end].time - log[end-1].time
    return d_bound / dt
end

And the log is https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/algorithm.jl#L938

There's a little bit of plumbing to do, but not too much work.

We should change the call prepare_backward_pass(model, options.duality_handler) at https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/algorithm.jl#L472 to prepare_backward_pass(model, options.duality_handler, options.log), which means updating the methods defined here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/duality_handlers.jl#L64-L71 here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/duality_handlers.jl#L131-L136 and here: https://github.com/odow/SDDP.jl/blob/ad7a4183895a81afda8ed303389d7d3c92bd58b5/src/plugins/headers.jl#L185-L192

Then we can write a new method for BanditDuality.

It should probably start with a minimum of, say, 10 iterations of ContinuousConicDuality to build a prior. We could then do a hack and use initial priors of, say, 2/3 the variance for StrengthenedConicDuality and 1/3 the variance for LagrangianDuality (i.e., just mean-revert the realizations by some amount).
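
For concreteness, a rough sketch of what the selection could look like, assuming we model each handler's reward as a Normal with the empirical mean and standard deviation (the Arm struct and choose_arm are hypothetical names, not what will necessarily end up in SDDP.jl):

using Statistics

# Hypothetical sketch: one arm per duality handler (ContinuousConicDuality,
# StrengthenedConicDuality, LagrangianDuality).
mutable struct Arm
    handler::Any
    rewards::Vector{Float64}
end

# Thompson-style selection: sample a plausible mean reward from each arm
# (approximated by a Normal with the empirical mean and standard deviation)
# and pick the arm with the largest sample. Unplayed arms are forced first.
function choose_arm(arms::Vector{Arm})
    scores = map(arms) do arm
        isempty(arm.rewards) && return Inf
        μ = mean(arm.rewards)
        σ = std(arm.rewards; corrected = false) + 1e-6
        return μ + σ * randn()
    end
    return argmax(scores)
end

# After an iteration, record (bound improvement / time taken) for the arm used.
update!(arm::Arm, d_bound::Float64, dt::Float64) = push!(arm.rewards, d_bound / dt)

Seeding the StrengthenedConicDuality and LagrangianDuality arms with scaled copies of the early ContinuousConicDuality rewards would be one way to implement the prior hack above.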

odow commented 3 years ago

We could use this Bayesian learning in a few other contexts:

Trajectory depth in cyclic graphs

The default sampling scheme for the forward pass has some tunable parameters: https://github.com/odow/SDDP.jl/blob/493b48e256a88cb1cc898497d01a4d3245d3ffd0/src/plugins/sampling_schemes.jl#L38-L43

In particular, for cyclic graphs, you might want to learn between

A = InSampleMonteCarlo(
    max_depth = length(model.nodes),
    terminate_on_dummy_leaf = false,
)

B = InSampleMonteCarlo(
    max_depth = 5 * length(model.nodes),
    terminate_on_dummy_leaf = false,
)

C = InSampleMonteCarlo()
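
With the hypothetical Arm/choose_arm sketch from my earlier comment, learning between these would look something like:

# Treat the three sampling schemes as bandit arms (reusing the hypothetical
# sketch above; the scheme sits in the generic `handler` field only because
# we are reusing that struct).
arms = [Arm(A, Float64[]), Arm(B, Float64[]), Arm(C, Float64[])]
sampling_scheme = arms[choose_arm(arms)].handler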

Revisiting forward passes

Rather than just looping through forward passes: https://github.com/odow/SDDP.jl/blob/493b48e256a88cb1cc898497d01a4d3245d3ffd0/src/plugins/forward_passes.jl#L136-L158 we should learn which ones to revisit according to some score: look at the improvement achieved by each saved forward pass, as well as the expected improvement from sampling a new trajectory, and then pick whichever is best, as sketched below.
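
A minimal sketch of that selection rule (SavedPass, last_delta, and expected_new_delta are made-up names, not SDDP.jl's forward-pass API):

# Hypothetical: store the bound improvement each saved forward pass produced
# the last time it was used, and compare against the expected improvement of
# sampling a fresh trajectory.
struct SavedPass
    trajectory::Any
    last_delta::Float64
end

function pick_pass(saved::Vector{SavedPass}, expected_new_delta::Float64)
    best_i, best_delta = 0, expected_new_delta  # 0 means "sample a new trajectory"
    for (i, p) in enumerate(saved)
        if p.last_delta > best_delta
            best_i, best_delta = i, p.last_delta
        end
    end
    return best_i
end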

I think this is both computationally efficient and helpful from a marketing perspective, because we can claim that machine learning is being used to drive decision making within the algorithm.

odow commented 3 years ago

@guyichen09 here's a potentially open question: is there any work on learning when the reward is non-stationary?

In our case, we know that the reward for each arm will tend towards 0 as it gets pulled. The history is also action-dependent, which violates the usual assumption that the reward distribution is independent of the algorithm's actions.

Some papers:

mortondp commented 3 years ago

I’m with you. Good idea.

When Daniel was looking at DRO / SDDP we looked at sampling forward paths according to the nominal distribution or according to the DRO distribution. In theory, the latter can get you in trouble with respect to not converging due to zero-probability branches. We did a convex combo / mixture. Lower bounds were significantly tighter when putting enough weight on DRO paths, meaning that I think your idea would make sense, at least according to some metrics. We also looked at out-of-sample quality of the resulting policy. That difference was less dramatic, but there may have been something there, too. Can’t remember for sure.

Dave

guyichen09 commented 3 years ago

Hi Oscar,

I am not sure if there is any work on non-stationary rewards. During my internship, I found an algorithm that uses Thompson Sampling for the combinatorial multi-armed bandit problem. In this algorithm, the reward is deterministic within each pull of an iteration; however, from one iteration of the overall algorithm to the next, the reward changes and tends towards zero. They use a Bernoulli random variable to account for that. The specific algorithm is attached below. [image: Screen Shot 2021-09-24 at 11.16.09 AM.png]

Lines 9-10 are where they deal with the reward.

I think this is a possible option for what we are looking for. I will also do more research on this topic.

Best,

Guyi

odow commented 2 years ago

A few updates for this:

https://github.com/odow/SDDP.jl/blob/23fcaa9085e6ec43e1e0423b4e64c078951e8101/src/plugins/duality_handlers.jl#L310-L412

https://github.com/odow/SDDP.jl/blob/23fcaa9085e6ec43e1e0423b4e64c078951e8101/src/plugins/local_improvement_search.jl#L23-L130

odow commented 1 year ago

Closing because I think this is a dead end. Lagrangian duality just doesn't improve things enough to justify its cost.

I've seen a few papers recently claiming to "do SDDiP" but they have mixed-integer-continuous state and control variables, and then sneakily say that "we just use the continuous dual, it might give a worse bound but it works well." And it does. So we don't need this.