Closed dannyopts closed 4 months ago
Having a poke around in https://github.com/odow/SDDP.jl/blob/496c3c30d66c2961485946095f5f0e2015250db2/src/plugins/bellman_functions.jl#L135, but not having fully parsed it yet, it seems that a cut is deleted unless it provides the best lower bound at at least one of the sampled states?
But if my thinking is correct, the fact that a cut does not provide the tightest bound at the sampled points doesn't mean it provides no lower bound at other points in the domain, which could mean a previously infeasible point becomes feasible (as we have seen in the case above).
This seems fine, but should the comment in the documentation be caveated to include the fact that the bound can decrease due to cut deletion?
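To check my understanding, here is a minimal self-contained sketch of that rule (this is my simplification, not SDDP.jl's actual code; the cut representation and the selection rule are made up for illustration):

```julia
# Toy one-dimensional cuts of the form θ >= intercept + slope * x.
evaluate(cut, x) = cut.intercept + cut.slope * x

cuts = [
    (intercept = 0.0, slope = 1.0),   # binding at the sampled states below
    (intercept = 2.0, slope = -1.0),  # never binding at the sampled states
]
sampled_states = [2.0, 3.0]

# Keep a cut only if it attains the maximum over all stored cuts at
# at least one sampled state (my reading of the pruning rule).
is_active(cut) = any(
    x -> evaluate(cut, x) >= maximum(c -> evaluate(c, x), cuts),
    sampled_states,
)
kept = filter(is_active, cuts)

# The second cut is pruned, yet at x = 0.0 it gave the bound 2.0 while the
# surviving cut gives only 0.0, so the approximation there gets weaker.
```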
Yes, as you figured out, we implement a form of cut pruning that, in rare cases (particularly with cyclic graphs), can result in the lower bound being non-monotonic.
You can pass `SDDP.train(model; cut_deletion_minimum = 1_000)` (or some other large number) to disable this.
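For example (a sketch: `model` is your existing policy graph, and the keyword values here are arbitrary):

```julia
# Disable cut pruning by setting the deletion threshold higher than the
# number of cuts that will ever be accumulated during training.
SDDP.train(
    model;
    iteration_limit = 100,         # arbitrary; use your usual stopping rule
    cut_deletion_minimum = 1_000,  # any number larger than the cut count
)
```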
https://github.com/odow/SDDP.jl/blob/496c3c30d66c2961485946095f5f0e2015250db2/src/algorithm.jl#L906
> the documentation should be caveated to include the fact the bound can decrease due to cut deletion?
Okay :smile:
Thanks!
Hi,
I have noticed that the bound in the iteration log is sometimes not monotonically increasing, although the docs state "bound: this is a lower bound (upper if maximizing) for the value of the optimal policy. This should be monotonically improving (increasing if minimizing, decreasing if maximizing)." (https://sddp.dev/stable/tutorial/first_steps/). I have attached an iteration log at the bottom of this issue which demonstrates this.
The policy graph I am using is cyclic, and the bound is mostly monotonically increasing, but not always.
Does the bound not being monotonic suggest numerical errors, or that I have somehow violated one of the assumptions of the model?
I have generated the LP files for the first two iterations (attached below), and it seems that the cut made in the first iteration is subsequently removed. I don't understand the logic for cut deletion, but it seems strange to me that a cut would be deleted between iterations 1 and 2, particularly when the new one doesn't dominate the previous one.
example iteration log:
iteration simulation bound time (s) solves pid
LP file for single child of root node after iteration 1:
LP file for single child of root node after iteration 2:
Just in case I have done something stupid, I have included the steps I went through to get these files below.
The policy graph is generated as
1. Construct the model.
2. Train for a single iteration.
3. Write out the subproblem for the single child of the root node.
4. Train for another iteration.
5. Write out the subproblem for the single child of the root node again.
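As a sketch, the steps above looked roughly like this (the model construction is elided, and I'm assuming the single child of the root has node index `1`; the `add_to_existing_cuts` keyword is what I used to continue training without resetting):

```julia
using SDDP

# model = SDDP.PolicyGraph(...)  # cyclic policy graph constructed as above

# Train for a single iteration, then dump the subproblem to an LP file.
SDDP.train(model; iteration_limit = 1)
SDDP.write_subproblem_to_file(model[1], "iteration_1.lp")

# Continue training for one more iteration, keeping the existing cuts,
# then dump the subproblem again for comparison.
SDDP.train(model; iteration_limit = 1, add_to_existing_cuts = true)
SDDP.write_subproblem_to_file(model[1], "iteration_2.lp")
```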
Thank you so much for all your hard work with this awesome library and also for investing so much time into writing some brilliant documentation 😍