Closing this since the big issues are done, and the few minor ones aren't a priority.
@odow, I'm having some issues with high-dimensional problems (many state variables). Looking around, I found this issue, where you mention Asamov & Powell's paper: https://castle.princeton.edu/wp-content/uploads/2020/11/Asamov-Regularized-decomposition-of-high-dimensional-multistage-stochastic-programs-with-Markov-uncertainty.pdf.
Any updates on the Asamov & Powell quadratic regularization implementation?
> Any updates on the Asamov & Powell quadratic regularization implementation?
Nope. We could discuss offline, perhaps. I tend to think that regularization is dumb: it doesn't make sense to regularize toward a previous sample path if the uncertainty is different. It only makes sense if the state variables have very low variance.
SDDP.jl is just not designed for high-dimensional problems.
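For context, the regularization being discussed adds a quadratic proximal term to the forward-pass stage problem, pulling the decision toward an incumbent state from earlier iterations. Roughly (my notation, not the paper's exact formulation):

```latex
\min_{x_t}\; c_t^\top x_t + \mathcal{V}_{t+1}(x_t) + \frac{\rho_t}{2}\,\lVert x_t - \bar{x}_t \rVert_2^2
```

where `\mathcal{V}_{t+1}` is the cut approximation of the cost-to-go, `\bar{x}_t` is the incumbent state, and `\rho_t` is a penalty weight. The objection above is that `\bar{x}_t` was generated under a different sample path, so anchoring to it mainly helps when the state trajectory has low variance.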
Features
- There is an interesting idea for a train-validation-test stopping rule: train the policy and every so often simulate that policy on a validation dataset. Stop once the policy fails to improve on the validation dataset. Discussion in #3. (A rough sketch of this idea appears further below.)
- Implement iteration schemes:
  - SDDP (N forward, N backward)
  - scenario incrementation (1, 2, ..., N forward)
  - CUPPS
  - Dynamic sequencing protocol https://d-nb.info/1046905090/34 (Wolf's thesis actually has a lot of great stuff aimed at L-shaped nested Benders)

  (Covered by #116)
- `@state` macro: I had a lot of issues with this. There seems to be some local scoping/macro hygiene issue passing local variables in Kokako scope through to JuMP. Related issues: https://github.com/JuliaOpt/JuMP.jl/pull/1497, https://github.com/JuliaOpt/JuMP.jl/pull/1517. I went the JuMPExtension route. I owe Benoit big-time.

Performance improvements
- `train` option to enable/disable.
- `log10(maximum(abs.(coefficients))) > 7`
- `-6 < log10(minimum(abs.(coefficients))) < -2`
- `log10(maximum(abs.(coefficients)) - minimum(abs.(coefficients))) > 8`
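The three conditions above read like thresholds for flagging badly scaled coefficients. A minimal sketch of such a check; the function name and examples are hypothetical, not SDDP.jl API:

```julia
# Hypothetical helper (not SDDP.jl API): flag a coefficient vector whose
# magnitudes fall in the suspicious ranges listed above.
function badly_scaled(coefficients::AbstractVector{<:Real})
    mags = abs.(coefficients)
    return log10(maximum(mags)) > 7 ||
           -6 < log10(minimum(mags)) < -2 ||
           log10(maximum(mags) - minimum(mags)) > 8
end

badly_scaled([1e-5, 2.0, 3.0])  # true: smallest magnitude lies in (1e-6, 1e-2)
badly_scaled([1.0, 5.0, 10.0])  # false: magnitudes are well scaled
```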
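And a rough sketch of the train-validation-test stopping rule mentioned under Features. Everything here (`train_step!`, `validate`, the patience logic) is hypothetical glue, not a Kokako/SDDP.jl interface:

```julia
# Hypothetical early-stopping loop: `train_step!(model)` runs a few SDDP
# iterations; `validate(model)` simulates the current policy on a fixed
# validation set of sample paths and returns the mean objective.
# Stop once the validation estimate fails to improve `patience` rounds in a row.
function train_with_validation!(model; train_step!, validate, patience = 3, max_rounds = 100)
    best, stall = Inf, 0                 # assumes a minimization problem
    for _ in 1:max_rounds
        train_step!(model)
        estimate = validate(model)
        if estimate < best - 1e-6        # improved on the validation set
            best, stall = estimate, 0
        else
            stall += 1
            stall >= patience && break
        end
    end
    return best
end
```

With the current SDDP.jl API, `train_step!` could wrap `SDDP.train(model; iteration_limit = k)` and `validate` could average objectives from `SDDP.simulate`, but the details depend on the version.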
Public facing
- Logging.