-
I went through some tutorials but I'm still confused.
I hope you can help me solve my problem @odow :
I will simplify it as follows:
grid_energy(t) = Load(t) - PV_energy(t) + Cb * u(t)
I …
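A minimal sketch of how that energy balance might be written in an SDDP.jl subproblem; all data, bounds, and the battery-style state below are placeholders rather than the actual model:
```julia
using SDDP, HiGHS

# Placeholder data for a 3-stage horizon (not from the issue).
Load = [10.0, 12.0, 8.0]
PV_energy = [3.0, 5.0, 4.0]
Cb = 2.0

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # Placeholder battery-style state driven by the control u(t).
    @variable(sp, 0 <= soc <= 10, SDDP.State, initial_value = 0.0)
    @variable(sp, 0 <= u <= 1)
    @variable(sp, grid_energy >= 0)
    # The energy balance from the question.
    @constraint(sp, grid_energy == Load[t] - PV_energy[t] + Cb * u)
    @constraint(sp, soc.out == soc.in + Cb * u)
    @stageobjective(sp, grid_energy)
end

SDDP.train(model; iteration_limit = 10)
```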
-
Hi & thank you for this package!
I'd like to use it to solve a very simple (routine) problem in economics.
![image](https://user-images.githubusercontent.com/7883904/117059196-872c8300-aced-11eb…
-
Hi again, Oscar!
I have another query, but I wanted to open a separate issue since other users might also be interested in it.
Anyway, I am currently modelling a certain problem where I have a control va…
-
There are two uses for this:
## Revisit old forward passes
In the serial world, we could run a few forward-backward iterations, then stop and perform a single backward pass, adding a cut at every…
-
Hello Oscar,
One question: is there any way to print a state variable (`var.in`) while `SDDP.train` is running?
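Not a way to print during training itself, but one common pattern is to record the state along simulated trajectories after `SDDP.train` and print the incoming values; a sketch, where the state name `:x` is a placeholder:
```julia
# After SDDP.train(model, ...), record the state variable :x along simulated trajectories.
simulations = SDDP.simulate(model, 5, [:x])
for (rep, trajectory) in enumerate(simulations)
    for stage in trajectory
        # Each recorded state is an SDDP.State with incoming (.in) and outgoing (.out) values.
        println("replication $rep, node $(stage[:node_index]): x.in = $(stage[:x].in)")
    end
end
```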
-
Hello!
I have a (relatively large) problem with Markovian uncertainty. I have the conditional probabilities per stage in a JSON file. I tried writing a parser, but MarkovianGraph won't read it because…
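One possible cause is the element type: `JSON.parsefile` returns nested `Vector{Any}`, while `SDDP.MarkovianGraph` expects a `Vector{Matrix{Float64}}`. A sketch of the conversion, assuming the file stores one transition matrix per stage as nested arrays (the file name and layout are assumptions):
```julia
using JSON, SDDP

# Assumed layout: one matrix per stage, e.g. [[[1.0]], [[0.5, 0.5]], [[0.8, 0.2], [0.3, 0.7]]].
raw = JSON.parsefile("transitions.json")

# Convert nested Vector{Any} into concrete Matrix{Float64} per stage.
transition_matrices = [
    reduce(vcat, [permutedims(Float64.(row)) for row in stage])
    for stage in raw
]

graph = SDDP.MarkovianGraph(transition_matrices)
```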
-
Hi Odow!
I'm working on a minimization problem, risk-neutral, with stagewise independence and one random variable, using your library.
But I'm finding something weird.
After I train the SDDP with a bound …
-
Hi everyone, thank you for your work on this solver interface.
According to the MathOptInterface documentation, all optimizers should be verbose by default:
> Every optimizer should have verbosity o…
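In case it is useful while this is discussed, the solver-independent way to toggle output through MOI/JuMP is the `MOI.Silent` attribute; a minimal sketch (HiGHS is only a stand-in solver, not the interface in question):
```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
# MOI.Silent() is the solver-independent switch for output.
set_silent(model)    # equivalent to MOI.set(model, MOI.Silent(), true)
unset_silent(model)  # fall back to the solver's own default verbosity
```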
-
A specification like this fails.
```julia
model = SDDP.MarkovianPolicyGraph(
    transition_matrices = problem.ext["MARKOV_TRANSITION"],
    sense = :Min,
    lower_bound = 0.0…
```
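For reference, a minimal `SDDP.MarkovianPolicyGraph` specification along these lines; the solver, the matrices, and the toy subproblem are placeholders, and note that the first transition matrix needs a single row (the transition out of the root node):
```julia
using SDDP, HiGHS

model = SDDP.MarkovianPolicyGraph(
    # One matrix per stage; the first must have a single row.
    transition_matrices = [
        ones(1, 1),
        [0.5 0.5],
        [0.8 0.2; 0.3 0.7],
    ],
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, node
    t, markov_state = node
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 1.0)
    @constraint(subproblem, x.out <= x.in)
    # Placeholder stage cost that depends on the Markov state index.
    @stageobjective(subproblem, markov_state * x.out)
end
```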
-
Thank you for sharing your work.
Does this library support min-max problems? In other words, can I have a max function inside the objective that I am trying to minimize?
Is SDDP able to s…
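If the max is over a finite set of linear expressions (rather than a worst case over the uncertainty), the standard epigraph reformulation applies: introduce an auxiliary variable that upper-bounds every term of the max and minimize that variable. A minimal JuMP sketch with made-up data; the same trick can be used inside each stage subproblem:
```julia
using JuMP, HiGHS

# Minimize max(3x1 + x2, x1 + 4x2) via the epigraph reformulation: min t s.t. t >= each term.
model = Model(HiGHS.Optimizer)
@variable(model, x[1:2] >= 0)
@variable(model, t)
@constraint(model, x[1] + x[2] == 1)
@constraint(model, t >= 3x[1] + x[2])   # first term of the max
@constraint(model, t >= x[1] + 4x[2])   # second term of the max
@objective(model, Min, t)
optimize!(model)
```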