odow / SDDP.jl

A JuMP extension for Stochastic Dual Dynamic Programming
https://sddp.dev

MSPFormat issues #603

Closed · odow closed this 1 year ago

odow commented 1 year ago

Semiconductor_Production

@bonnkleiford, the Semiconductor_Production problem includes this constraint:

{
  "name":"",
  "type":"LEQ",
  "lhs":[
    {"name":"v_000","stage":2,"coefficient":[7.0]},
    {"name":"v_001","stage":2,"coefficient":[8.0]},
    {"name":"v_010","stage":2,"coefficient":[8.0]},
    {"name":"v_011","stage":2,"coefficient":[5.0]},
    {"name":"v_020","stage":2,"coefficient":[9.0]},
    {"name":"v_021","stage":2,"coefficient":[5.0]},
    {"name":"v_030","stage":2,"coefficient":[9.0]},
    {"name":"v_031","stage":2,"coefficient":[8.0]},
    {"name":"v_040","stage":2,"coefficient":[9.0]},
    {"name":"v_041","stage":2,"coefficient":[5.0]},
    {"name":"v_050","stage":2,"coefficient":[9.0]},
    {"name":"v_051","stage":2,"coefficient":[5.0]},
    {"name":"v_060","stage":2,"coefficient":[6.0]},
    {"name":"v_061","stage":2,"coefficient":[7.0]},
    {"name":"v_070","stage":2,"coefficient":[6.0]},
    {"name":"v_071","stage":2,"coefficient":[8.0]},
    {"name":"v_080","stage":2,"coefficient":[6.0]},
    {"name":"v_081","stage":2,"coefficient":[9.0]},
    {"name":"v_090","stage":2,"coefficient":[8.0]},
    {"name":"v_091","stage":2,"coefficient":[9.0]},
    {"name":"x_0","stage":0,"coefficient":[-7.0]},
    {"name":"x_0","stage":1,"coefficient":[-7.0]}
  ],
  "rhs":[0.0]
}

It has variables from stage 0, 1, and 2. Am I to infer that the state variable is really a lag-two state? And so we need to create 2*N state variables?
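
If so, one workaround is to double the number of state variables. Here is a minimal sketch of the lag-two encoding with a scalar state; the variable names are hypothetical and HiGHS is just a stand-in solver:

using SDDP, HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    # Carry two copies of the state so stage t can see x_{t-1} and x_{t-2}.
    @variable(sp, x_lag1 >= 0, SDDP.State, initial_value = 0)  # holds x_{t-1}
    @variable(sp, x_lag2 >= 0, SDDP.State, initial_value = 0)  # holds x_{t-2}
    @variable(sp, x >= 0)  # the stage-t decision
    @constraint(sp, x_lag1.out == x)          # shift register: x_t becomes lag 1
    @constraint(sp, x_lag2.out == x_lag1.in)  # shift register: lag 1 becomes lag 2
    # An analogue of the constraint above, mixing stages t, t-1, and t-2:
    @constraint(sp, 7.0 * x - 7.0 * x_lag1.in - 7.0 * x_lag2.in <= 0)
    @stageobjective(sp, x)
end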

Fish_Selling

Also, what's the optimal bound for the fish problem? I find an optimal policy after a single iteration, which doesn't seem right...

bonnkleiford commented 1 year ago

SEMI-CONDUCTOR PRODUCTION PROBLEM

Sorry about this. I think I took the wrong model (which should be unbounded). I'm attaching the new files for this.

Also, here is the SDDP.jl formulation for this problem:

using SDDP, GLPK, Test, Random, Distributions
using LinearAlgebra

# INPUT
T = 4
I = 10
J = 10
K = 2
S = 100

alpha_0 =  [686, 784, 540, 641, 1073, 1388, 1727, 1469, 586, 515]
beta_0 = [174, 115, 92, 116, 93, 164, 190, 174, 190, 200]
c = [7,17,11,16,18,7,7,9,8,14]
d_0 = [607,943,732,1279,434,378,1964,430, 410, 525]

d_perturb =  [0.0422902245, 0.0549456137, 0.0868569685, 0.0950609064,
0.0538731273, 0.0917075818, 0.0673065114, 0.0594680277, 0.0544299191,
0.0782010312]
beta_perturb =  [0.0129739644, 0.063853852, 0.0925580104, 0.0766634092,
0.0953244752, 0.0563760149, 0.075759652, 0.0583249427, 0.0324810132,
0.0694020021]
alpha_perturb = [0.0638533975, 0.068050401, 0.0747693903, 0.0514849591,
0.0323470258, 0.0480910211, 0.0304004586, 0.0976094813, 0.0694752024,
0.0703992735, 0.0775236862]  # NB: 11 entries, but only the first I = 10 are used

a = rand(5:9, (I,J,K))

_generate(x_perturb, x_0, i) = round(Int, x_0[i] * rand(Normal(1,
x_perturb[i])))

support = map(1:T) do t
    return map(1:S) do s
        return (
            D = map(j -> _generate(d_perturb, d_0, j), 1:J),
            A = map(i -> _generate(alpha_perturb, alpha_0, i), 1:I),
            B = map(j -> _generate(beta_perturb, beta_0, j), 1:J),
        )
    end
end

Tau = 1

model = SDDP.LinearPolicyGraph(
    stages = T,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = GLPK.Optimizer
) do subproblem, t
    # Define the state variables.
    @variable(subproblem,
        0 <= X[i = 1:I],
        SDDP.State, initial_value = 0)

    # Define the control variables.
    @variables(subproblem, begin
        0 <= u[j=1:J]
        0 <= v[i=1:I, j=1:J, k=1:K]
        0 <= w[j=1:J]
        0 <= x[i=1:I]
        alpha[i=1:I] == alpha_0[i]
        beta[j=1:J] == beta_0[j]
        demand[j=1:J] == d_0[j]
    end)

    @constraints(subproblem, begin
        [i=1:I], X[i].out == X[i].in + x[i]
        [i=1:I], sum(sum(a[i,j,k]*v[i,j,k] for k in 1:K) for j in 1:J) <= c[i]*X[i].out
        [j=1:J], w[j] + u[j] >= demand[j]
        [j=1:J,k=1:K], sum(v[i,j,k] for i in 1:I) >= w[j]
        end)

    SDDP.parameterize(subproblem, support[t]) do ω
        @stageobjective(subproblem, ω.A' * x + ω.B' * u)
        JuMP.fix.(demand, ω.D)
    end
end
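
To train the model and check the bound, something like this should work (a sketch; the iteration limit is arbitrary):

SDDP.train(model; iteration_limit = 10)
println("Lower bound: ", SDDP.calculate_bound(model))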

FISH SELLING

The expected bound is 1226.571923 and the simulated bound is 1244.454533.



odow commented 1 year ago

The fish example is trivial then. A myopic policy is optimal?

julia> SDDP.train(fish; iteration_limit = 10)
-------------------------------------------------------------------
         SDDP.jl (c) Oscar Dowson and contributors, 2017-23
-------------------------------------------------------------------
problem
  nodes           : 4000
  state variables : 1
  scenarios       : 1.00000e+12
  existing cuts   : false
options
  solver          : serial mode
  risk measure    : SDDP.Expectation()
  sampling scheme : SDDP.InSampleMonteCarlo
subproblem structure
  VariableRef                             : [6, 6]
  VariableRef in MOI.LessThan{Float64}    : [1, 1]
  VariableRef in MOI.GreaterThan{Float64} : [5, 5]
  AffExpr in MOI.EqualTo{Float64}         : [1, 2]
numerical stability report
  matrix range     [1e+00, 1e+00]
  objective range  [1e+00, 4e+01]
  bounds range     [1e+06, 1e+06]
  rhs range        [1e+01, 3e+01]
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   1.068115e+03  1.226548e+03  1.471645e+01      4004   1
         2   1.228612e+03  1.226548e+03  1.744399e+01      8008   1
         3   1.211700e+03  1.226548e+03  1.986566e+01     12012   1
         4   1.235034e+03  1.226548e+03  2.211863e+01     16016   1
         5   1.254383e+03  1.226548e+03  2.455196e+01     20020   1
         6   1.604841e+03  1.226548e+03  2.686422e+01     24024   1
         7   1.020962e+03  1.226548e+03  2.930395e+01     28028   1
         8   1.236801e+03  1.226548e+03  3.167676e+01     32032   1
         9   1.484657e+03  1.226548e+03  3.414848e+01     36036   1
        10   1.172781e+03  1.226548e+03  3.660523e+01     40040   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 3.660523e+01
total solves   : 40040
best bound     :  1.226548e+03
simulation ci  :  1.251789e+03 ± 1.083015e+02
numeric issues : 0
-------------------------------------------------------------------

and the simulated bound is 1244.454533.

I don't think it makes sense to talk about simulated bounds unless you provide the scenarios. What is the confidence interval?
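
For reference, here is a sketch of how to compute one from simulations in SDDP.jl, assuming the trained model fish from above (500 replications is an arbitrary choice):

using Statistics

simulations = SDDP.simulate(fish, 500)
# Total cost of each simulated scenario.
objectives = [sum(stage[:stage_objective] for stage in sim) for sim in simulations]
μ, σ = mean(objectives), std(objectives)
println("95% CI: ", μ, " ± ", 1.96 * σ / sqrt(length(objectives)))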

bonnkleiford commented 1 year ago

Yes. This was actually a working example I was given when we were emailing some people. Maybe it's not wise to include it.

QUASAR only reports the standard error though, which is 29.536742.

But yeah, [almost] exactly the same expected bound.


odow commented 1 year ago

Maybe not wise to include it.

It'd be a nice test to see if a solver could detect the optimal myopic policy and terminate early.

But yeah, [almost] exactly the same expected bound.

The difference is likely a tolerance issue between Gurobi and HiGHS.

QUASAR only reports the standard error though which is 29.536742.

This is why you need to provide the validation scenarios and ask the solver to report objective and primal solutions for those scenarios: https://odow.github.io/StochOptFormat/#evaluating-the-policy

Otherwise, I want to know how many scenarios you simulated, how you sampled them, and what the true distribution of costs is. In most cases, the costs are not normally distributed, so a simple mean ± standard error is not sufficient to determine whether two policies are equivalent. It also isn't enough to decide whether two policies are comparable under some risk measure. Perhaps one solver uses some form of sampling to ensure that the tails of the distribution are more accurately sampled.
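
As a concrete sketch, SDDP.jl can replay fixed scenarios with SDDP.Historical, so two solvers are compared on identical sample paths. The (node, noise) pairs below are illustrative only:

# One replication that follows the given historical path exactly.
simulations = SDDP.simulate(
    model,
    1;
    sampling_scheme = SDDP.Historical([(1, 0.1), (2, 0.4), (3, 0.2)]),
)
cost = sum(stage[:stage_objective] for stage in simulations[1])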

bonnkleiford commented 1 year ago

It'd be a nice test to see if a solver could detect the optimal myopic policy and terminate early.

Actually, that's true. I'll bring this up with Nils.

Difference is likely a tolerance issue between Gurobi and HiGHS.

Ahhh yes. That's true.

QUASAR has a function to evaluate/simulate the policy given a number of samples. This is on my list of questions for Nils: how to generate the scenarios and serialize them as JSON for MSPFormat. This should definitely be possible. And yes, I agree about comparing policies under some risk measure as well.

I really appreciate all of this; it gives me a deeper appreciation of the implementation design of algorithms.


odow commented 1 year ago

The new file detected 0 state variables :cry:

julia> model = SDDP.MSPFormat.read_from_file("/Users/Oscar/Downloads/Semiconductor Problem/Semiconductor_Production"; bound = 1e7)
A policy graph with 400 nodes.
 Node indices: 153, ..., 306

julia> SDDP.set_optimizer(model, HiGHS.Optimizer)

julia> SDDP.train(model; iteration_limit = 10)
-------------------------------------------------------------------
         SDDP.jl (c) Oscar Dowson and contributors, 2017-23
-------------------------------------------------------------------
problem
  nodes           : 400
  state variables : 0
  scenarios       : 1.60694e+60
  existing cuts   : false
options
  solver          : serial mode
  risk measure    : SDDP.Expectation()
  sampling scheme : SDDP.InSampleMonteCarlo
subproblem structure
  VariableRef                             : [1, 241]
  VariableRef in MOI.LessThan{Float64}    : [1, 1]
  AffExpr in MOI.LessThan{Float64}        : [10, 10]
  VariableRef in MOI.GreaterThan{Float64} : [1, 241]
  AffExpr in MOI.GreaterThan{Float64}     : [30, 30]
numerical stability report
  matrix range     [1e+00, 2e+01]
  objective range  [1e+00, 2e+03]
  bounds range     [1e+07, 1e+07]
  rhs range        [3e+02, 2e+03]
-------------------------------------------------------------------
 iteration    simulation      bound        time (s)     solves  pid
-------------------------------------------------------------------
         1   3.397168e+06  3.470615e+06  2.059169e-01       500   1
         2   3.498124e+06  3.470615e+06  3.110609e-01      1000   1
         3   3.393959e+06  3.470615e+06  4.276550e-01      1500   1
         4   3.598805e+06  3.470615e+06  5.310380e-01      2000   1
         5   3.397168e+06  3.470615e+06  6.365368e-01      2500   1
         6   3.391796e+06  3.470615e+06  7.482240e-01      3000   1
         7   3.513097e+06  3.470615e+06  8.612199e-01      3500   1
         8   3.379063e+06  3.470615e+06  9.763970e-01      4000   1
         9   3.598805e+06  3.470615e+06  1.094071e+00      4500   1
        10   3.492770e+06  3.470615e+06  1.213675e+00      5000   1
-------------------------------------------------------------------
status         : iteration_limit
total time (s) : 1.213675e+00
total solves   : 5000
best bound     :  3.470615e+06
simulation ci  :  3.466076e+06 ± 5.346601e+04
numeric issues : 0
-------------------------------------------------------------------
odow commented 1 year ago

That semiconductor file isn't a multistage stochastic optimization problem because it has no state variables:

julia> import JSON

julia> data = JSON.parsefile("/Users/Oscar/Downloads/Semiconductor Problem/Semiconductor_Production.problem.json")
Dict{String, Any} with 5 entries:
  "name"        => "DecisionProblem"
  "variables"   => Any[Dict{String, Any}("name"=>"X_0", "obj"=>Any[0.0], "lb"=>Any[0.0], "ub"=>Any["…
  "constraints" => Any[Dict{String, Any}("name"=>"", "rhs"=>Any[0.0], "lhs"=>Any[Dict{String, Any}("…
  "version"     => "MSMLP 1.1"
  "maximize"    => false

julia> for (i, c) in enumerate(data["constraints"])
           stages_present = unique([term["stage"] for term in c["lhs"]])
           println("Constraint $i has variables from these stages: $stages_present")
       end
Constraint 1 has variables from these stages: [1]
Constraint 2 has variables from these stages: [1]
Constraint 3 has variables from these stages: [1]
Constraint 4 has variables from these stages: [1]
Constraint 5 has variables from these stages: [1]
Constraint 6 has variables from these stages: [1]
Constraint 7 has variables from these stages: [1]
Constraint 8 has variables from these stages: [1]
Constraint 9 has variables from these stages: [1]
Constraint 10 has variables from these stages: [1]
Constraint 11 has variables from these stages: [1]
Constraint 12 has variables from these stages: [1]
Constraint 13 has variables from these stages: [1]
Constraint 14 has variables from these stages: [1]
Constraint 15 has variables from these stages: [1]
Constraint 16 has variables from these stages: [1]
Constraint 17 has variables from these stages: [1]
Constraint 18 has variables from these stages: [1]
Constraint 19 has variables from these stages: [1]
Constraint 20 has variables from these stages: [1]
Constraint 21 has variables from these stages: [1]
Constraint 22 has variables from these stages: [1]
Constraint 23 has variables from these stages: [1]
Constraint 24 has variables from these stages: [1]
Constraint 25 has variables from these stages: [1]
Constraint 26 has variables from these stages: [1]
Constraint 27 has variables from these stages: [1]
Constraint 28 has variables from these stages: [1]
Constraint 29 has variables from these stages: [1]
Constraint 30 has variables from these stages: [1]
Constraint 31 has variables from these stages: [1]
Constraint 32 has variables from these stages: [1]
Constraint 33 has variables from these stages: [1]
Constraint 34 has variables from these stages: [1]
Constraint 35 has variables from these stages: [1]
Constraint 36 has variables from these stages: [1]
Constraint 37 has variables from these stages: [1]
Constraint 38 has variables from these stages: [1]
Constraint 39 has variables from these stages: [1]
Constraint 40 has variables from these stages: [1]
Constraint 41 has variables from these stages: [2]
Constraint 42 has variables from these stages: [2]
Constraint 43 has variables from these stages: [2]
Constraint 44 has variables from these stages: [2]
Constraint 45 has variables from these stages: [2]
Constraint 46 has variables from these stages: [2]
Constraint 47 has variables from these stages: [2]
Constraint 48 has variables from these stages: [2]
Constraint 49 has variables from these stages: [2]
Constraint 50 has variables from these stages: [2]
Constraint 51 has variables from these stages: [2]
Constraint 52 has variables from these stages: [2]
Constraint 53 has variables from these stages: [2]
Constraint 54 has variables from these stages: [2]
Constraint 55 has variables from these stages: [2]
Constraint 56 has variables from these stages: [2]
Constraint 57 has variables from these stages: [2]
Constraint 58 has variables from these stages: [2]
Constraint 59 has variables from these stages: [2]
Constraint 60 has variables from these stages: [2]
Constraint 61 has variables from these stages: [2]
Constraint 62 has variables from these stages: [2]
Constraint 63 has variables from these stages: [2]
Constraint 64 has variables from these stages: [2]
Constraint 65 has variables from these stages: [2]
Constraint 66 has variables from these stages: [2]
Constraint 67 has variables from these stages: [2]
Constraint 68 has variables from these stages: [2]
Constraint 69 has variables from these stages: [2]
Constraint 70 has variables from these stages: [2]
Constraint 71 has variables from these stages: [2]
Constraint 72 has variables from these stages: [2]
Constraint 73 has variables from these stages: [2]
Constraint 74 has variables from these stages: [2]
Constraint 75 has variables from these stages: [2]
Constraint 76 has variables from these stages: [2]
Constraint 77 has variables from these stages: [2]
Constraint 78 has variables from these stages: [2]
Constraint 79 has variables from these stages: [2]
Constraint 80 has variables from these stages: [2]
Constraint 81 has variables from these stages: [3]
Constraint 82 has variables from these stages: [3]
Constraint 83 has variables from these stages: [3]
Constraint 84 has variables from these stages: [3]
Constraint 85 has variables from these stages: [3]
Constraint 86 has variables from these stages: [3]
Constraint 87 has variables from these stages: [3]
Constraint 88 has variables from these stages: [3]
Constraint 89 has variables from these stages: [3]
Constraint 90 has variables from these stages: [3]
Constraint 91 has variables from these stages: [3]
Constraint 92 has variables from these stages: [3]
Constraint 93 has variables from these stages: [3]
Constraint 94 has variables from these stages: [3]
Constraint 95 has variables from these stages: [3]
Constraint 96 has variables from these stages: [3]
Constraint 97 has variables from these stages: [3]
Constraint 98 has variables from these stages: [3]
Constraint 99 has variables from these stages: [3]
Constraint 100 has variables from these stages: [3]
Constraint 101 has variables from these stages: [3]
Constraint 102 has variables from these stages: [3]
Constraint 103 has variables from these stages: [3]
Constraint 104 has variables from these stages: [3]
Constraint 105 has variables from these stages: [3]
Constraint 106 has variables from these stages: [3]
Constraint 107 has variables from these stages: [3]
Constraint 108 has variables from these stages: [3]
Constraint 109 has variables from these stages: [3]
Constraint 110 has variables from these stages: [3]
Constraint 111 has variables from these stages: [3]
Constraint 112 has variables from these stages: [3]
Constraint 113 has variables from these stages: [3]
Constraint 114 has variables from these stages: [3]
Constraint 115 has variables from these stages: [3]
Constraint 116 has variables from these stages: [3]
Constraint 117 has variables from these stages: [3]
Constraint 118 has variables from these stages: [3]
Constraint 119 has variables from these stages: [3]
Constraint 120 has variables from these stages: [3]

I really think this is why you need an explicit definition of the state variables in the format. Relying on detection makes it too easy for bugs to slip through.

odow commented 1 year ago

@bonnkleiford I'm about to tag a new release that fixes a couple of bugs: https://github.com/odow/SDDP.jl/pull/606.

After that, I'm not aware of any current issues with the MSPFormat reader, so I'll close this issue. Please open a new issue if you have any trouble with some of your models when testing.

bonnkleiford commented 1 year ago

Hi Oscar!

As I move forward with benchmarking the difficult problems with SDDP.jl (after finishing MSPPy and QUASAR), I have encountered a new error:

[Screenshot of the error message, 2023-09-12 at 22:03:48]

I am not sure how to deal with it for now.

Thanks.

odow commented 1 year ago

Do you have a reproducible example? What is your train call? Are you setting log_frequency = 0?

odow commented 1 year ago

If you want to turn off printing, use print_level instead.
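
For example, this trains silently (a sketch, reusing the model from your script):

SDDP.train(model; time_limit = 30, print_level = 0)  # print_level = 0 disables all output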

bonnkleiford commented 1 year ago

Hi!

This was my script:

model = SDDP.MSPFormat.read_from_file("(09_2)_100")
SDDP.set_optimizer(model, Gurobi.Optimizer)
SDDP.train(model; time_limit = 30, print_level = 1, log_frequency = 0)

These are the files (they can also be found in the [initial] MSPLib repo): (09_2)_100.tar.gz

odow commented 1 year ago

Don't set log_frequency=0. It needs to be at least 1. I'll add a better error message.

odow commented 1 year ago

Just do

model = SDDP.MSPFormat.read_from_file("(09_2)_100")
SDDP.set_optimizer(model, Gurobi.Optimizer)
SDDP.train(model; time_limit = 30)

When in doubt, leave the options alone.

bonnkleiford commented 1 year ago

Nice! This worked perfectly!

Thanks a lot!