kibaekkim / DualDecomposition.jl

An algorithmic framework for parallel dual decomposition methods in Julia
MIT License

How to Print out value of optimization variables? #54

Closed anhphuong-ngo closed 8 months ago

anhphuong-ngo commented 8 months ago

Hi authors,

I tried to print out the results of my model after the optimization completed. I used the standard JuMP form; for example, I have a variable z in the 1st stage and a variable x in the 2nd stage (z[1:J] binary, x[1:I] >= 0). The standard JuMP syntax should be:

x = model[2, :x]
for i in 1:I
    println(value(x[i]))
end

z = model[1, :z]
for j in 1:J
    println(value(z[j]))
end

However, it ran into the following error:

┌ Warning: The model has been modified since the last call to `optimize!` (or `optimize!` has not been called yet). If you are iteratively querying solution information and modifying a model, query all the results first, then modify the model.
└ @ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\optimizer_interface.jl:695
ERROR: OptimizeNotCalled()
Stacktrace:
 [1] get(model::Model, attr::MathOptInterface.VariablePrimal, v::VariableRef)
   @ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\optimizer_interface.jl:701
 [2] value(v::VariableRef; result::Int64)
   @ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\variables.jl:1703
 [3] value(v::VariableRef)
   @ JuMP C:\Users\ango1\.julia\packages\JuMP\027Gt\src\variables.jl:1702
 [4] top-level scope
   @ c:\Users\ango1\MyDual\Testing09.jl:160

It would be great if you could show how to print out the values of the decision variables. Thank you!

hideakiv commented 8 months ago

Hi @anhphuong-ngo, you would have to obtain the primal solutions using DD.primal_solution, fix the corresponding variables in the subproblems, and re-solve.

anhphuong-ngo commented 8 months ago

Thank you for your reply, @hideakiv. Could you please give more information about your suggestion, such as an example of how to fix the variables to the primal solutions from DD.primal_solution, or the syntax you mentioned?

hideakiv commented 8 months ago

You might have to make some adjustments if you are parallelizing the program, but here is what you can do.

# Fix each coupling (first-stage) variable to the primal solution reported by DD.
for variables in algo.block_model.coupling_variables
    JuMP.fix(variables.ref, DD.primal_solution(algo)[variables.key.coupling_id], force=true)
end
# Re-solve every block subproblem with the coupling variables fixed.
for (id, m) in DD.block_model(algo)
    JuMP.optimize!(m)
end
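
After that re-solve, the subproblem values can be queried with the usual JuMP accessors. A minimal sketch, assuming each block model registers the variables :z and :x as in your original post (adjust the names and containers to however your block models are built):

for (id, m) in DD.block_model(algo)
    println("block ", id)
    for zvar in m[:z]    # first-stage copy, now fixed to the DD primal solution
        println(JuMP.name(zvar), " = ", JuMP.value(zvar))
    end
    for xvar in m[:x]    # this block's second-stage decisions
        println(JuMP.name(xvar), " = ", JuMP.value(xvar))
    end
end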

anhphuong-ngo commented 8 months ago

@hideakiv Thank you very much for your help. With your syntax, I can print out the values of the variables. May I ask a follow-up question? I noticed that the results differ before and after fixing the variables. For example (nscenarios = 2):

Before fixing:

DD.primal_objective_value(algo) = 444.2422965999999
DD.dual_objective_value(algo) = 444.24488649

After fixing: m = 1 (First subproblem, also 1st scenario):

Optimal solution found (tolerance 1.00e-04)
Best objective 2.221211483000e+02, best bound 2.221042028337e+02, gap 0.0076%

m = 2 (Second subproblem, also 2nd scenario):

Optimal solution found (tolerance 1.00e-04)
Best objective 2.221211483000e+02, best bound 2.221042028337e+02, gap 0.0076%

Question 1: I am not clear on the meaning of DD.primal_objective_value(algo) and DD.dual_objective_value(algo). Having looked at the Lagrangian dual, it seems to be a summation of the objective values of the subproblems, doesn't it? Or is it the objective value of the master problem?

Question 2: Is push!(coupling_variables, DD.CouplingVariableRef(...)) the way to inform DD which variables are in the first stage?

Can you please kindly share your thoughts about this? Thank you very much for your time and support.

hideakiv commented 8 months ago

In a two-stage stochastic program with scenario decomposition, we split the first-stage variables into scenario copies (z = z1 = z2) and solve each scenario separately.
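
In symbols, a rough sketch for the two-scenario case (p_s, c, q_s, and K_s stand in for your scenario probabilities, first- and second-stage cost vectors, and the feasible set of scenario s): relaxing the nonanticipativity constraint z_1 = z_2 with multipliers λ_s satisfying Σ_s λ_s = 0 gives the Lagrangian dual bound

$$ D(\lambda) = \sum_{s=1}^{2} \min_{(z_s,\, x_s) \in K_s} \left[ p_s \left( c^\top z_s + q_s^\top x_s \right) + \lambda_s^\top z_s \right] \le \text{optimal objective value}. $$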

  1. DD.dual_objective_value(algo) gives a lower bound on the objective; it is the sum of the subproblem objectives adjusted by the Lagrangian duals.
  2. That would be correct; a sketch of that registration step follows.
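
For reference, a minimal sketch of that registration step, loosely following the package's README example. The constructor argument order (block id, coupling index, variable reference) and the helper DD.set_coupling_variables! are as I recall from that example and worth double-checking; models, NS, and J are placeholders for your scenario models, number of scenarios, and first-stage index set:

coupling_variables = Vector{DD.CouplingVariableRef}()
for s in 1:NS                 # one block (subproblem) per scenario
    z = models[s][:z]         # this scenario's copy of the first-stage variable
    for j in 1:J
        # (block id, coupling id, variable reference): copies sharing a coupling id
        # are required to agree across scenarios (nonanticipativity).
        push!(coupling_variables, DD.CouplingVariableRef(s, j, z[j]))
    end
end
DD.set_coupling_variables!(algo, coupling_variables)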

anhphuong-ngo commented 8 months ago

@hideakiv Thank you very much for your explanation. That is much clearer.