Ah nice! I've been trying to find a test-case for this: https://github.com/jump-dev/Gurobi.jl/issues/415.
Can you run it to get the Gurobi log?
You can save the file with `SDDP.write_to_file(model, "gurobi_failure.mof.json.gz"; test_scenarios = 0)`.
I assume what happens is that the solution is at the point of the cone where the dual is not defined.
What happens if you add `sum(x) >= 0.0001` for your `x in SecondOrderCone()` constraints?
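A minimal sketch of what this suggestion looks like in JuMP (the variable `x` and its dimension are illustrative only):

```julia
using JuMP

model = Model()
@variable(model, x[1:3])
@constraint(model, x in SecondOrderCone())
# Keep the solution strictly away from the apex of the cone,
# where the dual is not defined:
@constraint(model, sum(x) >= 0.0001)
```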
> It seems that once the constraints are changed, all scenarios must follow those constraints. What I would like to achieve is to have scenario-specific constraints: depending on the contingency, a different set of constraints is applied.
Your `parameterize` code is incorrect:
```julia
SDDP.parameterize(subproblem, support, nominal_probability) do ω
    if ω isa Int
        # Change generator bounds
        for g in fData.genIDList
            if g == ω
                JuMP.set_normalized_rhs(spUb[ω], 0.0)
                JuMP.set_normalized_rhs(sqUb[ω], 0.0)
                JuMP.set_normalized_rhs(spLb[ω], 0.0)
                JuMP.set_normalized_rhs(sqLb[ω], 0.0)
                JuMP.set_normalized_rhs(genRamp_up[ω], 1000 * fData.RU[g])
                JuMP.set_normalized_rhs(genRamp_down[ω], 1000 * fData.RD[g])
            end
        end
    else
        # Change line bounds
        for l in fData.brList
            if ((l[1] == ω[1]) && (l[2] == ω[2])) || ((l[1] == ω[2]) && (l[2] == ω[1]))
                JuMP.set_normalized_rhs(linDistFlow_ub[l], -1000 * fData.rateA[l])
                JuMP.set_normalized_rhs(linDistFlow_lb[l], 1000 * fData.rateA[l])
                JuMP.set_normalized_rhs(thermal[l], 0.0)
            end
        end
    end
end
```
It's not sufficient to change only the values for the current scenario; you need to set the values of all elements that change in any of the scenarios (i.e., we don't set values back to their defaults between scenarios).
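A minimal, self-contained sketch of this reset-then-override pattern, using hypothetical data and a stand-in LP solver (HiGHS); it is not the poster's OPF model:

```julia
using SDDP, JuMP, HiGHS

generators = 1:2
default_ub = [10.0, 10.0]

model = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = HiGHS.Optimizer,
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 0)
    @variable(subproblem, p[g in generators] >= 0)
    @constraint(subproblem, g_ub[g in generators], p[g] <= default_ub[g])
    @constraint(subproblem, x.out == x.in + sum(p))
    @stageobjective(subproblem, sum(p))
    # ω is the generator that fails in this scenario. First reset *every* RHS
    # that any scenario can modify, then apply this scenario's override;
    # otherwise a change made for an earlier ω silently persists.
    SDDP.parameterize(subproblem, collect(generators)) do ω
        for g in generators
            JuMP.set_normalized_rhs(g_ub[g], default_ub[g])
        end
        JuMP.set_normalized_rhs(g_ub[ω], 0.0)
    end
end
```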
Thanks! I fixed the rhs part. The Gurobi error disappears when we add a strictly positive constraint to the second-order conic constraint, sum(|x_i|) >= 0.00001. I am not sure if I am running it correctly, but after I trained the SDDP and saw Gurobi error 10005, I tried to do

```julia
SDDP.write_to_file(model, "gurobi_failure.mof.json.gz"; test_scenarios = 0)
```

and I got this error:

```
ERROR: StochOptFormat does not support writing after a call to `SDDP.train`.
```
But when I only set up the model and run `write_to_file`, it gives me this error:

```
ERROR: MethodError: Cannot `convert` an object of type GenericQuadExpr{Float64,VariableRef} to an object of type GenericAffExpr{Float64,VariableRef}
```
I have committed the change to my repo. The added constraints are on lines 128-142. Commenting them out will give Gurobi error 10005.
I thought about this a little bit more. When we generate the cut, SDDP.jl uses the dual value obtained from the solver. Is there an option to derive the dual problem, solve the dual, and then directly use the dual variable values obtained from this dual problem? In my past experience, when we derive the dual directly (I performed this step by hand), we can at least get a dual value, and it is usually much more numerically stable than relying on the dual output from the solver.
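For context, the idea being asked about can be sketched by hand with Dualization.jl, although (as the reply below notes) SDDP.jl does not expose such an option; this toy model is illustrative only:

```julia
using JuMP, Dualization

primal = Model()
@variable(primal, x[1:3])
@constraint(primal, x in SecondOrderCone())
@objective(primal, Min, x[1])
# Mechanically derive the conic dual as a new JuMP model; solving it
# directly is the kind of workflow the question describes.
dual_model = Dualization.dualize(primal)
```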
> it gives me this error:
Yeah, you need to call `write_to_file` before `train`. I'll take a look at the quadratic error. Do you have the full stack trace?
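In other words (a sketch, where `model` is the SDDP policy graph and the iteration limit is arbitrary):

```julia
# Serialize before training: StochOptFormat cannot represent a model
# once SDDP.train has added cuts to it.
SDDP.write_to_file(model, "gurobi_failure.mof.json.gz"; test_scenarios = 0)
SDDP.train(model; iteration_limit = 50)
```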
> The Gurobi error disappears when we add a strictly positive constraint to the second-order conic constraint
The dual is not defined at the point of the cone, because you have a `0 >= sqrt(0)` issue.
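A toy model with this geometry, for illustration (a sketch, not a confirmed reproducer; whether it triggers Gurobi error 10005 depends on the solve path):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
set_optimizer_attribute(model, "QCPDual", 1)
@variable(model, t >= 0)
@variable(model, x)
# The optimum sits at the apex t = x = 0, i.e. 0 >= sqrt(0),
# where the conic constraint has no well-defined dual.
@constraint(model, [t, x] in SecondOrderCone())
@constraint(model, x == 0)
@objective(model, Min, t)
optimize!(model)
```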
> Is there an option to derive the dual problem, solve the dual, and then directly use the dual variable values obtained from this dual problem
No. When we encounter issues like this we usually throw away the basis matrix and start a fresh solve. The issue is that Gurobi.jl lies to us and says it has a dual solution when it actually doesn't.
@haoxiangyang89 how do I reproduce this? I tried running the `ms_sddp.jl` file, and I got `ci not defined`.
I tried with `ci = 1` and the constraints you added removed, and it runs okay. Also: what is going on with your bound? It's not monotonic. Try with `cut_deletion_minimum = 10_000`.
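For reference, `cut_deletion_minimum` is a keyword argument of `SDDP.train` (the iteration limit below is arbitrary):

```julia
# Effectively disable cut deletion by raising the threshold:
SDDP.train(model; iteration_limit = 100, cut_deletion_minimum = 10_000)
```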
Closing in favor of https://github.com/jump-dev/Gurobi.jl/issues/415. This doesn't seem to be something we can fix in SDDP.jl.
I am running a conic subproblem for the SOCP relaxation of the OPF problem. For the first few iterations of SDDP, it works fine. However, for the test case I had, at about iteration 25, I saw Gurobi error 10005. The error message is displayed below:
I have turned on the QCPDual option for Gurobi so it can generate conic duals. I am just wondering whether this is a general issue, or whether there is any way to improve stability.
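For reference, a sketch of how such a QCPDual-enabled optimizer can be attached to every node of a policy graph via JuMP's `optimizer_with_attributes` (the subproblem here is a hypothetical stand-in, not the OPF model):

```julia
using SDDP, JuMP, Gurobi

# QCPDual = 1 asks Gurobi to compute dual values for QCP/SOCP models.
qcp_gurobi = optimizer_with_attributes(Gurobi.Optimizer, "QCPDual" => 1)
model = SDDP.LinearPolicyGraph(;
    stages = 2,
    sense = :Min,
    lower_bound = 0.0,
    optimizer = qcp_gurobi,
) do subproblem, t
    @variable(subproblem, x >= 0, SDDP.State, initial_value = 1.0)
    @constraint(subproblem, [x.out, x.in] in SecondOrderCone())
    @stageobjective(subproblem, x.out)
end
```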
Also, I checked the solution output from the simulation results. It seems that once the constraints are changed, all scenarios must follow those constraints. What I would like to achieve is to have scenario-specific constraints: depending on the contingency, a different set of constraints is applied.
The test can be found here: https://github.com/haoxiangyang89/disruptionN-1/blob/master/src/convex/ms_sddp.jl. @odow should have access to this repo. Thanks!