Same thing - the model seems to be using some sort of primal simplex (I think?): the objective keeps rising but the primal feasibility doesn't improve (the residual stays at ~10^5).
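If the simplex method really is what's struggling, one thing worth trying is forcing Gurobi to use the barrier method for the root relaxation instead; a minimal sketch, assuming the JuMP model lives in gep.model and Gurobi is the attached optimizer (the parameter name is Gurobi's, nothing GEPPR-specific):
using JuMP

# Use the barrier method for continuous solves / the root relaxation
# (Gurobi parameter Method = 2)
set_optimizer_attribute(gep.model, "Method", 2)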
So according to my model formulation, I should be applying box constraints, but I'm only doing the following:
# Cap on the positive nodal imbalance, with slack variable abs_slack_L⁺
gep[:M, :constraints, :MaxAbsNodalImbalance] = @constraint(
    gep.model,
    [n = N, l = L⁺, y = Y, p = P, i = 1:length(T)],
    dL⁺[n, l, y, p, T[i]] - abs_slack_L⁺[n, l, y, p, T[i]] <= d_max[n][i]
)

# Floor on the negative nodal imbalance, with slack variable abs_slack_L⁻
gep[:M, :constraints, :MinAbsNodalImbalance] = @constraint(
    gep.model,
    [n = N, l = L⁻, y = Y, p = P, i = 1:length(T)],
    dL⁻[n, l, y, p, T[i]] + abs_slack_L⁻[n, l, y, p, T[i]] >= d_min[n][i]
)
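For context, a minimal sketch of how the slacks and their penalty might be defined upstream of the constraints above; the names and the absolute-value reformulation are my guess at the structure, not GEPPR.jl's actual code:
using JuMP

# Free slacks on the imbalance box constraints, plus auxiliary variables
# bounding their absolute value so a linear penalty can be applied.
# (The L⁻ slack would be defined analogously.)
abs_slack_L⁺ = @variable(gep.model, [n = N, l = L⁺, y = Y, p = P, t = T])
slack_mag    = @variable(gep.model, [n = N, l = L⁺, y = Y, p = P, t = T], lower_bound = 0)

@constraint(gep.model, [n = N, l = L⁺, y = Y, p = P, t = T],
    slack_mag[n, l, y, p, t] >=  abs_slack_L⁺[n, l, y, p, t])
@constraint(gep.model, [n = N, l = L⁺, y = Y, p = P, t = T],
    slack_mag[n, l, y, p, t] >= -abs_slack_L⁺[n, l, y, p, t])

# Linear penalty on the slack magnitudes (originally 10^3, later reduced)
penalty = 1.0
@objective(gep.model, Min,
    objective_function(gep.model) + penalty * sum(slack_mag))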
This will only further complicate my results, however, and in fact I think I will have to re-run all the AbsImb results.
Same behaviour as before; trying with a lower slack penalty (it was originally 10^3, now 1).
EDIT: With the lower slack penalty, same behaviour as before (though I may have forgotten to include the GEPPR.jl file to effect the changes). If I don't have the limits on the imbalances, it solves in ~30 seconds.
EDIT: If I have no penalty in the objective, it solves in 15 seconds.
Woo! So with a penalty of 1 on the slack variables, I'm at least able to get into the branch-and-bound part of the algorithm after ~90 seconds. Will increase the time out and report back on the non-zero values of the slacks.
Hmm, ok, increasing the time out didn't help. Will reduce the slack penalty further, as I need to get to the bottom of this.
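For completeness, bumping the time out is just a solver attribute; a minimal sketch, assuming Gurobi through JuMP (the 1800 s value is arbitrary):
using JuMP

# Raise the solver's time limit (in seconds) before re-solving
set_optimizer_attribute(gep.model, "TimeLimit", 1800.0)
optimize!(gep.model)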
Focusing just on the positive slack (summing up the absolute values of the slack):
Ugggh, it seems those edge effects are at play again... I think...
This is the dispatch:
From closer inspection of the commitment, generation and storage scheduling, I don't see much amiss (at least at the aggregate level): storage discharges before and after the spike in PV output (assuming the load shedding entry is a legend error, which it seems to be given the lack of load shedding), generation doesn't deviate much from 9 GW, and commitment stays at 22 pretty much the whole day.
Reserve dispatch:
Quite honestly, I don't know why it's curtailing downward reserves... I'm starting to think that the graph I plotted before is in fact not correct.
Amount of slack used increases with the reserve level, which makes sense:
julia> L_sum = sum(abs.(abs_slack_L⁺.data), dims=(1,3,4,5))[:]
10-element Vector{Float64}:
1.3195386222761915e-10
5.551808410209275e-10
1.11003471958627e-9
7.325568510683422e-10
1.122960399646033e-9
58.749230526124364
516.1136333157339
1158.3810487170301
2014.1506625675727
3298.093007761405
Interestingly enough, the slack values are also essentially zero for the downward reserve levels:
julia> sum(abs.(abs_slack_L⁻).data)
1.572238173044985e-6
julia> sum(abs_slack_L⁺.data)
-6524.230483567397
julia> sum(abs.(abs_slack_L⁺).data)
7045.487582891521
And the slack is mostly positive for negative imbalances (i.e. activating upward reserves), implying that the model is unable to make the nodal imbalances negative enough in order to satisfy the network reserve level activation constraints...
Looking at the figure for the imbalances at the nodal level, I'm wondering if the slack is activated for nodes which shouldn't have any nodal imbalances. This appears to be the case for Doel at least (see picture of imbalance ranges below). If I investigate the slack value for Doel at a particular timestep, that might be helpful.
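A sketch of how I'd pull those values out, assuming abs_slack_L⁺ already holds the solved values (as in the sums above), is indexed as (node, reserve level, year, period, timestep), and that the node label really is "Doel":
# Doel's slack for every reserve level and timestep, at the first year / period
doel_slack = [abs_slack_L⁺["Doel", l, first(Y), first(P), t] for l in L⁺, t in T]

# Which (reserve level, timestep) combinations carry a non-negligible slack?
findall(x -> abs(x) > 1e-6, doel_slack)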
Slack values for Doel imbalance (legend = reserve levels)
The slack is negative, implying that the imbalance must be positive (i.e. additional load at Doel) to ensure feasibility. Not sure where to go from here...
Similar figure for Herderen (which should also have no imbalances):
I came to the conclusion that the only real reason I can think of for this leading to infeasibilities is the unit commitment constraints. If I relax these, I get the following for Herderen:
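For reference, one way to do that relaxation directly in JuMP (a sketch; GEPPR.jl may instead rebuild the model with UC turned off) is relax_integrality, which also hands back a function to restore the binaries:
using JuMP

# Drop all integrality restrictions, so commitment becomes continuous in [0, 1]
undo_relax = relax_integrality(gep.model)
optimize!(gep.model)

# ... inspect the slacks / imbalances with the relaxed model ...

# Put the binary commitment variables back afterwards
undo_relax()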
Will try pushing the penalty up further to see if there's an improvement. EDIT: Seems so.
I will add an option for slacks or not and see whether the linear model can deal with it.
Ok, so this is strange. All model runs from this script work, and the only difference I can conceive of is that I changed the number of reserve levels included in the redispatch. Looking at the above, though, I thought this shouldn't change anything, since it was the higher reserve levels that were the issue, but I suppose there is an interaction because the reserve provision needs to be feasible for all reserve levels.
Ok, so yeah, when I run [this script]() the solver takes much longer to reach the correct solution:
1045807 6.0236457e+08 8.105409e+04 0.000000e+00 405s
1051193 6.0236473e+08 3.072497e+04 0.000000e+00 410s
1055764 6.0236521e+08 2.035136e+03 0.000000e+00 415s
1066285 6.0236522e+08 0.000000e+00 0.000000e+00 418s
Root relaxation: objective 6.023652e+08, 1066285 iterations, 415.33 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 6.0237e+08 0 51 - 6.0237e+08 - - 418s
0 0 - 0 - 6.0237e+08 - - 600s
And then it times out, sad times.
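When it dies at the time limit like this, the quickest sanity check is what the solver actually got to; a sketch using standard JuMP queries (nothing GEPPR-specific):
using JuMP

# What state did the solver stop in, and is there an incumbent at all?
@show termination_status(gep.model)   # e.g. TIME_LIMIT
@show primal_status(gep.model)        # FEASIBLE_POINT only if an incumbent exists

# Only meaningful when an incumbent exists
if primal_status(gep.model) == MOI.FEASIBLE_POINT
    @show objective_value(gep.model)
    @show relative_gap(gep.model)
end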
Results from 4 day analysis below. Closing this issue, don't think it was ever an issue to begin with!
For the main_model_runs and the run base_UC=true_DANet=true_RSV=0.0_L⁺=1:10_L⁻=1:10_AbsIm=true (opts_vec[39]) I get infeasibilities. I have since added slacks to the absimb constraints to see what that does, and it's feasible, but even after 800 seconds there's no solution. I will try a quadratic penalty now to see if that helps.
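A sketch of what I mean by the quadratic penalty, assuming abs_slack_L⁺ / abs_slack_L⁻ here are the JuMP variable containers (not the extracted value arrays) and that the term is simply added to the existing objective:
using JuMP

# Quadratic penalty: small slacks stay cheap, large ones get expensive quickly.
# Note this turns the MILP into a MIQP.
quad_penalty = 1.0
@objective(gep.model, Min,
    objective_function(gep.model) +
    quad_penalty * (sum(s^2 for s in abs_slack_L⁺) + sum(s^2 for s in abs_slack_L⁻))
)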