Ok, so there does appear to be some funny stuff going on here:
```julia
julia> reshape(sum(v.data, dims=1), 24, :)[20:24]
5-element Vector{Float64}:
0.0
0.0
0.0
9.0
4.0
julia> reshape(sum(w.data, dims=1), 24, :)[20:24]
5-element Vector{Float64}:
0.0
0.0
0.0
0.0
9.0
```
I would need to look more into the locations, i.e. which units are being turned on and off.
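A minimal sketch of how I'd list those, assuming `v.data` and `w.data` are units × timesteps 0/1 matrices (that layout is an assumption):
```julia
# Minimal sketch (assumed data layout): list which units start up (v) or
# shut down (w) in a given timestep, assuming v.data / w.data are
# units × timesteps 0/1 matrices.
units_started(v, t) = findall(>(0.5), v.data[:, t])
units_stopped(w, t) = findall(>(0.5), w.data[:, t])

# Inspect the last few timesteps of the horizon
for t in size(v.data, 2)-4:size(v.data, 2)
    @show t units_started(v, t) units_stopped(w, t)
end
```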
Investigating (a) whether the above is true across all days and (b) whether it's true regardless of the optimisation horizon.
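The logs below come from a sweep roughly like the following; `run_uc` and its keyword argument are placeholders, not the actual GEPPR.jl API:
```julia
# Hypothetical sweep over a few short time windows and reserve settings.
# run_uc(T; reserves) stands in for the actual GEPPR.jl solve call and is
# assumed to return z (commitment), w (shut-down) and v (start-up) as
# units × timesteps arrays.
for T in [(5113, 5121), (5133, 5141), (5017, 5025), (5037, 5045)]
    @info "T = $T"
    for reserves in ("none", "probabilistic")
        @info "Reserves: $reserves"
        z, w, v = run_uc(T; reserves=reserves)
        @info "Sum of z for last time steps: $(vec(sum(z, dims=1))[end-4:end])"
        @info "Sum of w for last time steps: $(vec(sum(w, dims=1))[end-4:end])"
        @info "Sum of v for last time steps: $(vec(sum(v, dims=1))[end-4:end])"
    end
end
```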
```
[ Info: T = (5113, 5121)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [13.0, 13.0, 13.0, 9.0, 4.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 4.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 4.0, 9.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [16.0, 16.0, 15.0, 14.0, 1.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 1.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 1.0, 1.0, 14.0]
[ Info: T = (5133, 5141)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [11.0, 11.0, 11.0, 6.0, 5.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 5.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 5.0, 6.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [12.0, 12.0, 12.0, 8.0, 22.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 14.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 4.0, 0.0]
[ Info: T = (5017, 5025)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [7.0, 7.0, 7.0, 4.0, 3.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 3.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 3.0, 4.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [8.0, 8.0, 8.0, 8.0, 0.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 0.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 8.0]
[ Info: T = (5037, 5045)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [13.0, 13.0, 13.0, 12.0, 13.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 2.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 1.0, 1.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [21.0, 21.0, 21.0, 22.0, 22.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 1.0, 0.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 0.0]
```
This edge effect disappears if you optimise between 20:00 and 04:00, especially for the last case (day 210), so I think this might be something to do with storage / the actual timeseries.
If I run the optimisation for 49 hours, with no reserves and with storage, I get the following commitments:
So different generators are being started up, perhaps due to a massive injection from storage?
And without storage:
Not really a fair comparison, since there might be load shedding now, even though the system should not be tight at this point.
And if I extend the optimisation horizon but with storage, the commitment for those timesteps doesn't change. The discharge of storage also doesn't seem to change significantly at the end of the horizon:
```julia
sd_sum = sum(sd.data, dims=(1,2,3))[:]
5587.524135290616
5393.446399585121
5460.665270248335
5161.94658449025
```
Maren pointed out a mistake in my unit commitment constraint: https://gitlab.kuleuven.be/UCM/GEPPR.jl/-/commit/7c906f1d947b1435f60af0ea393cb1bc6e5ff377
Perhaps this changes matters, let's see.
The commitment now appears to be quite different (no units committed at the end of the day), but perhaps that's due to no reserve requirements.
New results:
```
[ Info: T = (5113, 5121)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [14.0, 14.0, 12.0, 0.0, 0.0]
[ Info: Sum of w for last 4 time steps: [1.0, 0.0, 2.0, 12.0, 0.0]
[ Info: Sum of v for last 4 time steps: [1.0, 0.0, 0.0, 0.0, 0.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [15.0, 15.0, 14.0, 11.0, 0.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 1.0, 3.0, 11.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 0.0]
[ Info: T = (5133, 5141)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [8.0, 6.0, 3.0, 2.0, 2.0]
[ Info: Sum of w for last 4 time steps: [2.0, 2.0, 3.0, 1.0, 1.0]
[ Info: Sum of v for last 4 time steps: [0.0, 0.0, 0.0, 0.0, 1.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [8.0, 4.0, 3.0, 8.0, 22.0]
[ Info: Sum of w for last 4 time steps: [3.0, 5.0, 1.0, 0.0, 0.0]
[ Info: Sum of v for last 4 time steps: [0.0, 1.0, 0.0, 5.0, 14.0]
[ Info: T = (5017, 5025)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [8.0, 8.0, 8.0, 0.0, 0.0]
[ Info: Sum of w for last 4 time steps: [0.0, 0.0, 0.0, 8.0, 0.0]
[ Info: Sum of v for last 4 time steps: [1.0, 0.0, 0.0, 0.0, 0.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [9.0, 9.0, 8.0, 0.0, 0.0]
[ Info: Sum of w for last 4 time steps: [3.0, 0.0, 1.0, 8.0, 0.0]
[ Info: Sum of v for last 4 time steps: [5.0, 0.0, 0.0, 0.0, 0.0]
[ Info: T = (5037, 5045)
[ Info: Reserves: none
[ Info: Sum of z for last 4 time steps: [12.0, 12.0, 13.0, 13.0, 14.0]
[ Info: Sum of w for last 4 time steps: [1.0, 1.0, 0.0, 2.0, 0.0]
[ Info: Sum of v for last 4 time steps: [0.0, 1.0, 1.0, 2.0, 1.0]
[ Info: Reserves: probabilistic
[ Info: Sum of z for last 4 time steps: [18.0, 17.0, 17.0, 22.0, 22.0]
[ Info: Sum of w for last 4 time steps: [1.0, 1.0, 0.0, 0.0, 0.0]
[ Info: Sum of v for last 4 time steps: [2.0, 0.0, 0.0, 5.0, 0.0]
```
I'm not sure there is an edge effect anymore... Seems somewhat normal to me? Or at least not consistent. Not sure what to do to check this...
I could do the following: solve UC without reserves for 72 hours, fix the storage dispatch and then resolve for 12, 24 and 48 hours, plot the resulting commitment as a function of horizon. I should also send this result to Efthymios.
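A sketch of that experiment; `run_uc`, `storage_dispatch`, `commitment_variable` and the window indices are all assumed/hypothetical names:
```julia
using Plots  # assumed plotting package

# Hypothetical experiment: solve a 72 h UC without reserves, freeze the
# resulting storage dispatch, re-solve for shorter horizons with that
# dispatch fixed, and compare the committed units per timestep.
t0 = 5041                                              # illustrative start index
m72 = run_uc((t0, t0 + 72); reserves="none")
sd_fixed = storage_dispatch(m72)                       # hypothetical accessor

commitment = Dict{Int,Vector{Float64}}()
for horizon in (12, 24, 48)
    m = run_uc((t0, t0 + horizon); reserves="none", fixed_storage=sd_fixed)
    z = commitment_variable(m)                         # hypothetical accessor
    commitment[horizon] = vec(sum(z, dims=1))          # committed units per timestep
end

plot([commitment[h] for h in (12, 24, 48)];
     label=["12 h" "24 h" "48 h"], xlabel="timestep", ylabel="committed units")
```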
So it seems that there are no edge effects anymore?
(Figure: commitment_edge_effects.png)
Will check with Efthymios.
See #11 - the steep rise in nodal infeasibilities in the last hours of the day makes me think something is up... but I really don't know what. Since it's infeasible, it's not a cost issue - it's a constraint issue.
Just thought about this now: perhaps the issue is that storage was depleted by the end of the day? It could be. From the JSON file:
```julia
julia> [t => sum([v["e"] for (k,v) in d[t]["store"]]) for t in string.(7410:7416)]
7-element Vector{Pair{String, Float64}}:
"7410" => 31613.25531774399
"7411" => 26593.18142163777
"7412" => 22411.39145265917
"7413" => 16837.278680932173
"7414" => 12889.074808329231
"7415" => 9375.38752051201
"7416" => 6185.229459815895
julia> [t => sum([v["discharge"] for (k,v) in d[t]["store"]]) for t in string.(7410:7416)]
7-element Vector{Pair{String, Float64}}:
"7410" => 4543.888034383307
"7411" => 4607.517509206078
"7412" => 5016.701494554296
"7413" => 3553.3834853426483
"7414" => 3162.3185590354983
"7415" => 2871.142254626503
"7416" => 5566.706513834306
julia> 5566 / 0.9
6184.444444444444
```
That would make sense: the discharge in the last hour (≈5566) divided by the 0.9 discharge efficiency is ≈6184, which is essentially all of the ≈6185 still in storage, so the storage is emptied completely and Efthymios would have no leeway in his SCOPF for the last hour of the day. The only way to check this would be to send Efthymios 2 files with 2 different initial / final states of charge.
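A quick check of that hypothesis from the same JSON dict `d`; the 0.9 discharge efficiency and the field names are taken from the snippets above:
```julia
# Compare, hour by hour, the energy still in storage with the energy that
# the hour's discharge removes from it (discharge / efficiency). If the two
# match in the final hour, storage is being run down to empty.
η = 0.9  # assumed discharge efficiency
for t in string.(7410:7416)
    e  = sum(v["e"] for (k, v) in d[t]["store"])          # stored energy
    dc = sum(v["discharge"] for (k, v) in d[t]["store"])  # discharge in hour t
    @info "t = $t" energy = round(e) discharge_over_eta = round(dc / η) slack = round(e - dc / η)
end
```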
The issue is that for the last hour of the day, load shedding is quite large. I originally thought this might be a unit commitment edge effect, but I'm not so sure anymore - at the very least, the generation is within bounds:
Probably need to ask Efthymios for his results file, because it's currently hard to debug.