MATPOWER / most

MOST – MATPOWER Optimal Scheduling Tool, for steady-state power systems scheduling problems.
https://matpower.org/

Psc ~=0 and Psd ~=0 simultaneously, when InEff and OutEff < 100% #14

Closed: lordleoo closed this issue 4 years ago

lordleoo commented 4 years ago

Hello, I have a problem that I've been trying to debug and solve for over a week now, albeit in vain. My model is small but a bit complicated. I've tried to reproduce the problem in one of the MOST tutorials but couldn't. In the meantime, I can summarize the problem in a few equations:

If the power exchange between an ESS and the grid can be positive or negative, MOST decouples the ESS power into a positive part and a negative part through the constraint named Ps: 0 <= Pg - Psc - Psd <= 0, i.e. Pg = Psc + Psd, with Psd >= 0 (discharging) and Psc <= 0 (charging).
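For concreteness, here is a rough sketch of what such a linking constraint looks like in terms of the opt_model object om, for a single storage unit and ignoring MOST's time/scenario indexing (my own illustration, not code from MOST itself; Pmax_ess is a placeholder for the unit's power rating):

    % Pg - Psc - Psd = 0, with Psc <= 0 (charging part) and Psd >= 0 (discharging part)
    om.add_var('Psc', 1, 0, -Pmax_ess, 0);   % charging part, bounded in [-Pmax_ess, 0]
    om.add_var('Psd', 1, 0, 0, Pmax_ess);    % discharging part, bounded in [0, Pmax_ess]
    om.add_lin_constraint('Ps', [1 -1 -1], 0, 0, {'Pg', 'Psc', 'Psd'});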

Then, what is stopping Psc and Psd from assuming non-zero values, simultaneously, as long as they add up to the right value of Pg?

The problem I'm facing is with how Sp and Sm are defined:

    -InEff * Psc - (1/OutEff) * Psd - Sp(t) <= -Sp(t-1)
    +OutEff * Psc + (1/InEff) * Psd + Sm(t) <= +Sm(t-1)

(dropping the iess index for readability; Sp(t) should really be Sp(t,iess), Sm(t) should be Sm(t,iess), and Psc should be Psc(t,j1,iess))

Now, if efficiency is 100%, everything is fine. But when efficiency < 100%, I'm facing a situation where:

    -InEff * Psc - (1/OutEff) * Psd = 0

This means the SoC of the ESS stays the same; the ESS does not expend or absorb any energy. Yet at the same time:

    Pg = Psc + Psd ~= 0

Basically, the solution algorithm is creating power from thin air.

Here is a numerical example:

    Psc = -5.5492;
    Psd = 5.0081;
    InEff = 0.95;
    OutEff = 0.95;
    Pg = Psc + Psd;
    delta_SoC = InEff * Psc + 1/OutEff * Psd;
    display(Pg);
    display(delta_SoC);

You will get:

    Pg = -0.5411
    delta_SoC = -5.5789e-05

One of the culprits could be this combination of cost parameters:

- cost parameter for Pg(t,j1,iess): 0
- cost parameter for Psc: storage_price * InEff
- cost parameter for Psd: storage_price / OutEff

I noticed this from digging deep and looking under the covers of the MOST tutorials.

I am aware that I can define binary variables:

    u_sc = 1 if Psc < 0, 0 otherwise
    u_sd = 1 if Psd > 0, 0 otherwise

and then define a constraint: u_sc + u_sd <= 1.
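For reference, here is roughly how that could be added through the opt_model object om (my own sketch for a single ESS, with placeholder names u_sc, u_sd, Pmax_ess; this of course turns the LP into a MILP):

    om.add_var('u_sc', 1, 0, 0, 1, 'B');     % 1 if charging
    om.add_var('u_sd', 1, 0, 0, 1, 'B');     % 1 if discharging
    % big-M links:  -Psc <= Pmax_ess * u_sc   and   Psd <= Pmax_ess * u_sd
    om.add_lin_constraint('sc_link', [-1, -Pmax_ess], -Inf, 0, {'Psc', 'u_sc'});
    om.add_lin_constraint('sd_link', [ 1, -Pmax_ess], -Inf, 0, {'Psd', 'u_sd'});
    % at most one of charging / discharging can be active
    om.add_lin_constraint('sc_xor_sd', [1 1], 0, 1, {'u_sc', 'u_sd'});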

I've actually done this before in a different project, but I want to keep my formulation an LP. I am solving my small problem a few million times inside a heuristic optimization loop.

I will try to reproduce my problem in a minimum working example and post it as soon as possible.

The problem goes away when I set price_dispatchableLoad >> price_storage.

P.S. @Dr. Zimmerman, I informed you earlier that I modified MOST.m to accommodate user-defined variables, user-defined constraints and user-defined costs. I promised to share that, but I haven't had the time to write a manual/guide and generate examples. I haven't forgotten, and I have every intention to publish it; I've just had a crazy fall semester.

rdzman commented 4 years ago

You seem to be describing the situation I mentioned in point #1 in this comment under #9. Once I fixed #11, that situation went away in the case I was testing, but the possibility certainly still exists.

The question is why would that solution be the one chosen by the optimizer? It appears that for some reason it is optimal to "burn" energy at that node without changing the state of charge. If this is a deterministic problem, it seems to me this may indicate that charging the storage to max capacity is not sufficient to keep the nodal price from going negative, so the next best thing is to just "burn" energy?

Do you agree?

By the way, one way to interpret this simultaneous non-zero charging and discharging is simply that the storage is charged and discharged rapidly within a single period of the optimization. By controlling how fast and deep you cycle, you can determine the amount consumed by charge/discharge inefficiencies.

lordleoo commented 4 years ago

Yes, this makes a lot of sense now. I forgot about that post; I apologize for that.

I understand from a "cost" point of view why it was advantageous for the BESS to burn energy; I just didn't know how to stop it. I can prevent it by introducing binary variables, but that would increase the solution time significantly.

So, for the record, here's what I was doing. Instead of enforcing a hard constraint on ramping between periods, I let the generator (wind) ramp as much as it likes, but when Rrp_actual >= Rrp_threshold I apply a penalty (high cost). The way to do it is, of course, to introduce a new variable Rrp_new with a high cost term in the objective function. The constraint governing Rrp_new is:

    Rrp_actual - Rrp_new <= Rrp_threshold

(The same goes for Rrm: a new variable, a new constraint, and a new cost term.)
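In terms of the opt_model object om, a minimal sketch of that soft ramp limit for a single unit and a single pair of adjacent periods might look like the following (my own illustration; Rrp_threshold, ramp_penalty and the variable names 'Pg_t', 'Pg_tm1' are placeholders, not MOST names):

    om.add_var('Rrp_new', 1, 0, 0, Inf);              % ramp-violation slack, >= 0
    % Pg_t - Pg_tm1 - Rrp_new <= Rrp_threshold
    om.add_lin_constraint('soft_ramp_up', [1 -1 -1], -Inf, Rrp_threshold, ...
        {'Pg_t', 'Pg_tm1', 'Rrp_new'});
    % heavily penalize any ramp in excess of the threshold (linear cost on the slack)
    om.add_quad_cost('ramp_penalty', [], ramp_penalty, 0, {'Rrp_new'});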

I had set the penalty/cost term on Rrp_new very high, and set a very small negative cost for wind generation (to motivate dispatching wind and discourage wind curtailment). As a result, MOST decided to burn energy in the BESS rather than curtail wind.

lordleoo commented 4 years ago

I came up with a partial fix for the "BESS burns power" issue.

Here's a short recap of what's happening:

    Pg(iess) ~= 0
    eta * Psc(iess) + Psd(iess)/eta = 0

So basically, when Pg(iess) < 0, the BESS is sucking in power and burning it in secret, and when Pg(iess) > 0, the BESS is creating energy from thin air (yes, I experienced this).

This is because of the decomposition of Pg(iess) into positive and negative parts. This decomposition is common in mathematical programming:

    x_undefinedSign = x_positivePart + x_negativePart

Both the positive and the negative parts can be non-zero. Ordinarily it doesn't matter, because at the end of the day the decision variable of interest, x_undefinedSign, is the sum of both.
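For example (my own illustration, reusing the numbers from the example above), the same net injection can be represented by many different (Psc, Psd) pairs:

    % two decompositions of the same net power Pg = -0.5411
    Pg1 = 0      + (-0.5411);   % Psd = 0,      Psc = -0.5411  (pure charging)
    Pg2 = 5.0081 + (-5.5492);   % Psd = 5.0081, Psc = -5.5492  (simultaneous charge/discharge)
    % both satisfy Pg = Psc + Psd, but only the second one "burns" energy
    % through the charging/discharging inefficiencies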

In the case of a BESS, we have another player in the game: Sp and Sm represent bounds on the SoC of the BESS, and they are defined as:

    Sp(t) - Sp(t-1) >= -Psc*eta - Psd/eta

Of course, in MOST.m this constraint is represented in the form A*x <= b, such that all decision variables are on one side.

The other constraint, for Sm, would be:

    Sm(t-1) - Sm(t) <= -Psc*eta - Psd/eta

I'll cut the story short: I managed to improve the situation a little, but not totally eliminate it. Let me first denote the change in SoC between two periods as delta_SoC; this will make the equations easier to follow. I will maintain the same sign convention as MOST:

- Pg(iess) > 0 means the BESS is discharging, injecting energy into the system
- Pg(iess) < 0 means the BESS is charging up, drawing energy from the system

Therefore:

- Pg(iess) > 0 --> delta_SoC < 0 (discharging, giving out power, SoC drops)
- Pg(iess) < 0 --> delta_SoC > 0 (charging up, taking power in, SoC rises)

What you want to do is prevent the case where Pg(iess) ~= 0 while delta_SoC == 0, or even the absurd case Pg(iess) > 0 and delta_SoC > 0 (and I've seen this happen).

You can achieve this in three ways:

1) Declare binary variables u_charging and u_discharging, and enforce u_charging + u_discharging <= 1. I don't like this one; mixed-integer programming is much slower.

2) Apply a penalty on (Psc * Psd). This requires quadratic programming, and the H matrix must be positive semidefinite, so this is basically not doable.

(Bear with me, I'm just narrating my train of thought.)

3) Keep delta_SoC on a leash, in close proximity to Pg(iess). You can't force them to be equal when eta < 1. For example, if eta = 90%:

    Pg(iess) = +10 MW  -->  delta_SoC = -10 MWh / 0.9 = -11.1 MWh
    Pg(iess) = -10 MW  -->  delta_SoC = +10 MWh * 0.9 = +9 MWh

The example above raises the following issues:

A) Pg(iess) cannot be forced to equal delta_SoC when eta < 1.
B) When the BESS is discharging (Pg(iess) > 0), |Pg(iess)| < |delta_SoC|, and when the BESS is charging up (Pg(iess) < 0), |Pg(iess)| > |delta_SoC|.

The fact that Pg(iess) can be either positive or negative prevents writing any constraint of the form Pg(iess) <= delta_SoC * factor.

However, we have this rule (you might want to revisit your middle school math notes): |x - y| <= z can be rewritten as -z <= x - y <= z, and the region that satisfies it is a single continuous (in fact convex) region. (To see what I mean, consider the case |x - y| >= z: it cannot be represented by a single continuous region, since the answer is x - y >= z OR x - y <= -z, and there is no such thing as an OR constraint in an LP.)

In our case, x is Pg(iess) and y is -delta_SoC (note the negative sign):

    |Pg(iess) - (-delta_SoC)| = |Pg(iess) + delta_SoC| <= something

Now, the smaller the "something" is, the tighter the bound, the better. I know that the largest allowed difference between Pg(iess) and -delta_SoC occurs when Pg(iess) is at PMAX or PMIN. For example, with PMAX = 10 MW and PMIN = -10 MW:

- When discharging at PMAX, the difference is |10 - (-(-10/eta))| = |10 * (1 - 1/eta)|, so the gap in the discharging case is (1/eta - 1) * PMAX.
- Similarly, when charging up at the maximum rate, the gap is (1 - eta) * PMAX.

So there's my bound: max(1/eta - 1, 1 - eta) * PMAX, which turns out to be (1/eta - 1) * PMAX. Notice what this implies: when eta = 1, the width of the bound is 0; in layman's terms, the length of the leash is 0 and Pg(iess) sticks exactly to -delta_SoC. As eta gets smaller, the leash gets longer.
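A quick numeric check of that bound (my own illustration, using the eta = 0.9, 10 MW example above):

    eta  = 0.9;  PMAX = 10;
    % discharging at full rate: Psc = 0, Psd = PMAX
    Pg_dis = PMAX;      dSoC_dis = -PMAX/eta;     % = -11.1 MWh
    % charging at full rate: Psc = -PMAX, Psd = 0
    Pg_chg = -PMAX;     dSoC_chg =  eta*PMAX;     % = +9 MWh
    gap_dis = abs(Pg_dis + dSoC_dis);   % = (1/eta - 1)*PMAX = 1.11
    gap_chg = abs(Pg_chg + dSoC_chg);   % = (1 - eta)*PMAX   = 1.00
    % so |Pg + delta_SoC| <= max(1/eta - 1, 1 - eta) * PMAX = (1/eta - 1) * PMAX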

I could think of an even tighter bound: I know for a fact that the BESS cannot discharge more than its current SoC, or charge up more than (SoC_max - SoC). Think about it this way: (SoC) and (SoC_max - SoC) are each, at most, equal to PMAX, so there's a better bound to be had there.

I ran a test and it worked. The discrepancy between P_ESS and delta_SoC is not as small as I'd like it to be, but it's definitely much less than what I was getting before.

Here's the code for a quick test on any case file you have

    % Here I've already built the MOST instance, and I'm adding new constraints to it.
    % The new constraint set is named "ItBurns".
    define_constants;                       % for the PMAX column index
    n_gen  = size(mdo0.mpc.gen, 1);         % assumed setup; adjust to your own problem
    n_ess  = length(mdo0.mpc.iess);
    nt     = mdo0.idx.nt;
    nj_max = max(mdo0.idx.nj);

    % A * [Pg; Psc; Psd] = Pg(iess) - InEff*Psc - Psd/OutEff = Pg(iess) + delta_SoC
    Ag = sparse(1:n_ess, mdo0.mpc.iess, 1, n_ess, n_gen);
    As = sparse(1:n_ess, 1:n_ess, 1, n_ess, n_ess);
    A  = [Ag, -As .* mdo0.Storage.InEff, -As ./ mdo0.Storage.OutEff];

    % width of the "leash": (1/eta - 1) if discharging governs, (1 - eta) if charging does;
    % if both efficiencies are equal, go with the "OutEff" factor 1/eta - 1
    [lowest_eta, ch_or_disch] = min([mdo0.Storage.OutEff, mdo0.Storage.InEff], [], 2);
    ul = zeros(n_ess, 1);
    for iter_ess = 1:n_ess
        switch ch_or_disch(iter_ess)
            case 1
                ul(iter_ess) = 1/lowest_eta(iter_ess) - 1;
            case 2
                ul(iter_ess) = 1 - lowest_eta(iter_ess);
        end
    end
    UL = ul .* mdo0.mpc.gen(mdo0.mpc.iess, PMAX);   % n_ess x 1 bound vector

    % you must already have called om.init_indexed_name('lin', 'ItBurns', {nt, nj_max})
    % before this point
    for t = 1:nt        % (could arguably start from 2, because Pg(1) = Pg(1) philosophically)
        for j2 = 1:nj_max
            vs = struct('name', {'Pg', 'Psc', 'Psd'}, 'idx', {{t,j2,1}, {t,j2,1}, {t,j2,1}});
            mdo1.om.add_lin_constraint('ItBurns', {t,j2}, A, -UL, UL, vs);
        end
    end

rdzman commented 4 years ago

Let me step back from the details for a second and comment on the overall issue before we dive in to what to do about it.

You have described two cases, both where the SoC at the end of the period is the same as at the beginning (∆SoC = 0), but the average injection from the unit over the period (Pg) is non-zero.

The first case involves "burning" power. That is, we have Pg < 0. In general, I do not see this as a problem. It should not be optimal unless the price of energy at the bus is negative (i.e. it helps the overall objective to burn power at that node). Physically, it simply means that the unit is being charged and discharged rapidly within the period and the charging/discharging inefficiencies result in losses. This sort of energy burn by a battery is physically possible and could be utilized if it were optimal to do so. My only concern here is that I don't think the magnitude of this "burn" in the MOST implementation is necessarily limited to what would be physically feasible.

However, the second case you say you have observed involves "creating power from thin air", that is, Pg > 0. This should be impossible if ∆SoC = 0 and charging and discharging efficiencies are < 100%, as can be seen directly from the definitions of ∆SoC and Pg:

    ∆SoC = -eta_in * Psc - (1/eta_out) * Psd

Setting ∆SoC to zero and solving for Psd gives Psd = -(eta_in * eta_out) * Psc, resulting in:

    Pg = Psc + Psd = (1 - eta_in * eta_out) * Psc

The fact that (1 - eta_in * eta_out) > 0 and Psc <= 0 means that Pg must be <= 0 as well.
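A quick numeric sanity check of this relationship (illustrative only, using the efficiency values from the example earlier in the thread):

    eta_in = 0.95;  eta_out = 0.95;
    Psc = -5.5492;                          % charging component (<= 0)
    Psd = -(eta_in*eta_out)*Psc;            % chosen so that delta_SoC = 0
    delta_SoC = -eta_in*Psc - Psd/eta_out;  % = 0
    Pg = Psc + Psd;                         % = (1 - eta_in*eta_out)*Psc = -0.541 < 0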

So, if Pg > 0 when ∆SoC = 0, I consider this a bug.

So let's start by confirming and fixing that bug. Can you create a minimum working example that allows me to reliably reproduce that case?

Once we've dealt with that, we can consider how we might limit the "burn" to what is physically feasible. Working toward a way to entirely eliminate the possibility for storage to "burn" energy like this is something you'd have to convince me is a desirable goal.

lordleoo commented 4 years ago

Noted, I will get back to you.

lordleoo commented 4 years ago

Sorry about that. I am not able to regenerate the case; I might have misread the numbers in the first place (because P_ESS = -delta_SoC).

I already knew what was causing the ESS to burn energy, but I was looking for a way around it. I gave the renewable energy sources a negative cost to motivate dispatching them at their maximum, and instead the optimizer found a loophole in this arrangement by burning the energy in the BESS. The bound above actually worked, but it introduced other problems.

In any case, the formula I explained above is mathematically correct:

    |P_ESS + delta_SoC| <= (1/eta - 1) * P_ESSmax