EnergyInnovation / eps-us

Energy Policy Simulator - United States
GNU General Public License v3.0

Move to hourly electricity dispatch #232

Closed: robbieorvis closed this issue 3 months ago

robbieorvis commented 2 years ago

This builds on the discussion of other electricity sector improvements in issues #219 and #106 and logs as an issue conversations we've been having already about electricity dispatch.

We should consider moving to an hourly dispatch model to improve dispatch and reliability modeling. I completed a proof of concept of this approach in #106 that can be easily implemented and does not appear to result in a significant increase in runtime.

In evaluating the California EPS, it has become very obvious that this improvement is extremely important for evaluating scenarios of deep electricity sector decarbonization with any level of confidence. This is because solar is the dominant low-carbon electricity source dispatched, but when evaluating the hourly load requirements and solar availability, it is clear that the current approach of looking at peak annual/seasonal demand and resource availability does not capture the reliability limitations of a power system. For example, in the CA model, solar has a pretty high capacity factor during peak hours, but it has a zero capacity factor in later hours of the day when demand is high but not at peak levels. Using an annual approach, the model assumes that all demand can be met and even that peak is satisfied. But the real issue is the requirement to meet net demand after removing variable generation. To get that right, we need some representation of hourly load and resource availability. Because the implementation of an hourly modeling approach in #106 proved viable and fast, I suggest we just use that approach.

In estimating reliability, the model could look at load in a year and existing resources and do a modeling pass to see if there is a shortfall in any given hour, after applying the reserve margin. This is likely to result in need for additional resources to cover hours where there is a large net load requirement (net load = demand - variable renewable generation). In California, this is often in the evening hours, which is what I anticipate the modeling will show.
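
A minimal sketch of that screening pass, assuming hypothetical hourly arrays (names are illustrative, not EPS variable names):

```python
import numpy as np

def hourly_capacity_shortfall(demand_mw, vre_output_mw, firm_capacity_mw,
                              reserve_margin=0.15):
    """Per-hour shortfall of firm capacity against net load plus a reserve margin.

    demand_mw, vre_output_mw: arrays of length 8760 (MW in each hour)
    firm_capacity_mw: total dispatchable capacity assumed available (MW)
    """
    net_load = demand_mw - vre_output_mw           # net load = demand - variable generation
    required = net_load * (1.0 + reserve_margin)   # apply the reserve margin
    return np.maximum(required - firm_capacity_mw, 0.0)   # MW still needed, by hour

# The binding hour (often an evening hour with high net load) sets the capacity need:
# additional_mw_needed = hourly_capacity_shortfall(demand, vre, firm_mw).max()
```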

Moving to hourly dispatch will do a much better job representing system constraints and resource availability, improving our ability to model reliability capacity requirements while also allowing us to do economic capacity expansion as outlined in #106. It does likely require us to move back to ALLOCATE AVAILABLE, but as discussed elsewhere, this is required anyway given the limitation of the logit functions to model physical constraints.

jrissman commented 2 years ago

Just noting that the LBL team argued that we did not need to move to hourly dispatch. They claimed we only needed a far simpler set of updates to the power sector. I understand your argument above, but I just want to make sure that we're consciously considering the LBL recommendations rather than forgetting about those recommendations. Did you decide that hourly dispatch is in fact needed because your experience with the California EPS (subsequent to our conversations with LBL) shows that the simpler set of updates they recommended would not be sufficient to address the shortcomings you're seeing related to solar availability?

robbieorvis commented 2 years ago

I think if you are concerned with getting capacity expansion correct, you don’t need a full 8760 dispatch model, because you just have to look at the worst days/weeks of the year. What’s not clear to me is how you get dispatch correct without at least an 8760 model, and the issues I saw in CA made me think it would be much better to move in this direction, not just for the capacity expansion piece, but also for the dispatch piece. It also seems to add almost no run time, so I figured we might as well just go for it if the impact on run time is very small.

As an example, the annual dispatch approach we have now results in hours where a power plant is dispatched at greater than 100% of its capacity, or alternatively, allows variable resources to run when they wouldn't really be able to. That was what the spreadsheet model I put together showed, at least. The upshot is: when you look at annual energy availability, you can meet 100% of demand with solar, as an example, because multiplying capacity by capacity factor on an annual basis gets you enough MWh to meet demand, but there are many hours in which that solar can't actually generate. So the time-of-day component is really, really important.
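
A toy illustration of the point with made-up numbers: solar sized to supply 100% of annual energy still leaves most individual hours unserved.

```python
import numpy as np

hours = 8760
demand = np.full(hours, 100.0)                                 # flat 100 MW load
solar_cf = np.tile(np.clip(np.sin(np.linspace(0, 2 * np.pi, 24)), 0, None), 365)
solar_capacity = demand.sum() / solar_cf.sum()                 # sized to supply 100% of annual MWh

unserved_hours = int((solar_capacity * solar_cf < demand).sum())
print(f"Annual energy is covered, yet {unserved_hours} of {hours} hours fall short of demand")
```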

robbieorvis commented 1 year ago

After talking with Anand today, we came up with an approach for modeling charge and discharge of energy storage, which I integrated into the dummy model, attached here (Dispatch model test.zip). The short of it is that we have two passes: the first calculates dispatch and prices without storage, and the second assigns storage charging to the four lowest-priced hours and discharging to the four highest-priced hours (the assumed storage technology is a 4-hour battery, which means peak discharge = 1/4 of total charge). This follows similar technology assumptions to those used by Lazard in their levelized cost of storage analysis. Using this approach, I was able to get hourly dispatch working with storage, which correctly accounts for increased dispatch of fuels to charge the storage and decreased dispatch when storage is discharging.
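
A rough sketch of that second pass for a single day (illustrative Python, not the attached spreadsheet model; it ignores round-trip losses):

```python
import numpy as np

def apply_storage_to_day(demand_mw, price, storage_energy_mwh, duration_hr=4):
    """Shift one day's demand using a simple price-based storage rule.

    Charge in the `duration_hr` lowest-priced hours, discharge in the
    `duration_hr` highest-priced hours; peak power = energy / duration.
    Returns the adjusted demand seen by the dispatchable fleet.
    """
    power_mw = storage_energy_mwh / duration_hr
    order = np.argsort(price)                    # cheapest hours first
    charge_hours = order[:duration_hr]
    discharge_hours = order[-duration_hr:]

    adjusted = demand_mw.astype(float)
    adjusted[charge_hours] += power_mw           # charging adds load in cheap hours
    adjusted[discharge_hours] -= power_mw        # discharging offsets load in expensive hours
    return adjusted
```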

jrissman commented 1 year ago

Megan and I reviewed the IEA World Energy Model documentation regarding both how they choose which plants to build and which plants to dispatch. We feel that their dispatch mechanism is not an accuracy improvement relative to our current method with curves and ALLOCATE AVAILABLE and might even be a step backward in accuracy. They use a stairstep "merit order" of plants and then dispatch them for a fixed number of hours per year according to four "buckets":

(Figures: Demand_Buckets and Merit_Order)

Note the X-axis is in MWh, not MW. The World Energy Model doesn't explicitly dispatch to meet peak demand. It just divides total energy demand in MWh into four demand buckets.

This approach would not be properly responsive to sharp changes in demand (like the decline due to COVID in 2020) because it would just cut off the top segment of plants (the total demanded would be lower, so lower stairsteps would fill up to each blue line), which would say that some peaker plant types would not run at all. It would not catch the fact that there were hours in 2020 where peak demand was as high as normal, particularly after lockdowns ended. Also, the stairsteps group plants roughly, whereas the smooth curves of ALLOCATE AVAILABLE are better able to handle a range of different plant performance characteristics and costs.

We have decided to proceed with the full 8760 (365-day, 24-hour) dispatch simulation, since we think the World Energy Model's approach is worse than our existing approach as of EPS 3.4, and only moving to a full hourly dispatch model would further improve our accuracy relative to EPS 3.4.

jrissman commented 1 year ago

bdd900f implements hourly dispatch with grid battery charging and discharging. It is not yet connected to the outputs, so it doesn't affect any downstream part of the model yet. Also note that we're using stand-in data in the CSV files for the new input variables. Here are some implementation notes:

jrissman commented 1 year ago

After getting the dispatch mechanism working with grid batteries and with guaranteed dispatch (done using the methodology we spoke about), I turned my attention to beginning to implement power plant vintages. So far, I've added vintages for SYC Start Year Capacities and built the mechanism to track the power plant fleet by vintage (including additions and retirements) in 03a57d4. This requires a challenging bit of code to step through the elements of the Vintage subscript, which took quite a while to get right.

I then tried to insert the new tracking mechanism in place of the old one, and that caused errors. It looks like the issue is that various elements of the old tracking variables are woven throughout the code in many places, and I was only swapping in for the core tracking piece, not every instance where any involved variable was used. Even a very tiny difference in SYC seems to throw these other parts of the model off. I verified that if I make sure the new mechanism's values match the old ones in SYC exactly, it does not produce errors.

One option is to try replacing them all at once and hope it works or can be readily debugged. However, I am wary of changing too many things at once, due to Vensim's poor ability to identify where errors are in the code. I'm starting to think that my next step should be to break out the NG nonpeaker plant types into steam turbines vs. combustion turbines as unique plant types, which would then allow for these variable swaps to occur more easily and with less chance of causing runtime errors. Really, it would be ideal to get rid of the Power Plant Quality subscript entirely before swapping in the vintaging system. This would allow me to swap in new, vintaged variables for the old variables one-at-a-time while keeping the model in a running state, which is important for being able to find and fix bugs. Of course, if I get rid of the newly built quality tier, all plants of a given type will have the same properties, but as long as that doesn't cause model errors, it should be tolerable as a temporary state of affairs until I can get the vintaged variables in place. (The vintaged variables will have different properties by year, which I will initially set to use the EPS 3.4 preexisting retiring values for years prior to 2020 and newly built values for years 2020 and later, to mirror the results from EPS 3.4. We can later use our new vintaging capabilities to specify different plant properties for any number of past or future years, which will give us great flexibility and power compared to what we had in EPS 3.4.)

I know we want more plant types beyond just this NG nonpeaker split, but I think we should handle that as a separate programming step, once we get the vintaging system fully working with existing plant types. While it's annoying to have to add plant types twice instead of doing them all at once, when working on such complex and interconnected parts of the code, it's extremely important to program defensively in a way that minimizes bugs and maximizes our ability to get the new features working. Changing too many things at once is too risky.

jrissman commented 1 year ago

After the 19 commits listed above, we're at a milestone: the model now runs and produces somewhat sane results, with:

This is still using the old power plant construction, retirement, and dispatch systems. It most likely isn't worth the time to get the results to look perfect using those systems because those systems are going to be greatly revised or replaced. But I did want to get the model at least to run without errors using the old systems before going on.

Even though the hourly dispatch is presumably faster without quality tiers, the model overall runs much slower because the new Vintage subscript has many more elements than the old Power Plant Quality subscript, and the Vintage subscript is used extensively - there are actually 827 references to "Vintage" in the code. At some point, we're going to need to review where Vintage is used and try to avoid letting it flow through large parts of the electricity sector. My plan for this is:

This should produce identical or near-identical results since we don't display final results by Vintage anyhow. It should greatly reduce the runtime impact from Vintage.

If that's not enough, we could also consider taking the vintage back only to the earliest year when you expect plant properties to vary (say, heat rates and O&M costs) and grouping all plant types older than that into the earliest year bucket. In the U.S., the only types of capacity built prior to the 1940s that are still operating today are hydroelectric and a small amount of petroleum-fired capacity, so we could consider grouping everything built before 1940 into the 1940 bucket, rather than going all the way back to 1891. Also, we might be able to avoid sending future year vintages (full of zeroes) through so much of the model, perhaps by making a sub-range that only includes relevant vintages, though making that fully input data-driven would be hard.

But I think my next focus is going to be to replace the existing power plant construction, retirement, and dispatch systems. This will include removing the flexibility point system, which might improve run speed.

Finally, note that I've sent an email to Ventana Systems asking:

Do you have a benchmarking tool that can report the amount of time (in milliseconds) that Vensim spends calculating each variable when the model runs? I could really use a benchmarking tool to help me identify how much time each variable takes to calculate, so I could know where to focus my optimization efforts. For example, it could output a text file when the model runs listing all the variables and the number of milliseconds spent on each variable.

I don't need the tool quite yet, but I will want it once I'm a little farther along. If they don't have anything, we likely will want to try converting the model to C code using SDEverywhere and profiling the C code using 0x or Clinic Flame.

jrissman commented 1 year ago

I heard back from Ventana Systems. They don't have a profiling tool today. They might consider adding one as a Vensim feature based on my feedback. I suggested some design guidance for that feature. Hopefully they will build it.

They did share this link to a blog post with some general ideas for optimizing Vensim model run speed. It is no replacement for a profiling tool, and not all of its ideas are relevant to us, but I'll paste the link here so we can review it during optimization-focused work after the new features are built: https://metasd.com/2011/01/optimizing-vensim-models/

robbieorvis commented 1 year ago

I just had one more thought for optimization. You could model hourly electricity using ALLOCATE AVAILABLE dispatch for NET load only, meaning first you would dispatch all the renewables (up to the maximum demand) using something other than ALLOCATE AVAILABLE, such as just multiplying the hourly capacity factor by the capacity, then do the ALLOCATE AVAILABLE on dispatchable resources to meet net load, after the type of averaging you mentioned for things like heat rate across vintages. I don't think there would be much of an accuracy hit, and it could significantly cut down on the complexity of allocation.

For example, you could remove solar PV, onshore wind, offshore wind, conventional hydro and nuclear, at least, from the allocation, and maybe more like geothermal and biomass if we assume they run at fixed rates, which is reasonable.

We’d still need a way to curtail in hours where the total supply exceeds demand, but that might be easy to manage instead of having allocate available do it.
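
To make the suggestion concrete, here is a hedged sketch (Python rather than Vensim, with made-up names) of removing the fixed-profile resources before allocation and handling curtailment with a simple max():

```python
import numpy as np

def net_load_for_allocation(demand_mw, capacity_mw, hourly_cf):
    """Dispatch fixed-profile resources (solar, wind, hydro, nuclear, ...)
    outside the allocation step and return the net load plus curtailment.

    capacity_mw: dict of plant type -> MW
    hourly_cf:   dict of plant type -> array of hourly capacity factors
    """
    fixed_gen = sum(capacity_mw[t] * hourly_cf[t] for t in capacity_mw)
    curtailment = np.maximum(fixed_gen - demand_mw, 0.0)   # excess supply in each hour
    net_load = np.maximum(demand_mw - fixed_gen, 0.0)      # what dispatchables must cover
    return net_load, curtailment
```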

jrissman commented 1 year ago

We have one free top-tier graph menu option available to use for sub-annual electricity dispatch outputs. This allows us up to 20 graphs. Megan and I suggest the following new graphs.

For models that run through 2060 (currently just the china-igdp model), we can have 2060 versions of the graphs above, which brings the total number of graphs to 20, which is our limit. It probably won't be very many years until the us model runs through 2060, depending on when EIA extends their AEO through that date, so these graphs effectively fill the new top-tier graph menu item.

This will require minor work on the web app to support non-year values on the X axis.

jrissman commented 1 year ago

Today, I tried implementing a new electricity dispatch allocation approach that replicates the functionality of ALLOCATE AVAILABLE using pure mathematics in Vensim, with no calls to Vensim functions. My hope was that this would reduce model runtime by avoiding any inefficiencies Ventana might have included in their implementation of ALLOCATE AVAILABLE, particularly because they support a wide variety of curve shapes, whereas we only need a standard normal distribution.
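
For context, the math being replicated can be sketched outside Vensim as a market-clearing search over normal CDFs (a hedged illustration, not the test model's equations; the function names and the bisection search are mine):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def allocate_with_bell_curves(demand, capacities, costs, widths, tol=1e-6):
    """Split `demand` among suppliers whose willingness to run follows a normal
    curve around their cost: supply_i = capacity_i * CDF((level - cost_i) / width_i).
    Bisection finds the clearing level at which total supply equals demand."""
    def total_supply(level):
        return sum(c * normal_cdf((level - m) / w)
                   for c, m, w in zip(capacities, costs, widths))

    demand = min(demand, sum(capacities))          # cannot allocate more than exists
    lo = min(costs) - 10 * max(widths)
    hi = max(costs) + 10 * max(widths)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total_supply(mid) < demand else (lo, mid)
    level = 0.5 * (lo + hi)
    return [c * normal_cdf((level - m) / w) for c, m, w in zip(capacities, costs, widths)]

# 100 MW of demand split between a cheap and an expensive plant (roughly 80 / 20).
print(allocate_with_bell_curves(100, capacities=[80, 80], costs=[20, 40], widths=[5, 5]))
```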

I first built the approach in a separate, small test model. I’ve pushed it to a branch called math_based_alloc and the filename is Math-Based Allocation Test Model.mdl. This was challenging to create, and the test model is a good illustration of how to do it, as well as how to loop through a subscript involving a large number of variables. I’m actually using a slightly faster version of the cumulative probability distribution function (CDF) than the one used by the web app. When I delete the ALLOCATE AVAILABLE variable from the test model (and delete a few other extraneous calculations) and time it with these parameters:

It completes this in roughly 34 seconds. If we multiply the numbers above, that’s 3.54 * 10^9 elements allocated in 34 seconds, or roughly 10^8 elements allocated per second.

The EPS using the new math-based allocation has:

Multiplying these, that’s 1.12 * 10^8 elements. That’s much less than the number of elements allocated in my test above. My test suggested Vensim could crank through around 10^8 elements per second using the math-based approach, so that should give about a 1-second runtime for the allocation in the EPS.

I went ahead and built the math-based allocation in the EPS and tried to run it. The problem is that it doesn’t run at all. It passes Vensim’s error check, then Vensim chokes and can’t begin the simulation. My guess is that the problem is that Vensim is not designed to remember large amounts of data in the same timestep. Even though it has to allocate more total elements in my test model when set to 1 million timesteps vs. the number of elements to allocate in the EPS, Vensim performed much better when the “timesteps” range was very large and the other ranges were small. When the other ranges are large and the timesteps are small, it works slowly or, in this case, crashes.

Vensim must be optimized with some special assumptions regarding its time subscript. For instance, maybe it overwrites each element from the previous timestep in memory when it advances to a new timestep, but it can't do the same for any other subscript (such as "Allocation Pass") because it doesn't know that those values won't be needed for further calculations in the same timestep. It effectively makes more information available in later calculations because you could reference any prior allocation pass in a later allocation pass, but you can't reference earlier timesteps in later timesteps (without putting in an explicit DELAY FIXED variable that carries a value forward to the timestep where you need it). So, it seems like Vensim is probably trying to remember too much when too many non-timestep elements are applied to the same variable.

Since Vensim seems better about handling timesteps rather than subscript dimensions we create, that makes it tempting to devise a way to use an hour timestep, which could move 8760 elements from other subscripts into the time axis. But an hour timestep would not work with the rest of the model. If Vensim allowed sub-models with different timestep sizes, that would be interesting, but the documentation assumes a one-to-one mapping of variables between parent models and sub-models, with no provision for dealing with differences in timestep.

So, in the end, this is disappointing, and it is an indication that as long as we’re using Vensim, we may have to live within its limitations. I think we can go to hourly dispatch across 17 plant types, but we can’t add another 25-element subscript (25x the data usage), or Vensim crashes.

If we want to keep using a dispatch method that involves converging on a solution, such as the bell curve approach we have now, it looks like we need to stick with ALLOCATE AVAILABLE because it is designed under the hood in such a way that it can handle this amount of data without Vensim choking. I imagine it has at least 25 allocation passes internally, but the developers knew what ALLOCATE AVAILABLE is doing and that not all the intermediate values need to be saved, so it’s built not to remember all those intermediate steps and is thus more memory-efficient than the pure math method, and that may be making the difference here, even if the pure math approach might be faster in theory.

I’m going to go back to working from branch “#232” and not work further from the “math_based_alloc” branch, but that branch will be a place where today’s work is permanently saved in case it is later needed or useful for reference. I am going to focus next on replacing the power plant construction piece, plus possibly some more Vintage optimizations.

We’ll stick with 8760 hours right now. Closer to the end of developing this feature, if we still need more run speed, I think the most promising thing to do is to find a way around simulating all 8760 hours. This would involve using “HELF Hourly Equipment Load Factors” and “HECF Hourly Electricity Capacity Factors” in Excel to identify which days we want to simulate, and then exporting hourly ELF and ELC data only for those dates, rather than for all 365 dates. But I’m going to try to avoid the need for us to do that, if I can.

jrissman commented 1 year ago

71d444d implements a calculation of marginal dispatch cost by hour, which we will use to determine cash flows and which plants are economic to build and retire. It generally follows the approach Robbie developed in the test model using a lookup table of z values corresponding to shares of the area under the normal distribution curve. A few small differences:

A big thank you to @robbieorvis for developing this approach!

jrissman commented 1 year ago

There has been a lot of progress in the last couple days. The highlights are:

The biggest issue right now is that the estimated revenue that the power plants get (from the hourly electricity market prices) is too low to make any plant types profitable in most hours of the year. This is incorrect. The discovered market prices are based on dispatch costs and therefore don't reflect the full cost of operating the plant, which includes fixed O&M plus amortized initial capital expense being paid over the plant's lifetime. Therefore, plants must charge for cost recovery of their fixed O&M and capital expenses. We'll need to factor these charges into the revenue estimate.

Or, Robbie mentioned that plants also get revenue from participating in a capacity market, which is additional to the revenue they get from dispatching electricity. We should potentially add a capacity market with payments per MW of capacity available by hour. If energy (MWh) and capacity (MW) markets are the only ways that plants get revenue, and this revenue generally covers their fixed O&M and their amortized capital expenses, then we may only need to add the capacity market and not worry about directly adding any cost recovery of fixed O&M or amortized capital expenditures.

Beyond that, other key "to do" tasks that remain are:

jrissman commented 1 year ago

The latest commits implement EV batteries, demand response, pumped hydro, and electricity imports and exports - specifically their effects on hourly dispatch.

Megan and I had a long discussion about whether we can calculate the effects of these things on demand by hour all at once, or whether it needs to be sequential, and we ended up deciding this one needs to be sequential (unlike the conditions for amount of capacity to build, which are now evaluated all at once). I won't go too deeply into the reasoning here, but a high-level summary is:

Each demand-altering technology has a different total capacity per day and a different number of hours on which it acts per day. For instance, a grid battery might be able to store 4 MWh of electricity but has a capacity of 2 MW, meaning that no more than 2 of the 4 MWh can be discharged in any particular hour. Therefore, we can't just add all the demand reductions and demand additions possible from each demand-altering technology before assigning them to hours (via an allocation process, with the hours as the demanders) because this wouldn't account for the limits of each technology to act on each hour. There is also the opposite problem, where a demand-altering technology runs out of energy shift capability before it runs out of capacity. For example, if the battery above could store 3 MWh but has a capacity of 2 MW, then it can only discharge 2 MWh in the highest-demand hour and 1 MWh in the second-highest-demand hour, which is below its capacity of 2 MW, because it ran out of stored electricity. These types of per-hour limits can't be accounted for in a single, simple allocation step like a logit function or ALLOCATE AVAILABLE, but rather, require a proper loop (such as a "for" loop) to check various conditions as the demand changes are assigned. Vensim is awful at handling large and complicated loops through subscripts. (For example, I built one to recreate ALLOCATE AVAILABLE using pure math in branch "math_based_alloc" and Vensim crashed rather than run it.) The sequential system we have right now is able to be handled efficiently via Vensim and is the right choice for implementation in this particular instance.
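
As a concrete illustration of why the per-hour checks require a loop, here is a small greedy sketch (Python, not the Vensim code) that discharges a storage-like resource into the highest-demand hours while respecting both its power (MW) and stored energy (MWh) limits:

```python
import numpy as np

def discharge_into_peaks(demand_mw, power_limit_mw, energy_limit_mwh):
    """Assign discharge to the highest-demand hours of one day, capped by the
    device's power in any single hour and by its total stored energy."""
    discharge = np.zeros_like(demand_mw, dtype=float)
    remaining = energy_limit_mwh
    for hour in np.argsort(demand_mw)[::-1]:          # highest-demand hours first
        if remaining <= 0:
            break
        amount = min(power_limit_mw, remaining)       # both limits bind here
        discharge[hour] = amount
        remaining -= amount
    return demand_mw - discharge                      # adjusted demand

# The 3 MWh / 2 MW example from the text: 2 MWh in the peak hour, 1 MWh in the next.
day = np.array([5.0, 6.0, 9.0, 8.0])
print(discharge_into_peaks(day, power_limit_mw=2, energy_limit_mwh=3))
```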

Remaining to do items:

jrissman commented 1 year ago

Based on a detailed review by the Power sector team, here are some ideas of things we may do (roughly in this order):

Also note that for the DR provided by EVs, keep the input variable Fraction of EV Battery Capacity Used for Grid Balancing set to zero for early years of the model run and have it phase in slowly thereafter. Make it a time-series input variable.

jrissman commented 1 year ago

Changing shift DR to operate only in the top X hours per year is too simplistic because there is no requirement that all electricity utilities nationally call upon their DR in the same hours as one another. So even if each utility has an agreement to use their DR only in the top, say, 20 hours per year, the actual use of DR will be spread out among more than 20 hours per year nationally, since the utility in Texas might pick some different hours from the utility in Maine (even if each utility individually limits itself to 20 hours).

Also, we can't shift from the peak hours of the year to the lowest hours of the year because those times are separated by months, and shift DR only shifts demand a matter of hours, not months. We aren't modeling shed DR right now.

The old approach assumed utilities would spread their requests equally throughout the year, but that was too much spreading because both the utility in Texas and the utility in Maine might select hours in the summer and winter, not hours in the spring and fall. We want to limit the hour selections to particular days, not every day.

Here's a different approach:

Keeping the shifts within the same day is important for not violating our code that protects against overshooting (shifting too much in a given direction).

If we want something fancier that doesn't assume utilities spread their DR evenly among the 60 DR-requesting days, we need to spread out the capacity based on need. To do that:

This approach better captures some of the regional variance that will make some days particularly heavy DR users and other days lighter DR users.

jrissman commented 1 year ago

fdccf0b overhauls DR handling. I ended up using a streamlined version of the second approach detailed in my previous post, because this is more accurate than assuming an equal amount of DR is used on each date/event when DR is used. I think this is a big improvement over what we had before, and also an improvement over simply applying DR to the top X hours per year nationally.

There remain many more things to do for issue #232, the most important of which at this point is adding a capacity market to aid in fixed cost recovery.

jrissman commented 1 year ago

I tried implementing capacity market payments based on the amount of additional revenue needed to get plant types that are built for reliability purposes to not incur losses, following a design discussed with @robbieorvis some time ago. It quickly became apparent that I needed to first fix some issues with the reliability-based construction before it could provide meaningful numbers for use in a capacity market. The following improvements were made to reliability-based construction:

Without any cost-based construction or retirements, the changes above produce a somewhat reasonable-looking set of reliability-based choices, mostly a mix of solar PV and natural gas peakers. I anticipate the need for peakers will be lower once we have cost-based construction in there.

I then finished building the capacity market system. I won't get into the details of the design choices here, because I think the approach doesn't work and we need to do something else. Even after carefully filtering the capacity payments (e.g., to try to exclude outliers when finding the marginal plant, etc.), it is almost impossible for the capacity payment amounts to land in a place where they make some plant types economic and others uneconomic, which is a small target range to hit. The other problem is that the target range is unstable because it depends so strongly on what plants retire, which can trigger the need to build capacity for reliability, and it's dependent on what was built in prior years in the cost section, which can reduce or eliminate the need to build plants for reliability in a given year. Even if I could fiddle with the data and equations to somehow align these things for the U.S. model, it would not work for other regions and would break with every U.S. data update. We cannot have capacity payments be so sensitive to calculated values that are influenced by so many things - it is a recipe for chaos (in the mathematical sense of the term, where the slightest deviation from initial conditions produces large and unpredictable swings in the results).

I think my next step will be to remove this capacity payment system and add one based on cost recovery of fixed assets, staffing/O&M, etc.

One thing that makes this difficult is that it's not entirely clear what the desired behavior is. Do we expect at least some capacity to be built every year? If so, do we expect it to be cost-driven, reliability-driven, or both in every year? Do we expect at least some cost-based retirements in every year? For now, trying to get the behavior closer to EPS 3.4.2 seems like the best thing to do.

If need be, I can consider zeroing out the mandated construction in the first two years, which can cause strange effects and prevent the regular reliability- and cost-based systems from kicking in until a few years into the model run. Ultimately, we do need this model to work well even when the first few years are defined using mandated capacity construction.

jrissman commented 1 year ago

One other thought from today: BECF BAU Expected Capacity Factor needs to be replaced with a calculated value based on HECF Hourly Electricity Capacity Factors and RAF Resource Availability Fraction to ensure that BECF and HECF are always aligned with each other. Otherwise, we can be exposed to strange results where utilities build stuff on the basis of BECF and it performs dramatically differently using HECF.

But BECF is one of our vintaged input variables, and of course it would be impossible to vintage HECF. One option is to no longer vintage BECF, which might be okay for thermal plant types but probably loses some accuracy regarding future improvements in wind turbine or solar panel technology that could increase the capacity factors of those technologies. A different option is to turn BECF into a vintaged multiplier that is set to 1.0 in the model's first year and can be set to other values (such as 1.2) for future vintages of particular plant types. (It could also be set to values below 1.0, such as 0.8, for older vintages that perform poorly compared to modern plants.) We then will need to multiply HECF by the vintaged capacity factor multipliers, effectively turning HECF into a time-series variable. That's probably the route I'll go, since I don't want to lose the ability to have different capacity factors for plants of different vintages. We'll need a weighted average of the multipliers that we track as we add and remove plants, just like the weighted average BECF today, for speed optimization purposes.
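
A minimal sketch of the weighted-average bookkeeping (illustrative names and numbers, not the EPS variables): the fleet-average multiplier is updated as capacity is added or retired, and HECF is scaled by it.

```python
def fleet_average_multiplier(capacity_by_vintage_mw, multiplier_by_vintage):
    """Capacity-weighted average of the vintaged capacity factor multipliers."""
    total_mw = sum(capacity_by_vintage_mw.values())
    if total_mw == 0:
        return 1.0
    return sum(capacity_by_vintage_mw[v] * multiplier_by_vintage[v]
               for v in capacity_by_vintage_mw) / total_mw

# Hypothetical solar fleet: older vintages slightly below 1.0, newer above.
capacity = {2015: 4000, 2020: 6000, 2025: 2000}
multiplier = {2015: 0.95, 2020: 1.0, 2025: 1.1}
avg = fleet_average_multiplier(capacity, multiplier)
effective_hecf = [cf * avg for cf in [0.0, 0.3, 0.6, 0.2]]   # hourly CFs scaled by the fleet average
```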

We'll also need to do everything we can to speed optimize the variables using the "Hour, Day, Electricity Source" subscripts, but that needs to wait until we are happy with how the plant building, retiring, and dispatch systems work.

jrissman commented 1 year ago

Commit dfa4ad3 implements a more robust and mathematically tractable approach to capacity payments than the approach described in yesterday's comment. Capacity payments are now based on physical electricity system needs, rather than trying to derive a cost from the marginal plant in the reliability calculations, to avoid involving excessive contingencies and prevent chaotic behavior. Capacity payments now much more closely resemble how they work in the real world (based on a brief discussion I had today with Mike), where the same payment per MW is offered to all plants with similar availability (e.g., nuclear, NG steam turbine, NG combined cycle, NG peaker, coal, etc.) irrespective of whether the plants are dispatchable, while plants with lower capacity factors (primarily renewables) receive significantly lower capacity payments. The base rate for capacity payments is set based on the rate necessary to allow utilities to break even on their fleet-wide capital + operating costs vs. revenues, with neither profits nor losses. (Of course, utilities may in fact charge more than this in order to have a profit margin and repay financing charges on past capital equipment purchases, but such charges should be handled in electricity pricing and should not be affecting the choice of which plants to build.) The EPS adjusts the capacity payments based on the plants' availability specifically in hours with the most unmet electricity demand after guaranteed dispatch and after demand-shifting technologies. (In the BAU case, that adjustment is not very important, but it might be more important in some high-renewables policy cases.)
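
A hedged sketch of that break-even arithmetic (purely illustrative, not the committed equations): the base rate closes the gap between fleet-wide costs and energy-market revenue, and each plant type's payment is scaled by its availability in the highest-unmet-demand hours.

```python
def capacity_payment_rates(fleet_fixed_costs, fleet_energy_revenue,
                           capacity_mw, availability_in_scarce_hours):
    """Base rate ($/MW) that lets the fleet break even, scaled by each plant
    type's availability in the hours with the most unmet demand.

    capacity_mw, availability_in_scarce_hours: dicts keyed by plant type.
    """
    weighted_mw = sum(capacity_mw[t] * availability_in_scarce_hours[t]
                      for t in capacity_mw)
    shortfall = max(fleet_fixed_costs - fleet_energy_revenue, 0.0)
    base_rate = shortfall / weighted_mw if weighted_mw else 0.0
    return {t: base_rate * availability_in_scarce_hours[t] for t in capacity_mw}
```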

I calibrated CRtPaL Capacity Response to Profits and Losses for profits, to prevent over-building of some marginal plant types or plant types the U.S. doesn't use, such as crude oil-burning plants.

I was also able to enable cost-driven retirements for the first time, as the profitability figures for plants are finally good enough to allow use of this code. The results aren't accurate yet, but at least the code can now be used and the CRtPaL loss parameter can be calibrated.

robbieorvis commented 1 year ago

Hi Jeff, please let me know if you want to meet to chat more about this and your progress.

jrissman commented 1 year ago

I wanted to put here some materials from an NREL training video on their ReEDS capacity expansion model. Here is the list of standard outputs from the ReEDS model:

(Figure: ReEDS Standard Outputs)

"OpRes" refers to "Operating Reserves". Operating reserves are defined (by CAISO) as follows:

Operating reserves are the electricity supplies that are not currently being used but can quickly become available in the case of an unplanned event on the system, such as a loss of generation or when real-time demand is higher than forecast.

Note that ReEDS finds separate prices for energy, for OpRes, and for capacity (with OpRes on an hourly basis and capacity on an annual basis). They also have total system cost, which would include costs of transmission and storage.

"Timeslice" refers to one of the 17 timeslices that ReEDS breaks the year into. ReEDS doesn't simulate each hour of the year but instead uses 17 timeslices to limit computational runtime.

Our 3.5-wip calculates energy prices from LCOE and from reverse-engineering the ALLOCATE AVAILABLE bell curves, and we just added an approach for capacity/OpRes payments.

One thing we should do is to incorporate required transmission and storage investments (per MW) into the prices of new generation resources, which will then affect what types of resources get built. This also satisfies requests from reviewers, such as those at LBNL, to have endogenously-built transmission and storage. We can report separately the costs for transmission, for storage, and for capacity construction once we know what types of generation are getting built, with no need for a second allocation, since we know how much storage and transmission was needed per MW of each resource built.

I think capacity expansion models might not try to calculate energy costs, OpRes costs, and other costs from first principles, but rather might try to build out the electricity grid in whatever way meets electricity demand while minimizing total system cost (and potentially with other constraints like an RPS/CES), and then they charge whatever electricity rates are necessary in order to cover that cost. That approach seems fine in theory, but one still needs a way to break down costs to specific drivers such as building each type of plant as well as transmission or storage, electricity imports, etc., and how to distribute those costs between energy payments, OpRes payments, etc., and among different hours of the day. These assignments are needed so that costs are properly affected by policy - for instance, a policy that encourages more use of demand response.

Finally, there will be the question of whether to try to align the endogenously-calculated electricity rates from the electricity sector model with annual average electricity rates seen by electricity users in all the other model sectors (which today come straight from input data, before any policy modifications). The cleanest way programmatically would be to change the prices charged to electricity buyers in all sectors such that they no longer come from input data and instead come endogenously from the new electricity sector model. That would make sense in terms of model structure, though it might mean that model adapters have to spend some effort tweaking the electricity sector model if they want the final electricity prices to come out similar to those in a published data source like EIA AEO. (We could also give model adapters the ability to override the endogenously-calculated electricity prices and use exogenous ones from a source like AEO, though in that case, the EPS would not guarantee that the total revenue would actually cover electricity system costs.)

robbieorvis commented 1 year ago

Thanks for this, Jeff. A few comments:

  1. The 3.5-WIP calculated energy prices based on dispatch cost, not LCOE (unless you changed that), similar to how it’s done in the market. That would be energy + variable O&M costs + policy costs like a carbon price.
  2. I like the idea of integrating transmission and storage costs, though we have to be careful here that we don’t fall into the trap of imposing all these costs on renewables. I’d bet Mike, Anand, and Eric have some thoughts here. Generally speaking though, agree on the need to model transmission costs and storage where economic.
  3. Note that rates include not only generation and transmission costs, which we are trying to model here, but also distribution system costs, which capacity expansion models don't usually cover, as well as other things like taxes. One thing that might be interesting is to see what the energy and transmission costs come out to in our capacity expansion model and how that relates to the data from EIA (pretty sure they have a breakdown by cost type). Note that transmission costs include legacy transmission with cost recovery (as energy prices might). We may need to start with EIA data but then modify it based on how our model shows costs changing. It will be tricky to get rates correct, I think, though doable. FWIW, I also would love to see some representation of distribution system costs, even if simple, since we don't have it now. IEA had a very simplified way of calculating this (a function of peak demand, I think) but one that made sense and could be implemented in a straightforward way in the EPS.
robbieorvis commented 1 year ago

Also, this document very, very thoroughly documents ReEDS, including a lot of how they handle the demand side of calculations. There is a ton of useful information in here both for designing our capacity expansion model as well as how we might handle load curves and detailed demand data. https://www.nrel.gov/docs/fy21osti/78195.pdf

jrissman commented 1 year ago

Thanks, Robbie. I mis-typed when indicating energy prices are currently based on LCOEs. They are currently based on dispatch costs, as you say.

LBNL suggested just having a cost adder for transmission ($/MW) for any plant type that gets built, not just for renewables. Mike sent a resource he recommended that provided numerical data for required transmission investment $ per MW, with a range of cost options for worst case and best case scenarios, with the better end of the range likely reflecting smarter transmission planning than the worse end of the range. I was thinking about using that source, though I haven't looked at it closely beyond the description I just typed here.

Regarding storage: Endogenous storage should be deployed only when it brings down total system cost enough to cover the costs of storage itself, which I will need to work out a good method to calculate without iteratively running the dispatch allocation in the same timestep. Unlike transmission, I was not thinking of assigning any specific quantity of storage to specific plant types, but rather storage would be its own thing that gets built, like another plant type, when economic. Maybe treating it like a plant type in the allocation would be a clever way to handle it without needing to converge on an optimal storage amount through multiple allocation passes. Not assigning storage to any specific plant type avoids the erroneous assumption that RE always requires paired, dedicated storage. The tricky bit here is that need for storage is reduced by certain things like DR, which must be factored in ahead of the decision regarding how much storage to build. We can always use the prior timestep and have storage growth lag behind the plants by one year if necessary.

Your discussion of all the different types of costs that go into final electricity prices is helpful. It is likely not feasible to calculate all components ourselves from raw plant data and add them up. So we likely will want to keep using input data for electricity prices, and partition pieces of it out to cover such things as:

We probably should just try to calculate the things we need or want to calculate in the capacity expansion model and group the rest into "other". For instance, if we decide we can't get profit or distribution system costs, the remainder would be called "profit, distribution system costs, and other."

In the event our calculated price components exceed the value the user entered for the electricity prices, we need to decide what to do - either adjust the electricity price upward to at least cover our calculated cost components, or have government step in and cover the shortfall, or something else.

I will review the document you found describing ReEDS in more detail. Yesterday, I watched the ReEDS training videos and reviewed an Excel-based dispatch model LBNL made. One thing that made me happy was seeing that the way the LBNL model calculated RE curtailment was only to curtail RE when the quantity of RE times its capacity factor in that hour exceeded total electricity demand in that hour - nothing like our old flexibility point system with a calibrated curtailment curve. I think one of the big benefits of doing hourly dispatch calculations is the ability to use a simpler method to determine curtailment within each hour and to be able to ditch the difficult-to-calibrate flexibility point system entirely (which we already have done in 3.5-wip).

robbieorvis commented 1 year ago

Thanks – all sounds good! Let me know if/when you want me to review anything or do additional research.

jrissman commented 1 year ago

I've been reading the ReEDS manual and wanted to log a few thoughts here as I go through it.

ReEDS High-Level Structure

We have the benefit of having electricity demand coming from calculations in all the other sectors of our model, which are already responsive to changes in electricity pricing (and dozens of policy levers), so effectively we have a superior "demand" module that covers all economic sectors already built in.

We don't iterate to converge on an electricity price in a given year. The effects of price changes on electricity demand are delayed by one timestep (which is necessary to avoid circularity in Vensim), so we can solve for the current year in a single pass because demand in that year is fixed based on input data, policy settings, and what happened in prior years (including how the electricity system evolved up to that point). A one-year delay is not a meaningful impediment to accuracy and is much better suited to our need for fast runtime and interactive results than trying to run the entire model iteratively for each year in hopes of converging on an electricity price and demand for each individual year. In some situations, it might be more accurate with the one-year delay anyhow, because it takes time to implement new policy, to change electricity tariffs, etc.

I'm a little surprised ReEDS doesn't do it this way, because the value of iterating to converge on a price in a single year may often be low, compared to the computational runtime benefits of an approach like our one-year delay. This is especially true if their demand module only includes residential buildings. Maybe this is one reason why NREL usually doesn't run their demand module and instead uses exogenous (fixed input) data for electricity demand.

ReEDS Supply Module Cost Minimization

ReEDS' supply module determines both what to build and what to dispatch, so it's going to be the most relevant part of ReEDS for us to learn from.

In determining what to build, ReEDS uses a linear program to optimize for least cost (in each year, in each region), accounting for the following five cost components:

We have plant capital, fixed O&M, variable O&M, fuel costs, policy costs/subsidies, and as of 3.4.3, we also have financing costs, at least a simple representation. Those all play into what plant types are built in the EPS. We will need to add transmission costs as a cost adder per MW of each plant type. Regarding storage, our plan was not to have storage costs be an adder for specific plant types, but rather for storage to be endogenously built whenever it would lower systemwide costs enough to cover its own capital and operating costs. The last bullet from ReEDS is the most interesting one and something we never considered before: the idea of a shadow cost applied to "rapid capacity growth," presumably on a per-plant-type basis. We probably don't need to do that because instead of a linear program, we have a logit function that is good at building a mix of plant types and can be tuned using the logit exponent to be more or less sensitive to price differences between plant types, which determines how much it favors the cheapest plants relative to others. A logit exponent that reduces the sensitivity to price differences is similar to a price adder to constrain rapid capacity growth of any one plant type, and either one could be conceived of as representing a mixture of manufacturing, supply chain, and siting/permitting limitations.
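
For reference, the build-share logit mentioned above behaves roughly like this generic multinomial-logit sketch (not the EPS equations; the sensitivity parameter stands in for the logit exponent):

```python
import math

def logit_shares(cost_per_mw, sensitivity=0.2):
    """Multinomial-logit shares of new capacity by plant type.

    Higher `sensitivity` concentrates construction on the cheapest types;
    lower values spread builds across plant types (similar in effect to
    ReEDS' penalty on rapid growth of any one type).
    """
    cheapest = min(cost_per_mw.values())
    weights = {t: math.exp(-sensitivity * (c - cheapest))   # shift by the minimum for numerical safety
               for t, c in cost_per_mw.items()}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

# Hypothetical costs: cheaper plant types receive larger shares of new builds.
print(logit_shares({"solar": 40.0, "wind": 45.0, "gas_cc": 55.0}))
```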

ReEDS Supply Module Constraints

ReEDS optimizes for least system cost within the following constraints:

We have already thought about load balance constraints and planning reserve constraints. We generally have found that operating reserve constraints are met when planning reserve constraints are met, but I think we do currently check every hour independently to meet the reserve margin, so I think we're checking it explicitly anyway. We do have outages factored into our capacity factors for dispatchable generation, so that should address most of the "generator operating constraints" noted above. We don't currently have subregions in the electricity sector, so transmission constraints aren't relevant right now, though we might consider adding subregions with electricity trading between them if we get a clean and fast-running approach at the national scale (likely with far fewer than 8760 hourly dispatch calculations). We do factor resource constraints into hourly renewables availability. Emissions are outputs and we don't optimize around them in the electricity sector specifically, as we're not solving a linear equation here (but emissions-related taxes are handled in fuel prices). We do have a build pass for RPS/CES before a general (unconstrained) build pass.

Timeslices and subregions

In general, what we're already doing is not that far off from what ReEDS is doing, but adapted to the format of a quick-running model that doesn't iterate to converge on a solution within single timesteps, and with just one region but many more timeslices. I think we ultimately will want to reduce our 8760 hours to a more reasonable set of timeslices. ReEDS uses 17: four timeslices per season, each covering an unequal block of hours during the day (presumably the average day in that season), and a special "summer peak" timeslice that consists of the 40 highest-demand hours of afternoons in summer. We don't have to be so restricted because we only have one region and we don't iterate within a timestep, so we could use a few hundred timeslices without seeing significant runtime impacts. I think producing hourly dispatch graphs for the peak day in summer and winter, and the average day in summer and winter, would be valuable, which implies at least 96 timeslices (4 * 24). If we wanted to do average days in spring and fall, and/or an annual average, that could add potentially another 72 timeslices, for a total of 168, though I think we really don't need all of them. We should just focus on which hourly dispatch graphs are going to be most useful.

If we end up with around 100 timeslices but one region, we'll be running much faster than in today's 3.5-wip. We could consider adding some subregions with power flows between them, but we'd have to be very careful not to introduce too many subregions to keep runtime down.

Remember that introducing power flow between subregions might force us to iterate to converge on final inter-subregion power flows (including for transfer of power between subregions separated by an intermediary subregion), unless there is a particularly clever mathematical way to avoid the need to iterate. This problem is somewhat similar to famous (and difficult) problems in computer science and mathematics like the travelling salesman problem, and techniques have been developed to converge quickly on solutions through graph search (or other) algorithms, though whether they would run faster than iterating, particularly over a relatively small set of subregions, is not clear. Alternatively, it might be possible to iterate but cap total iterations at, say, 3, which might be accurate enough and keep runtime manageable.

Capacity retirements

ReEDS retires plants (except hydropower) when they reach a certain age, defined in exogenous input data. Hydro is never assumed to retire unless a specific retirement has been announced. I'm not sure how they square this with EIA data showing plants significantly older than these fixed ages are in fact still operating today - they don't want the model to retire all those in year 1. In any case, if we want to do age-based retirement, it is trivial for us to do so, because we now have annual vintages for all our capacity.

ReEDS has an optional setting (defaulting to on) for endogenous retirements. The EPS has long had endogenous retirement (based on economic factors), so it's critical we keep that capability. Here's how ReEDS does it:

When doing endogenous retirements, ReEDS is trading off the value provided to the system by the plant versus the costs incurred by keeping the plant online. If the value is not sufficient to recover the costs, ReEDS will choose to retire the plant. ReEDS includes a “retirement friction” parameter that allows a plant to stay online as long as it is recovering at least a portion of its fixed operating costs. For example, if this retirement friction parameter is set to 0.5, then a plant will only retire if it does not recover at least half of its fixed costs. Additionally, ReEDS includes a minimum retirement age for existing conventional plants of 20 years, meaning that a conventional plant is not allowed to be endogenously retired until it is at least 20 years old.
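For reference, a minimal sketch of the ReEDS rule as described above; the parameter names and the simple revenue-vs-fixed-cost comparison are illustrative assumptions, not the actual ReEDS formulation:

```python
def reeds_style_retirement(market_value, fixed_om_cost, age_years,
                           retirement_friction=0.5, min_retirement_age=20):
    """A plant is eligible to retire endogenously only if it fails to recover
    `retirement_friction` of its fixed operating costs AND is at least
    `min_retirement_age` years old."""
    recovers_enough = market_value >= retirement_friction * fixed_om_cost
    return (not recovers_enough) and (age_years >= min_retirement_age)
```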

It sounds like the factors ReEDS is considering for retirements are independent of the optimization, so it seems to me that ReEDS does retirements first, then runs the optimization to see what should be built. This is what we have done, and in 3.5-wip, the tricky part has been to determine which plants can't cover their costs, because of the multitude of factors that go into plant cost (see the Jan. 11 post in this thread). If we want to continue this approach, we'll need a much simpler way to estimate plant revenues that doesn't rely on reverse-engineering the ALLOCATE AVAILABLE() result from electricity dispatch, and trying to add capacity costs and other missing costs.

Alternatively, I wonder if there is a way to handle retirements in the same way as new capacity, after the optimization. You first do the optimization to find what the ideal system looks like. Then you retire plants so as to get closer to the ideal system, subject to some limits on retirements (as noted above). Finally, you build new plants, again to get closer to the ideal system, but perhaps reduce the new build by the amount of retirements that were prevented by retirement constraints/friction. The benefit of this approach is that it avoids the need to calculate which plants are uneconomic and want to retire for economic reasons separately from the optimization calculation. (It also means we build capacity even in the absence of unmet electricity demand, as long as the existing system has not yet reached the optimized ideal.) But limits on the maximum rate of capacity construction (and retirement) would become super important, probably the determining factor in the transformation of the electricity sector, and that's not what we want the controlling factor to be. So we might need to stick with an approach more similar to what we have now and what we had in EPS 3.4, but with improved ways to decide what plants retire and what plants are built for economic reasons alone.

Capacity construction constraints

ReEDS has the capability to limit capacity construction of particular plant types, but they default this setting to "off" and say to use it with caution. I think we can try to do without it and let any amount of capacity of a given type be built, because our logit function should help control for too much of one type getting built, similar to ReEDS' linear program. This also helps ensure that retirements can always be replaced with a reasonable-cost mix of new capacity, rather than running into a situation where retirements happen quickly, the model can't build enough low-cost plant types to replace them, and it is suddenly forced to build highly uneconomic plant types for one year. So let's also follow ReEDS in this and try it without any hard capacity construction constraints.

Closing notes

robbieorvis commented 1 year ago

Thank you for this, Jeff! It is super helpful to have this one place.

A few quick responses:

  1. On capacity penalties vs. logit, I don't think the exponents get the same effect. The exponents control the sensitivity of the model to price in determining what gets built, but a lower exponent means more of all the less competitive technologies get built, which isn't really the effect we want. Lowering the exponents might cause less solar to be built if it is the most economic, but it might result in more coal and biomass, just to pick two, being built, even though they aren't really competitive at all. We've observed this in the past. The other case is, for example, say that gas and solar are far and away the most economic plant types, but we want to restrict the growth rate of solar. If we just changed the exponents, both plant types would be reduced and other plant types increased, because the model is less sensitive to price. Conversely, if we had a penalty that changed the price of solar, the model would build less of that and the same or more gas. That's the effect we really want – not to have lots of other types of resources built. Adding in the penalties also opens some doors to transmission and interconnection reform policies that would be interesting to test. Finally, these all cover instances when the model builds for reliability, but not the instances where the model builds based solely on economics. In those instances, each plant type is going to be evaluated for its economics, so having the penalty becomes important. TL;DR, I think we should find a way to incorporate the penalty. Based on conversations with NRDC and others about the IPM model used by EPA, IPM applies penalty factors based on the growth rate relative to the prior year's installed capacity. They actually shared those values with me at one point, and I'm happy to pass those along.
  2. I completely agree on ultimately moving to time slices the way ReEDS does it.
  3. Capacity retirements are interesting. Given that we are moving to vintaging, we can probably move to using survival curves instead of retiring at a given lifetime. I evaluated real data from EIA in the past, and survival curves appeared to work extremely well for modeling age-based plant retirement. I also agree we need a mechanism to evaluate economic capacity retirement. I think the current system we have works well. I don't know how we could replicate the ReEDS system, since it relies on individual plant data and operational costs, which we don't have and are trying to avoid. We have vintages, but if the costs across vintages are close, then you could see a whole bunch of plants retiring at once, and it may not work well.
jrissman commented 1 year ago

Thanks, Robbie. This is helpful.

Your thoughts on item 1 make sense to me. I think you are right, and we'll have to think carefully about a good approach.

On item 3, we should avoid too much complexity, and the large sizes of power plants vs. vehicles make retirement curves less suitable for power plants because you can't retire small fractions of a power plant. In any event, we should mostly focus on getting economic-driven retirements right, because that's where the policy intervenes. How we handle purely age-related retirement is comparatively less important.

robbieorvis commented 1 year ago

After talking to the electricity team and reading the NEMS documentation, I've realized part of the issue with the economic retirements is that we are missing a big portion of the going-forward costs: anticipated CAPEX investments. These are separate from variable/fixed O&M, and they can be very large. NEMS has good documentation on this, and I'm going to add them as a cost to the model and incorporate them into the retirement function. I think it will make a big difference. More here: https://www.eia.gov/outlooks/aeo/nems/documentation/electricity/pdf/EMM_2022.pdf

Here is the section describing annual CAPEX:

The going-forward costs include fuel, O&M costs, and annual capital expenditures (CAPEX), which are unit-specific and based on historical data. The average annual capital additions for existing plants are $10/kW for oil and natural gas steam plants and $28/kW for nuclear plants (in 2021 dollars). We add these costs to the estimated costs at existing plants regardless of their ages. Beyond 30 years old, the retirement decision includes an additional $39/kW capital charge for nuclear plants to reflect further investment to address the impacts of aging. Age-related cost increases are attributed to capital expenditures for major repairs or retrofits, decreases in plant performance, and increases in maintenance costs to reduce the effects of aging. For wind plants, an additional aging cost of $4/kW is added beyond 30 years, rising to $8/kW beyond 40 years. These annual cost adders reflect cost recovery of major capital expenditures to replace major component parts to be able to continue operation.

In 2018, we commissioned Sargent and Lundy (S&L) to analyze historical fossil fuel O&M costs and CAPEX and to recommend updates to the EMM. The study focused particularly on whether age is a factor in the level of costs over time. S&L found that for most technologies, age is not a significant variable that influences annual costs, and in particular, capital expenditures seem to be incurred steadily over time rather than as a step increase at a certain age. Therefore, we do not model step increases in O&M costs for fossil fuel technologies. For coal plants, the report developed a regression equation for capital expenditures based on age and whether the plant had installed a flue gas desulfurization (FGD) unit. We incorporated the following equation in NEMS to assign capital expenditures for coal plants over time:

CAPEX (2017 $/kW-yr) = 16.53 + (0.126 × age in years) + (5.68 × FGD), where FGD = 1 if a plant has an FGD; zero otherwise.

For the remaining fossil fuel technologies, the module assumes no aging function. Instead, both O&M and CAPEX remain constant over time. We updated the O&M and CAPEX inputs for existing fossil fuel plants using the data set analyzed by S&L, and S&L's report describes them in more detail. We assigned costs for the EMM based on plant type and size category (three to four tiers per type), and we split plants within a size category into three cost groups to provide additional granularity for the model. We assigned plants that were not in the data sample (primarily those not reporting to the Federal Energy Regulatory Commission (FERC)) an input cost based on their sizes and the cost group that was most prevalent for their regional locations. The report found that most CAPEX spending for combined-cycle and combustion-turbine plants is associated with vendor-specified major maintenance events, generally based on factors such as the number of starts or total operating hours. S&L recommended that CAPEX for these plants be recovered as a variable cost, so we assume no separate CAPEX costs for combined-cycle or combustion-turbine plants, and we incorporate the CAPEX data into the variable O&M input cost.
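For convenience, the coal CAPEX regression quoted above can be transcribed directly; the function name and the example plant are illustrative only:

```python
def coal_capex_per_kw_yr(age_years: float, has_fgd: bool) -> float:
    """Annual going-forward CAPEX for an existing coal plant (2017 $/kW-yr),
    per the S&L regression quoted from the EIA EMM documentation above."""
    return 16.53 + 0.126 * age_years + 5.68 * (1.0 if has_fgd else 0.0)

# Example: a 40-year-old coal unit with an FGD installed.
print(coal_capex_per_kw_yr(40, True))  # 16.53 + 5.04 + 5.68 = 27.25 $/kW-yr
```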

robbieorvis commented 1 year ago

Update here: I have a working version of an endogenous retirement function. I wanted to document all the steps and insights here for future reference.

Megan and I realized that we need endogenous energy and capacity revenues to get this working correctly, otherwise the model would not be able to capture policy effects correctly. The first step to this was ensuring that energy dispatch and prices were working correctly.

Upon evaluating the model code and marginal energy prices, it was clear that this needed some work. I double checked and updated multiple sets of input data.

  1. Updated BAU Heat Rate by Electricity Fuel (there was a calculation error for natural gas combustion turbines previously that was lowering their heat rates by about 10%). This has knock-on effects of lowering market prices.

  2. Updated Normalized Standard Deviation of Dispatch Costs. There is a dataset from NREL with every generator in the US that has fixed and variable operating costs. I realized that the prior methodology for estimating the normalized standard deviation of costs was not working correctly: combining two standard deviations should cause the standard deviation to increase, but the net effect was a decrease. Instead, we now calculate unit-by-unit dispatch costs, combining the variable O&M data from ReEDS with fuel prices and heat rates from EIA's Form 923 to produce a single unit dispatch cost, and then take the standard deviation of that cost across all units of a power plant type (see the sketch after this list). The net result is higher standard deviations. In aggregate this is a positive change. However, there are some power plant types with data issues that still need tweaking, where the standard deviations are too large. The other challenge is that the data are not necessarily normally distributed, and using a normal distribution results in a set of costs that are too low. For example, natural gas peakers never have dispatch costs below $50/MWh in the dataset I'm using, but if we just take the average and a standard deviation, then a significant fraction of the units in the model fall below that value. I'm not sure there is a good workaround here other than to filter out some of the unit types in the heat rate and normalized standard deviation calculations, which should narrow the range and move the costs a little higher. We can continue to explore this in detail.

  3. As with above, I used the ReEDS generator set to update variable and fixed O&M costs for existing generators. These are quite a bit higher than what was in the Sargent and Lundy data from EIA.

  4. After talking with Michelle about the Coal Cost Crossover report, I also realized we were missing important forward costs related to CAPEX, which are not included in fixed O&M. I calculated and added those to the model as well.
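Here is the sketch referenced in item 2: combining per-unit variable O&M, heat rate, and fuel price into a single dispatch cost and taking its spread by plant type. The column names and the use of a coefficient of variation as the "normalized" standard deviation are assumptions for illustration:

```python
import pandas as pd

def dispatch_cost_stats(units: pd.DataFrame) -> pd.DataFrame:
    """Per-unit dispatch cost and its spread by plant type.

    Expects columns (names are illustrative): 'plant_type',
    'vom_usd_per_mwh', 'heat_rate_mmbtu_per_mwh', 'fuel_usd_per_mmbtu'.
    """
    units = units.copy()
    units["dispatch_cost"] = (
        units["vom_usd_per_mwh"]
        + units["heat_rate_mmbtu_per_mwh"] * units["fuel_usd_per_mmbtu"]
    )
    stats = units.groupby("plant_type")["dispatch_cost"].agg(["mean", "std"])
    # Normalized standard deviation (coefficient of variation), in the spirit
    # of the NSDoDC-style input data.
    stats["normalized_std"] = stats["std"] / stats["mean"]
    return stats
```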

There were two breakthroughs that got the rest of the structure up and running.

One is that I realized we can endogenously calculate capacity prices because we know the difference between new/existing costs and the energy market revenues. In real capacity markets, the ideal "price" is set such that capacity revenues + energy market revenues = the cost of new entry for a theoretical new gas CCGT or gas CT plant. We can therefore calculate the capacity price in a given year as the difference between the cost of new entry for a typical gas CCGT or CT (whichever is higher) and the energy market revenues, then apply that capacity price to the whole market. When doing this in the EPS, we get very realistic capacity market prices, similar to what is observed in some of today's markets.
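A minimal sketch of that capacity-price logic, assuming the relevant quantity is the larger net-of-energy-revenue shortfall for the two reference gas plants (the paragraph above could also be read as using whichever CONE is higher); names and units are illustrative:

```python
def capacity_price_per_kw_yr(cone_ccgt, cone_ct,
                             energy_revenue_ccgt, energy_revenue_ct):
    """Capacity price such that capacity + energy revenue covers the cost of
    new entry (CONE) for the reference gas plant, using the larger shortfall
    of the two reference technologies. Floored at zero: if energy revenues
    alone cover new entry, the implied capacity price is $0."""
    shortfall_ccgt = cone_ccgt - energy_revenue_ccgt
    shortfall_ct = cone_ct - energy_revenue_ct
    return max(0.0, shortfall_ccgt, shortfall_ct)
```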

The other breakthrough here is that after talking to Michelle and getting the coal cost crossover dataset, I realized we were using point estimates for revenues and costs for all power plants, but in actuality the forward operating costs cover a distribution (as it turns out, it's actually quite close to a normal distribution). Energy market prices already reflect a distribution since they are determined using least cost dispatch.

Using a distribution means that even if revenues exceed costs, some fraction of plants will likely be uneconomic and retire. Conversely, if costs exceed revenues, not 100% of plants will be retired. The resulting output is a fraction of plants retired in any given year, contingent on revenues and costs (and policy).

I also borrowed from NEMS, which assumes that only plants that are unprofitable for at least three years are retired, so there is currently a function that takes the minimum retirement fraction of the previous three years in determining what to retire.

I didn't implement it, but ReEDS (which follows a similar approach to what we do) has a "retirement friction" parameter that allows users to specify what fraction of forward costs have to be covered by market revenues in order for a plant to retire. Given that they included this, my read is that they found endogenous retirements were too large when looking just at forward costs and revenues. While I didn't build this in yet, it is easy to add and could serve as a calibrating parameter.

The new retirement function then looks at annual total energy + capacity revenues relative to the forward operating costs and, using a new normalized standard deviation of forward operating costs, determines a Z-score, (x − mean) / standard deviation. I then used the normal CDF function in Vensim to transform the Z-score into a share of total plants covered. The last step is to introduce one- and two-year delayed versions and take the minimum of the prior three years to determine the share of plants retired in a given year.
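Putting those pieces together, a rough sketch of the retirement-share calculation (illustrative names only; the actual Vensim structure differs):

```python
from statistics import NormalDist

def retirement_share(revenue_per_kw, mean_forward_cost_per_kw,
                     normalized_std_of_forward_costs, prior_two_year_shares):
    """Share of a plant type's capacity retired this year.

    With forward operating costs treated as normally distributed, the share of
    plants whose costs exceed this year's energy + capacity revenue is the
    upper tail above the revenue level. Taking the minimum with the two prior
    years' shares mimics the NEMS rule that plants must be unprofitable for
    three consecutive years before retiring.
    """
    std = normalized_std_of_forward_costs * mean_forward_cost_per_kw
    z = (revenue_per_kw - mean_forward_cost_per_kw) / std
    share_unprofitable = 1.0 - NormalDist().cdf(z)  # P(cost > revenue)
    return min(share_unprofitable, *prior_two_year_shares)
```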

A couple of other minor things: in energy dispatch there is also a reserve requirement, which requires that some fraction of energy be available for contingency events, typically about 5% (pulled from ReEDS documentation). Some markets co-optimize energy and reserves to derive a single market price. I put a temporary 5% adder to hourly energy demand for now to reflect this.

The above gets us a reasonable set of retirements, but the total capacity numbers look strange in later years because the two capacity addition mechanisms haven't yet been modified. I can start looking at those next.

I'm just finalizing some data inputs and then will push the latest commit.

robbieorvis commented 1 year ago

Posting an update here:

I've made a ton of progress on the electricity model and it's really getting into good shape. I've tested some policies and they are looking more or less reasonable, though there are plenty of things still to be worked out (see below).

Here are some updates/next steps:

  1. I got to a point where I needed to do iterative testing and the speed was becoming a problem, so I went ahead and moved from 8760 hours to timeslices. We currently use an average day for each season plus the 5 highest days in winter and summer. We can totally change this; it's just what I started with. The model looks to be working well, with very similar results to the 8760 dispatch, but now runs in about 3 seconds. If peak days are more or less similar to one another, we can just determine the number of days that qualify as "peak" and change this in the model so we apply it correctly. I currently only have 5, which is probably too low, but it's a placeholder.

  2. The capacity retirements are working quite well. There will need to be some calibration done once overall things are in better shape, which can be done through a few calibrated variables, like the share of forward costs that need to be recovered through market revenue and a boolean for whether or not electricity sources are subject to economic retirement.

  3. Economic capacity additions are working well too. However, there is still the issue of single outlier years driving too much capacity addition. For example, because of the high gas prices in 2022 and 2023, market revenues are much higher than on average, and as a result there is a lot more capacity being built than should be if we were looking over a 20 or 30 year time horizon (real capacity expansion models have "perfect foresight" so they can project revenues over a long horizon precisely for this reason). I am not sure how to handle this at the moment. One thought is to do something similar to what we do for retirements, where we take the minimum capacity added in the previous 3 years and use that value. This will cause a delay on some policies though. For example, we would expect to see an immediate effect on deployment with an investment or production tax credit, but this would introduce a several year delay. On the other hand, maybe we are okay with that tradeoff because it constrains the model from deploying too much from a single odd year. It would fail to capture anticipated future market trends, but our current approach misses that too. I'd like to see if we think there's any way at all of handling this beyond taking the minimum of the previous three years.

  4. Reliability additions are improved and working somewhat well. One current challenge is that with a single model region, we are unable to identify where regional capacity shortages are. In reality, many of the grids in the US are overbuilt and therefore should not require reliability additions, but there are some where there is not enough capacity and we would expect to see growth, for example MISO (see, e.g., https://www.nerc.com/pa/RAPA/ra/Reliability%20Assessments%20DL/NERC_LTRA_2022.pdf). This leads to another problem, which is that we chug along with no reliability additions for a while, and then all of a sudden we need tens of gigawatts of additions. This is primarily due to the inability to break down needs into regions, coupled with the rather extreme growth in demand from electric vehicles and the fact that the NREL data for hourly load factors has EVs charging overwhelmingly right around 5 PM, which happens to coincide with when the sun is setting and solar PV is coming offline. It is nice to see the model reproducing patterns and concerns we see with other models, but it also makes me wonder whether it is failing to capture variation that would arise with regional detail. I also had to redesign some of the reliability additions because some technologies have a zero or negative value for net CONE but may not be available in certain hours, which was causing odd behavior.

If we did want to move to subregions, we might be able to just stick with five or so, aligned with NERC. But we could do as many as we wanted (within reason and cognizant of runtime impacts) since we have a subregion subscript with many regions. We'd probably need to regionalize the following input data (and all the calculations): start year capacity, demand/hourly demand, capacity factors (e.g. solar will be much worse in say, New England, than Texas or Florida), and fuel prices (there is significant variation in regional fuel costs that contributes to different power system outcomes). There are almost certainly others I'm forgetting, but some variables could be constant across regions.

  1. In my rush to implement timeslices, I probably failed to correctly modify the structure for pumped hydro and batteries. We should revisit that.

  2. We still have the issue of the dispatch curves being tied to normal distributions even though the data don't follow that shape. Jeff was going to look into constant elasticities for this. If that doesn't work, I have a few other ideas, like creating some bins or just continuing to use standard deviations with some cutoffs for the high and low values. We'll miss some on the high side, but the current issue is that prices are on average far too low because we are at the low end of the distribution in most hours, which is the primary thing to fix. This will probably affect how the retirement and capacity expansion pieces play out and necessitate follow-on calibration.

  3. This is more an interesting observation than anything wrong (and confirmed by Mike), but the growth in EVs and associated charging leads to a pretty dramatic increase in the need for capacity in the 2040s and beyond. This kind of demonstrates the new model is working correctly, since this effect is observable elsewhere.

  4. We need to build out the RPS policy further for a couple of reasons. First, we need to make the improvements we made for the ZEV policy, because the RPS policy suffers from the same issue wherein a region with a lower-than-national-average clean share may pass a CES that drives adoption in that region (or state) but wouldn't if we just took a national average.

Additionally, now that we have hourly data, it is apparent that without a way to properly value clean electricity resources based on when they can dispatch, the model will fail to meet a sufficiently high CES. For example, a 100% CES adds enough resources to meet annual demand but fails to actually reach 100% because we now correctly account for hourly availability of resources (this is good - the model is working correctly!). We need a way to ensure the model can actually dispatch to meet the CES value, either by creating value for resources that can dispatch in those hours or by restricting the addition of resources that cannot.

Lastly, the RPS can drive market prices down and cause retirements of resources that are RPS qualifying, like nuclear, when these would likely be kept online given their place in the policy. In practice, RPS/CES work through the creation of a credit price. I recreated something like this in the model and applied it throughout, which helped mitigate this issue.

  1. We still need to add batteries as an option to meet peak demand for reliability, so the model will build them if economic.

  2. We still need to possibly introduce some type of cost penalty to renewables if they are added too quickly, in line with earlier conversations we had.

  3. We may want to prevent retirement of plants that are <20 years old, in line with how ReEDS and/or NEMS work. Similarly, we may consider whether or not the model should prevent plants from retiring if they are contributing to meeting reliability AND there would be a new reliability need created by their retirement.

  4. This is for later, but it would be nice if there were a way to set a time horizon for how long subsidies are available. For example, the US PTC and ITC are available for the first 10 years during which a project is online. So even though the subsidies expire in 2032 (more or less), a project built in 2031 would get ten years of subsidies. This can matter a lot for the economics of a project as compared to how we do this now. This could just be done through input data, perhaps.

  5. We still want to add transmission build out, and possibly "spur-line" costs and distribution system costs.

That's the summary of what I've done and what remains as far as I can remember. I plan to continue working on this next week unless I hear otherwise. First, I want to clean up the code and add units. Then, I plan to take a look at the RPS structure. Please let me know if you plan to work on this in the meantime.

jrissman commented 1 year ago

Just an update here that I'm making great progress on replacing the normal distribution ALLOCATE AVAILABLE with a version using three-factor Weibull curves that closely mirror the shape of power plants' actual dispatch costs. This includes the probability distribution function (PDF) curve fit itself to get the three Weibull parameters for each plant type, then using those three parameters in the Weibull cumulative distribution function (CDF) in Vensim, as part of an optimization loop that steps through 15 optimization passes to home in on the correct dispatch quantities and marginal dispatch cost.

For example, here is the Weibull curve fit for the natural gas combined cycle plants (PDF)

[Figure: WeibullPDF-NaturalGasCC]

Here is the associated Weibull CDF (area under the PDF curve):

[Figure: WeibullCDF-NaturalGasCC]

We obtain the CDF functions by curve fitting the PDF functions and finding the value of the three Weibull parameters. We then find the value for marginal dispatch cost in Vensim such that the sum of the CDF function values (across all power plant types) equals the total amount of electricity we want to dispatch. Here's a screenshot of the relevant structure:

[Figure: OptimizationStructure]

This uses a small optimization loop with a new 15-element "Optimization Pass" subscript, which replaces ALLOCATE AVAILABLE. It maximizes the odds of convergence by checking for how many times the marginal price being tested has moved in the same direction and stops reducing the step size if it moves in the same direction too often. This is good for when your initial guess at the marginal price is way off.

For instance, in the average Autumn day hour 0, the real marginal price is about $72/MWh. With these optimizations, it still converges within the available 15 optimization passes with an initial guess of $35/MWh, which is quite far off. A more reasonable guess of $65/MWh converges faster and more closely.
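For readers following along, here is a simplified sketch of the price-search idea: sum three-parameter Weibull CDFs across plant types and step the trial marginal price until dispatched energy matches demand. The step-size heuristic below is a simplified stand-in for the direction-counting logic described above, and all inputs are illustrative:

```python
import math

def weibull_cdf(x, scale, shape, loc):
    """Three-parameter Weibull CDF: share of a plant type's fleet whose
    dispatch cost is at or below price x."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-((x - loc) / scale) ** shape)

def find_marginal_price(plant_types, target_gwh, initial_guess,
                        passes=15, step=20.0):
    """Step toward the price at which dispatched energy equals demand.

    plant_types: list of (available_gwh, scale_a, shape_b, loc_c) tuples.
    Each pass moves the trial price up or down; the step is halved only after
    the direction flips, so a badly off initial guess can still cover a lot
    of ground before the search refines.
    """
    price, last_direction = initial_guess, 0
    for _ in range(passes):
        dispatched = sum(avail * weibull_cdf(price, a, b, c)
                         for avail, a, b, c in plant_types)
        direction = 1 if dispatched < target_gwh else -1
        if last_direction != 0 and direction != last_direction:
            step /= 2.0  # overshot the target, so refine the step
        price += direction * step
        last_direction = direction
    return price
```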

One important thing to note is that the marginal dispatch costs we are getting now are much higher than what we had before, because they are based on Weibull curve fits of actual dispatch cost data. We previously estimated dispatch costs by adding up variable OM and fuel costs, like this:

[Figure: OldDispatchCostMethod]

It produced extremely low values, such as $22.6/MWh for hard coal and $19.7/MWh for natural gas combined cycle. But our real world data (like the natural gas CC dispatch cost data shown in the graph above) shows that this was really far off, and actual dispatch costs for natural gas CC range from about $52 up to $180, with a peak around $70. I think this is probably the main explanation for why our energy market revenues were so low, and why we had to jump through so many hoops to try to make up the revenue elsewhere. I assume the actual dispatch costs we are curve fitting build in various other costs that plant operators need to recover, so they are more realistic than just adding up engineering estimates of variable OM and fuel costs.

The fact that the marginal dispatch cost is going to be much higher may allow us to remove some of the complexity around adding in additional revenue (in the "Estimating Capacity Prices" section), but we'll see.

But the crucial point here is that switching from normal curves in ALLOCATE AVAILABLE to Weibull curves isn't just tweaking the shape of the cost distribution. It's dramatically changing the magnitude of the dispatch costs because the real world data we're curve-fitting is much higher than our old (and likely poor) estimate of dispatch costs that we were using in ALLOCATE AVAILABLE.

robbieorvis commented 1 year ago

This is awesome! Note you can adjust the regional availability factor if prices are too high. Also, note 2021 was a pretty high year for natural gas prices, so prices may seem especially high for that year given the data.

jrissman commented 1 year ago

I've done more clean-up and am now pulling input data from a new EDWP Electricity Dispatch Weibull Parameters file. One thing to note is that I only have Weibull curves for four plant types in EDWP because the others didn't have any data on the "Combined Sheet" in the "Filtered Total Dispatch Cost ($2012/MWh)" column. (The first half-dozen tabs in that Excel file come from the old NSDoDC spreadsheet, which we won't need anymore once we finish the Weibull curves.) I'm using the natural gas combined cycle Weibull parameters as a stand-in for the missing plant types' Weibull parameters. I know we had normalized standard deviation data for all plant types, so it seems like we might have the data we need to make Weibull curves for all of them, or at least a good guess. If you have time, @mkmahajan, it would be very helpful if you could take a look at the new elec/EDWP spreadsheet this coming week and see if it's possible for you to add any new Weibull curve fits for the plant types that don't already have one.

I've tested the new Vensim code and can confirm that it reliably converges and it allocates the correct total amount of electricity to different plant types, but I haven't assessed the realism of the way it allocates electricity to plant types because we only have real Weibull parameters for four of the plant types. I also haven't slotted the new electricity dispatch quantities nor the discovered marginal cost into the calculation flow, since I wanted to get the real Weibull data in there and look at the realism of the dispatch choices first. You need not wait for me to do this, if you finish the EDWP variable and think the outputs are worth using instead of the ones from ALLOCATE AVAILABLE.

Here are some graphs of the Weibull curves I made. Note that the X and Y axes have different scales in the graphs below.

Hard Coal: [Figure: WeibullPDF-HardCoal]

NG Steam Turbine: [Figure: WeibullPDF-NGST]

NG Combined Cycle: [Figure: WeibullPDF-NGCC]

NG Peaker: [Figure: WeibullPDF-NGPeaker]

robbieorvis commented 1 year ago

Love to see all this progress! I want to flag a few things for you as you continue with development:

  1. I'm not sure if you are just using placeholder data while you build this out, but we will eventually need at least one curve parameter to be dynamic and calculated internally in the EPS. This is necessary because the dispatch costs will change as policies and other things are added in the model, e.g. fuel prices and carbon taxes. Since the mean of a Weibull distribution is a function of the three parameters, it seems like as long as two of those are held constant, you can calculate the third based on how the mean of the dispatch costs changes.
  2. I mentioned this in the earlier reply, but the dispatch costs in the data are a lot higher than what is in the EPS, even looking at averages, in part because of the higher recent gas prices. It will be interesting to see what the costs look like when the model calculates the curves dynamically using the variable O&M and fuel data (we'll continue to need these, though we probably won't need normalized standard deviations anymore, as you noted).
  3. For @mkmahajan: renewable resources shouldn't have any fuel costs, so the data should just reflect the variation in variable O&M from the ReEDS generator set. There are a few technologies that have fuel costs but aren't included in the 923 data as far as I know, like nuclear, biomass, and municipal solid waste. We may need to find other data sources or just assume a constant fuel price for those (which can feed into heat rates to get differences in the fuel portion of dispatch costs).
  4. For the starting guess of dispatch costs, that can probably also be calculated endogenously, just using the dispatch costs as calculated from one of the power plant types (maybe we use a boolean in the input data to flag which should be used). It should always be one of the dispatchable fossil fuel power plant types.

Also, just curious what kind of runtime impact you are seeing?

jrissman commented 1 year ago

Weibull parameters A and B (for scale and shape) alter the fatness of the Weibull curve and the relative thickness of the tail versus the main bump in different ways. Also, only very specific negative integer values are valid for parameter B. Changes to the average dispatch cost from things like differences in fuel prices likely shouldn't have large effects on these properties of the shape.

Weibull parameter C (location) shifts the curve left or right on the price axis. So changing parameter C in response to changes in fuel costs or other changes in average dispatch cost seems to me to be the thing we should do, rather than changing parameters A and B. (Also, if you change parameter A or B, you typically have to change the other two parameters as well to keep the shape looking sensible, and I cannot imagine how to change all three together programmatically. Parameter C is more independent and can be changed without having to recalibrate A and B.)

I suppose we might be able to calculate parameter C entirely endogenously. Or at the least, we can alter the input data's parameter C according to year-over-year changes in average dispatch cost.

Note that we already use endogenous data for the volume under the curve, which I'm calling parameter D in the spreadsheet. So we do have that endogenous linkage already. But you are right that we'll need to add an endogenous linkage for average dispatch cost as well.

jrissman commented 1 year ago

Oh, regarding runtime impact, I'm not seeing much, because I kept it down to 20 optimization passes and only about four variables. Most of the runtime impact is from the bulk of other stuff we are doing in the electricity sector. We might ultimately not need 10 peak days. We can get it working with 10, then test with fewer peak days, and see if the model still builds the same stuff.

Also, we will look through and try to optimize everything in the sector once we essentially get it working. It might be possible to squeeze some runtime out that way too.

jrissman commented 1 year ago

Also, we haven't yet deleted the code that the Weibull will replace, so we currently get runtime impact from both, but we won't in the end.

robbieorvis commented 1 year ago

That sounds like the perfect parameter to shift, since it should more or less just shift the distribution (we can make that underlying assumption for simplicity). We'll just have to figure out how to take average costs and translate them to the c parameter… there might be an equation for that if we know the a and b parameters. Anyway, sounds like we have lots of options.
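If the curve fits use the standard three-parameter Weibull form (an assumption; the EDWP fitting convention may differ, e.g. the note above about parameter B), the mean is c + a·Γ(1 + 1/b), so c can be solved directly from a target mean:

```python
import math

def location_from_mean(target_mean, scale_a, shape_b):
    """Location (c) for a standard three-parameter Weibull with the given
    mean, holding scale (a) and shape (b) fixed (assumes shape_b > 0):
        mean = c + a * Gamma(1 + 1/b)   =>   c = mean - a * Gamma(1 + 1/b)
    """
    return target_mean - scale_a * math.gamma(1.0 + 1.0 / shape_b)

# Illustration: if endogenous fuel-price changes raise a plant type's average
# dispatch cost from $70/MWh to $80/MWh with a and b fixed, c shifts up by
# exactly $10/MWh.
```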

jrissman commented 1 year ago

Megan and I made a very large amount of progress on the electricity sector today - combined with the Weibull curves from last week, we're really moving forward and making breakthroughs that will get this thing done. There probably haven't been two more productive weeks in a long time. Here are some of the highlights:


Some "to do" items for next week include:

robbieorvis commented 1 year ago

This is great to hear!

I’ll check Monday and will look for a few things in particular:

Separately, I have an idea now for how to handle the CES issue using your new optimization structure. We would create an extra credit for dispatchable clean resources and use it in the CES allocation. We can do an optimization to make sure we are able to meet the CES on an annual basis and optimize the credit value to get the right mix of resources built.

jrissman commented 1 year ago

You can tune the aggressiveness of forecast reliability construction using the coefficient. I selected 0.5 because this value narrowly prevents there from being any positive need for dispatchable resources built for reliability in any year (i.e. all reliability needs are covered just barely by stuff built using the forecast system). Thus, you should not find that it is overbuilding in the BAU case. It is okay if policies increase or decrease the need a bit, and the model has to build some for reliability in the current year, or it builds slightly more resources than needed. Real world utilities don't have perfect future foresight, even if that is a simplification some other capacity expansion models make.

It has responsiveness to policy-driven changes in the electricity system built in because any change that increases relative peak need will increase the amount of forecast build, and vice versa for decreases. It might be fine as is. But if you test it and think it isn't responsive enough to policy-driven changes in the electricity system, the first thing I would try is to have the model calculate the aggressiveness coefficient rather than keeping it set to 0.5. It should fall between 0 (no forecast building at all) and 1 (it tries to fully cover every increase in need in the current year, no matter how far we are from needing to dispatch that plant, an extraordinarily aggressive setting).
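As a hypothetical illustration of what the coefficient does (not the actual Vensim structure; names, units, and the linear form are assumptions):

```python
def forecast_reliability_build_mw(projected_increase_in_peak_need_mw,
                                  aggressiveness=0.5):
    """Capacity built ahead of need in response to a projected increase in
    peak reliability need. 0 means no forecast building; 1 means trying to
    cover the full projected increase immediately. The 0.5 default mirrors
    the calibrated value discussed above."""
    return max(0.0, aggressiveness * projected_increase_in_peak_need_mw)
```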

Since we only build dispatchable plants for reliability now, we only check the worst hour (after accounting for availability of variable resources). If that hour is covered, all hours are covered and don't need to be checked individually. We're not doing anything like what we did before that was causing overbuilding.

We haven't yet made BECF responsive to changes in the electricity system, so it is too soon to review that. I'm just highlighting that we must not have wild swings in this variable, so when we (or you) do make the electricity system affect BECF, that interaction should be based on properties of the system that change smoothly over time, such as the percent composition of the system by plant type.

We didn't look at the RPS/CES structure yet. I think the piece you indicated is missing is the need for it to take hourly availability of variable resources into account, not just their bulk annual generation potential. The credit price thing sounds like something different. I'll be happy to see what you make regarding the CES.

robbieorvis commented 1 year ago

Hi both,

I spent this morning reviewing the code updates.

It looks EXCELLENT and seems to be working very, very well, with a much-improved runtime impact. Kudos to you both.

I do have a few notes/follow-ups, which I include below. I am planning to spend the rest of the day working on the CES. To your earlier question, Jeff, the CES comments I mentioned relate to making sure there is sufficient dispatchable clean electricity to satisfy the CES on an annual basis accounting for hourly demand and supply constraints, which I plan to do by creating a new "credit" in the model and using your optimization to find the value. If it's not yet clear what I'm attempting, it soon will be.

Feedback on the current structure in order of importance:

  1. There is an issue with Weibull curve dispatch when we get to very high (~100%) shares of clean electricity. At a certain point, the system has sufficient $0 dispatch cost resources to meet demand (actually, more than enough). Because the Weibull curves for those resources aren't curves, but rather single lines, the optimization cannot find a price where supply equals demand because reducing the price does nothing, so it just settles on an extremely negative price. We'll need to address this, which will also allow us to correctly calculate curtailment. Here are a few ideas:
    • We can proportionally decrease everything when supply>demand. We'd need to add this to the optimization.
    • We can modify the Weibull parameters so that the curves can go negative for certain resources to allow for the model to identify the right cost that gets to curtailment (this is how it works in the real world).
    • We can do a hybrid, where some resources cannot be curtailed but others can.
    • Whatever we choose, the "cleanest" way to do this is to create Weibull parameters that allow for a distribution of dispatch costs for clean electricity technologies.
  2. We're still getting some spiky behavior in the cost-effectiveness additions. For example, try adding a $150 carbon price on policy schedule 1 and take a look at the cost-effectiveness additions for natural gas combined cycle units. I believe the main culprit here is a misalignment between the energy market revenues (one-year delay) and the cost per unit new elec output (current year). To address this, we could build a new "hypothetical" optimization loop looking at estimated energy market revenue for a plant built this year, by plant type. You could just replicate the current structure for the actual market dispatch and use it as an input to the cost-effectiveness calculations (using last year's capacity but current-year fuel prices, incentives, etc.). This would help get around the time delay/misalignment.
  3. Natural gas peakers are still dispatching considerably less than their real-world values. However, they appear to be dispatching similarly to how ReEDS dispatches them, though I haven't dug into the details on this yet. One issue here is that the days per electricity timeslice needs to be updated. I realized right now I have one day each for each of the peak days in summer and winter, but in reality there are many more days in summer and winter that match the peak days. We might consider getting rid of 4 of each, and just having a single peak summer and winter day, and then having more of those days over the year in "days per electricity timeslice." I played around with this a bit, and it had a material impact on the natural gas peaker capacity factors, though we are still short of the real-world data.
  4. We may want to revisit whether mandated capacity construction should be additive or should only kick in if the reliability, RPS, and cost-effective additions fall short. I would suggest this change (I think this is how you originally had it structured and I may have overwritten it). This avoids overbuilding.
  5. Should we add a smoothing time for capacity additions due to profitability or avoid it because we are already taking a minimum of the previous three years? My instinct is to add the smoothing time (e.g. three years) back in, since it is likely that plants would be built over several years, even if/when they are determined to be profitable.
  6. I would suggest renaming "Boolean is this Plant Type a Peaker" to "Boolean Can this Plant be Built for Reliability" to better convey how we are using it for international adaptation. Longer conversation here related to this variable in other countries, but it's been a bit tricky with it named this way.
  7. Just a thought here: it might now be possible to calculate the capacity price by figuring out a price needed in the cost-effectiveness optimization that gets the right level of resources built for reliability. We don't NEED to do this, but it would be a way to endogenously calculate capacity prices. However, one issue is that if capacity factors for peakers are incorrect, this greatly inflates the capacity price. For example, natural gas peaker dispatch drops to <1% after a few years, compared to maybe 8% in reality. That's roughly a 10x difference in energy market revenue in the model, and subsequently roughly a 10x capacity market price required. FWIW, I tested this out, and it currently produces capacity prices roughly 10x real values today, so correcting the capacity factor dispatch (not sure on how to do that yet though) and then calculating endogenously might work quite well.
  8. On the issue of peaker dispatch, this could be due to all the demand altering technology stuff... not sure if we have revisited that yet.
  9. More of a nice-to-have, but it might be cool to go back to using more granular hourly equipment load factors that are tied to specific equipment rather than sector-wide values. This is more important now that we have an hourly dispatch model because the different sources of demand have very different load profiles. I'd be happy to take this on and/or provide the dataset for it, since I have it open at the moment.
  10. There are a few outstanding data issues, like updating RAF Regional Availability Factor and addressing the Hourly Electricity Capacity Factors for a few of the plant types (like geothermal and solar thermal). These are in the noise for now. Likewise, we need to investigate why the model wants to build geothermal and biomass. Some data issue there, I think.

Overall though, the new structure is working so, so well. Congrats on the great progress from last week.

jrissman commented 1 year ago

Your idea to have fewer individually-profiled peak days, and to weight the remaining peak days more heavily by counting them multiple times per season, sounds like a really clever way to improve runtime and also improve accuracy. The runtime is strongly affected by the number of individually profiled days (currently 14). Weighting days more heavily by counting them multiple times per season has no runtime impact.

We could think of having a small number of "model" days that get counted for a significant fraction of each season. For example, summer low, summer mid, and summer high could be three profiled days, and each is assigned a share of all the days in summer. (If most days are roughly average except for, say, 20 peak days per season, then we could have average summer and peak summer be the only two profiled summer days, rather than three.). If spring and fall are fine with a single average because they will never drive peak system need, then we might have as few as 6 profiled days (4 averages and 2 peaks), or maybe 8 (2 lows, 4 averages, and 2 peaks).

robbieorvis commented 1 year ago

Regarding geothermal: one issue I've now identified is that we are using a geographically constrained system (just the traditional flash/binary) instead of enhanced geothermal or deep enhanced geothermal. I recommend we update CCaMC for new geothermal to use enhanced geothermal, at a minimum. I am asking a few folks about which type of enhanced plant we should use, but more generally, I think this will dramatically (and correctly) increase capital costs for new plants.

mkmahajan commented 1 year ago

Ahead of tomorrow's dev session, I took a look at the retirement code and made a few changes. Unfortunately, I didn't realize Robbie was still online this afternoon, so this wasn't built on top of his most recent commit. We'd therefore have to manually bring these changes on top of his newest CES WIP tomorrow. I've pushed my edits to a separate #232_retirments branch to keep it separate from Robbie's work, since I made a few input data changes that won't be compatible with his .mdl file. We should still look at this together tomorrow and decide whether we want to commit to this approach, but I think these changes would help us start from a cleaner place.

I had to stop here so didn't make it any further on this. However, I did see that based on the changes so far, we now see ~100 GW of cumulative coal retirements. This is fairly close to the 120 GW of coal retirements that AEO is projecting, and a lot better than before when we were getting 0 GW of economic coal retirements.

jrissman commented 1 year ago

I finished a new RPS allocation implementation that pays attention to hourly availability of resources and hourly demand. This turned out to be fantastically difficult, definitely up there with the most challenging components we've ever built in the EPS. I tried three approaches, and only the third approach worked at all.

(Screenshot: NewRPSAlloc)

The "for" loop uses a technique similar to the one we use for optimization to loop through the Hour subscript (with subranges "preceeding hour" and "current hour"), but here we're using it to step through hours and process them sequentially, relying on what we did in the previous hour to help calculate the current hour, rather than to converge on a target value.

Since I worked on this late into the night just to get it done, I haven't tested it with any non-BAU RPS settings, so it likely is not perfect yet. But I think it is an improvement over the other options we have considered: it runs fast, it is comparatively easy to comprehend, and it should be robust to differences in inputs and regions since it has no cost adders or calibrated coefficients.

Given that I spent all my time working on the RPS allocation mechanism, I didn't get a chance to touch the retirements code. If Megan, Robbie, or anyone will be working on the model this week, please work from the latest commit I left in branch #232.

robbieorvis commented 1 year ago

Thanks, Jeff! Well, I guess I'm glad it wasn't just me thinking this was an incredibly difficult thing to build!

Look forward to seeing the new structure after a bit more testing and refinement. One question: does it matter which hour you start the optimization in? I guess we'd want to be sure that choosing to start each optimization in hour 1 instead of hour 12, for example (and looping through 24 hours either way), doesn't result in significant differences in costs. They don't need to be identical so long as costs are close.

I thought about something like this for electricity dispatch, mainly whether or not to impose a ramping constraint (in the real world and in other capacity expansion models, there are limits on how quickly plants can ramp up output). What I quickly realized was that the mix of plants you get with a ramping constraint depends to a large degree on what hour you start the optimization in if you are going chronologically. I didn't bother trying to implement this given the technical requirements and also the fact that we have so much other power sector work to tackle, but the thought came up while reading about the new structure you built.
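To illustrate the start-hour dependence with ramping (purely a toy example, nothing from the model): with a ramp limit, the output a plant can deliver in each hour depends on where the chronological pass begins, so the same net-load pattern can be served differently depending on the starting hour.

```python
# Toy illustration (not EPS structure) of why the starting hour matters once
# a ramping constraint enters a chronological dispatch. A single plant with a
# 10 GW/hour ramp-up limit follows a 4-hour net-load pattern; starting the
# pass at a different hour changes how much energy it can serve.

net_load = [5.0, 35.0, 40.0, 10.0]   # GW in each hour (hypothetical)
ramp_limit = 10.0                    # max increase in output, GW per hour

def served_energy(start_hour: int) -> float:
    output = 0.0   # the plant starts each pass at zero output
    served = 0.0
    for step in range(len(net_load)):
        hour = (start_hour + step) % len(net_load)
        # Output can rise by at most ramp_limit per hour (ramp-down is
        # unconstrained in this toy example).
        output = min(net_load[hour], output + ramp_limit)
        served += output
    return served

for start in range(len(net_load)):
    print(f"Start in hour {start}: {served_energy(start):.0f} GWh served")
```

With these made-up numbers, the energy served differs depending on the starting hour, which is the order-dependence described above.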

I also wanted to follow up on a point about retirements and why I don't think using capacity factors alone will work:

  1. One of the most uneconomic plant types in the US right now is nuclear, which has needed state tax credits + funding in IIJA + tax credits in IRA to stay profitable. However, these plants, even when unprofitable, run at a constant capacity factor. They are primarily unprofitable because of their high fixed O&M and ongoing capital investment requirements coupled with ever-shrinking energy market prices. Since they have a very low dispatch cost and are also relatively inflexible, nuclear basically runs at a 95% capacity factor, all the time. Any plant type with these characteristics (very low dispatch costs, high capacity factors, and high fixed O&M/capex) would not retire under a structure that just looks at capacity factors. Conversely, the time you WOULD see these plants retire is when there are a lot of other clean energy resources online, such that there is curtailment (and even then, nuclear is unlikely to be curtailed, because it basically can't be). That's probably the time when nuclear is actually most valuable to the system, so we might get the opposite signal from what is desired.
  2. As the share of clean energy grows, we expect to see different utilization of plants, which may still retain value but run differently. Gas CCGTs are one example: they should run less in the US in the future, but they can provide a lot of value during times of ramping/flexibility need. The same is true of coal plants in China. All of our partners have consistently pointed out that they assume the coal fleet will run less but more flexibly in China (their coal plants are much newer, so perhaps it's more feasible than in the U.S.). That's not to say that none of the plants should retire if they run less, but I don't think it's a strict relationship.
  3. For the CES, we are very likely to get a growing mix of wind and solar, which will lead to curtailment of those resources as the grid is saturated. This would trigger retirements if we just looked at capacity factor.

Looking at profitability instead of capacity factors helps get around all of these issues. It will show that nuclear plants, even running at high capacity factors, are uneconomic, leading to some retirements. To the extent coal and gas plants can remain profitable as their output shrinks, it will limit retirements (though only to a certain extent). For the CES, if there is a credit, factoring that into the retirement decision could prevent retirements of clean resources. It also would help keep other resources that are clean and might dispatch less (like hydro) from retiring just based on capacity factors.

If the retirement function accounts for energy market revenue, it is to some extent already accounting for changes in capacity factor, because the excess energy market revenue will shrink as plants run less.
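As a concrete (and entirely made-up) illustration of the difference between the two screens, the sketch below compares a capacity-factor cutoff with a simple profitability test (revenues vs. fixed O&M plus ongoing capital) for three stylized 1 GW plants. Nothing here reflects EPS data or the actual retirement structure.

```python
# Illustrative comparison of a capacity-factor screen vs. a profitability
# screen for retirements. All figures are made up for stylized 1 GW plants.

plants = [
    # cf = annual capacity factor; revenue and fixed costs in $M/yr
    {"name": "nuclear",  "cf": 0.95, "revenue_musd": 290, "fixed_cost_musd": 330},
    {"name": "gas_ccgt", "cf": 0.35, "revenue_musd": 120, "fixed_cost_musd": 60},
    {"name": "coal",     "cf": 0.45, "revenue_musd": 150, "fixed_cost_musd": 170},
]

CF_FLOOR = 0.40  # hypothetical cutoff for a capacity-factor-only screen

for p in plants:
    retire_by_cf = p["cf"] < CF_FLOOR
    retire_by_profit = p["revenue_musd"] < p["fixed_cost_musd"]
    print(f"{p['name']:>8}: cf screen -> {'retire' if retire_by_cf else 'keep'}, "
          f"profit screen -> {'retire' if retire_by_profit else 'keep'}")
```

In this toy case the two screens disagree on every plant: the high-capacity-factor nuclear unit only shows up as a retirement candidate under the profitability test, while the lightly-run CCGT only shows up under the capacity-factor test.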

jrissman commented 1 year ago

> I guess we'd want to be sure that choosing to start each optimization in hour 1 instead of hour 12, for example (and looping through 24 hours either way), doesn't result in significant differences in costs. They don't need to be identical so long as costs are close.

I did think about this, and I don't think we need to care whether the cost is identical or close depending on the hour it starts in, as long as we start in an hour that results in the lowest-cost solution (or close to lowest-cost). The goal of the allocation is to pick out a low-cost solution from the infinite number of solutions that satisfy the RPS; we don't need the solutions we don't pick to also be low-cost. We're starting in hour 0, which I think is likely to be an ideal starting hour. It means no sunlight is available, so the model builds as much non-sun-requiring, RPS-qualifying capacity as it needs for that hour. In practice, this means it builds onshore wind (at least in early model years). Then, during sunlight hours, it will be able to build less solar because some of the wind built for use at night still runs in the day.

If we'd started in a sunlight hour, solar would be the lowest-cost solution, so the model would build enough solar to meet all the demand in that hour. Then, in the non-sunlight hours, it would have to build just as much wind as if it had started in hour 0, because solar doesn't contribute at night. So it would be a higher-cost solution because it would be over-building solar.

Hour 0 is actually even better than that, because it falls early in the no-sunlight period, so we get a few no-sunlight hours in a row (in case one is less windy than the others), and we can be pretty confident we have enough resources to handle the no-sunlight hours before we have to deal with sunlight hours.

If we want to test what the cost is when it starts in different hours, we can do that, but instead of looking for whether the costs are the same in every hour, we should be looking for whether there is a starting hour that results in substantially lower costs than starting in hour 0.
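If we do want to run that check, a small harness along these lines would do it. Here `allocate_rps` is a hypothetical stand-in for whatever reruns the allocation from a given starting hour and returns the total build cost; nothing like it exists in the model today, so this is a sketch of the test, not of the model.

```python
def compare_start_hours(allocate_rps, tolerance=0.02):
    """Rerun the allocation from each starting hour; flag any start that
    beats hour 0 by more than `tolerance` (as a fraction of cost)."""
    baseline_cost = allocate_rps(start_hour=0)
    cheaper_starts = []
    for hour in range(1, 24):
        cost = allocate_rps(start_hour=hour)
        if cost < baseline_cost * (1 - tolerance):
            cheaper_starts.append((hour, cost))
    return baseline_cost, cheaper_starts
```

An empty `cheaper_starts` list would mean no starting hour beats hour 0 by more than the tolerance.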

I'll need to address the retirements topic later in a different post, as I simply don't have time right now. Briefly, I think there are challenges to both approaches, and you have listed some of the challenges to a capacity factor-based approach, but we should also consider the challenges to the cost-based approach. We may ultimately end up with some sort of hybrid model. I don't know. I am sorry that I don't have time to look into this yet, as I spent all my time on Friday on the RPS (even working into the night), and there are pressing things I am being asked to do for the Industry program right now.

Note that Megan's fixes to the cost-based approach, if we want to keep them, will need to be manually re-created (ported) into the latest commit in the 232 branch. This likely would not take long. I'd recommend Megan do this before you work on the model further, Robbie, so the fixes don't end up getting orphaned and needing to be ported again. They will also be useful in evaluating the performance of the retirements code.