ProjectTorreyPines / FUSE.jl

FUsion Synthesis Engine
https://fuse.help/
Apache License 2.0

Negative triangularity reactor study #552

Open orso82 opened 5 months ago

orso82 commented 5 months ago

Requirements

orso82 commented 5 months ago

2023 DIII-D PAC NT part.pdf

orso82 commented 5 months ago

Discussing with @kathreen8, we agreed that to compare PT and NT FPP designs we must add a risk assessment to FUSE. Right now we use betan and q95 as proxies for risk, but that's very limited. Assessing risk directly will allow us to compare designs that utilize wildly different concepts and technologies.

Once done, we'll be able to run a 3-objective optimization using:

  1. total daily energy output (to compare pulsed vs. fully non-inductive designs)
  2. capital cost (like we do already)
  3. risk

The starting point for risk should be the work presented by @daveweisberg at 2022 APS. I am not sure if there has been more progress on risk assessment since. weisberg_poster_v2 (2).pdf

@adrianaghiozzi you're the best person to take on the development of an ActorRisk in FUSE. As we already discussed, this would dovetail perfectly with your work on costing and is a great topic for APS.

Discussing with @TimSlendebroek about how to express loss severity, we think it should be expressed in $, just like an insurance company would ;)

For risk we could use a similar data structure organization as we do for costing. Something quite coarse and generic to start, like this?

IMASDD.risk
├─ plasma
│  ├─ risk # as expression (sum over plasma risks)
│  └─ loss[:]
│     ├─ name
│     ├─ method # loss method
│     ├─ severity # loss severity in $M
│     ├─ probability # loss probability [0-1]
│     └─ risk # as expression (severity * probability)
├─ technology
│  ├─ risk # as expression (sum over technology risks)
│  └─ loss[:]
│     ├─ name
│     ├─ method # loss method
│     ├─ severity # loss severity in $M
│     ├─ probability # loss probability [0-1]
│     └─ risk # as expression (severity * probability)
└─ total_risk # as expression (sum of plasma + technology risks)
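The tree above could be mirrored by a few plain Julia types. This is only a sketch of the proposed organization, not FUSE or IMASDD API (the `Loss`, `RiskCategory`, `Risk`, and `total_risk` names are assumptions): each loss carries a severity in $M and a probability, and `risk` rolls up as severity × probability at every level.

```julia
# Hypothetical sketch of the proposed risk structure (names are assumptions).
struct Loss
    name::String
    method::String       # loss method
    severity::Float64    # loss severity in $M
    probability::Float64 # loss probability [0-1]
end

# risk of a single loss: severity * probability (expected loss in $M)
risk(loss::Loss) = loss.severity * loss.probability

struct RiskCategory
    loss::Vector{Loss}
end

# category risk: sum over its losses
risk(cat::RiskCategory) = sum(risk.(cat.loss))

struct Risk
    plasma::RiskCategory
    technology::RiskCategory
end

# total risk: sum of plasma + technology risks
total_risk(r::Risk) = risk(r.plasma) + risk(r.technology)

# Illustrative numbers only: a 100 $M disruption loss at 5% probability,
# and a 50 $M HTS supply-chain loss at 20% probability
plasma = RiskCategory([Loss("disruption", "expert opinion", 100.0, 0.05)])
tech = RiskCategory([Loss("HTS supply", "historical data", 50.0, 0.2)])
total_risk(Risk(plasma, tech))  # 5.0 + 10.0 = 15.0 $M expected loss
```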

bclyons12 commented 5 months ago

I feel like risk is going to be very difficult to quantify in an automated, realistic way. We're going to write some risk function that takes a hundred disparate things and synthesizes them down to a single number. Everything is going to go into that risk function, which will necessarily be subjective. How do you equate the risk of a disruption to the risk that HTS won't get cheaper? Even if you assign those numbers, do you add them linearly? Quadratically? Perhaps there's literature on doing this, and I know @daveweisberg has knowledge, but it seems challenging.

To me, a better strategy might be to perform separate optimization runs with certain "discrete" choices fixed. For example, you could imagine doing four runs with {PT, NT} ⨂ {LTS, HTS}. Even looking at the Pareto fronts of those various choices would be interesting, but you could take individual solutions on the Pareto front at fixed capital cost or fixed betan or whatever and compare those from a risk standpoint more holistically.
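The "compare Pareto fronts from separate runs" idea boils down to a non-dominated filter applied per run. A minimal sketch (not FUSE API; the `dominates` and `pareto_front` helpers and the design points are illustrative assumptions), with all objectives minimized:

```julia
# a dominates b if a is no worse in every objective and strictly better in one
dominates(a, b) = all(a .<= b) && any(a .< b)

# keep only the points not dominated by any other point
pareto_front(points) = [p for p in points if !any(q -> dominates(q, p), points)]

# Designs as (capital cost, risk) tuples: one set per run (e.g. PT and NT),
# filtered separately so the two fronts can be overlaid on one plot.
pt_designs = [(5.0, 2.0), (4.0, 3.0), (6.0, 1.5), (5.5, 2.5)]
nt_designs = [(5.5, 1.0), (6.5, 0.8), (6.0, 2.0)]
pareto_front(pt_designs)  # [(5.0, 2.0), (4.0, 3.0), (6.0, 1.5)]
```

As long as the objectives are the same in every run, the per-run fronts can also be pooled and filtered again to recover a combined front.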

kathreen8 commented 5 months ago

If it is easier we can focus on certain aspects. Importantly, we need to quantify the impact of ELMs (or their absence), heat flux, damage to materials, etc.

orso82 commented 5 months ago

@bclyons12 you can choose to make the subjective call about one risk vs. another either before you run the sims, or after you run the sims... but at some point you must make it! Of course we could make each individual risk-related parameter an objective in our optimization (that's what we do now). As you add more objectives, you'll quickly get to a point where most simulations are on the multi-dimensional Pareto front. Besides the explosion in computational complexity (which grows exponentially with the number of objectives), this will make interpretation difficult and subjective. The same is true if you decide to run a combinatorial number of simulations, as you were proposing.

I agree assigning risks is difficult, but it must be done. Like we do for costing... it's imperfect, I am sure, but it allows us to compare different designs on the same footing and encode the best guess for everyone to use. Imagine not having a costing model, suggesting {PT, NT} ⨂ {LTS, HTS} runs, and letting people use their gut feeling for what is more cost effective... that wouldn't fly.

A costing model is what keeps designs from becoming infinitely big, and a risk metric is what keeps designs grounded in reality.

To your question:

Loss severity should include financial loss due to physical damage, down-time, safety incidents, reputational damage, ... all expressed in $. Probability of loss can be based on historical data, expert opinion, or predictive models. In general, the loss should reflect the worst-case scenario to ensure we capture the full potential risk.

When we have this in place, we can always allow a weighted sum of risks so that users can give more importance to one thing vs. another.
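That weighted sum is a one-liner. A hypothetical sketch (the `weighted_risk` helper and the numbers are assumptions, not FUSE code):

```julia
# Weighted aggregation of per-category expected losses: users supply weights
# to express their own priorities across risk categories.
weighted_risk(risks::Dict{Symbol,Float64}, weights::Dict{Symbol,Float64}) =
    sum(weights[k] * v for (k, v) in risks)

risks = Dict(:plasma => 5.0, :technology => 10.0)   # expected losses in $M
weights = Dict(:plasma => 2.0, :technology => 1.0)  # plasma risk counts double
weighted_risk(risks, weights)  # 2*5 + 1*10 = 20.0
```

With unit weights this reduces to the plain `total_risk` sum, so the weighted form is a strict generalization.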

TimSlendebroek commented 5 months ago

> If it is easier we can focus on certain aspects. Importantly, we need to quantify the impact of ELMs (or their absence), heat flux, damage to materials, etc.

Yeah, I am also in favor of focusing on those aspects, @kathreen8. Otherwise NT will only be relevant in the low-betaN corner of the minimize-cost × minimize-betaN × maximize-q95 space.

Example of this in the latest multi-objective optimization: [image]

bclyons12 commented 5 months ago

@TimSlendebroek I think the idea is that with a properly defined risk metric, you might get lower-risk NT solutions at higher betaN, presumably at higher cost than the PT solution. As @orso82 was saying yesterday, you can "buy safety".

bclyons12 commented 5 months ago

One interesting thing about doing the runs separately is that you could compare the Pareto fronts, like this mockup: [screenshot]

TimSlendebroek commented 5 months ago

> @TimSlendebroek I think the idea is that with a properly defined risk metric, you might get lower-risk NT solutions at higher betaN, presumably at higher cost than the PT solution. As @orso82 was saying yesterday, you can "buy safety".

Lower risk in some ways, but perhaps higher risk in others: you are pretty much stuck with HTS only for NT, so there are tradeoffs.

TimSlendebroek commented 5 months ago

I do like running two separate runs for NT and PT as you suggest, and comparing the curves with that plot!

orso82 commented 5 months ago

@bclyons12 you have a point about being able to compare Pareto fronts by running multiple separate optimizations.

bclyons12 commented 5 months ago

@orso82 Presumably you could then collect all the runs together to get a total Pareto front. You can still define a risk function, and as long as it's the same for each run we don't lose any information.

Minimizing risk is a great idea, no doubt. The details of how you define it are going to determine everything about the result, so we just need to be aware of that. It would be great if we could add error bars to the "cost" of various risks.

orso82 commented 4 months ago

Prospects of negative triangularity tokamak for advanced steady-state confinement of fusion plasmas in MHD stability consideration

https://www.sciencedirect.com/science/article/pii/S2772828524000165