MESAHub / mesa

Modules for Experiments in Stellar Astrophysics
http://docs.mesastar.org
GNU Lesser General Public License v2.1

State is unclear in the `star` pointer. #328

Open · adamjermyn opened this issue 2 years ago

adamjermyn commented 2 years ago

Intro

I've been trying to understand what makes state so tricky in MESA/star. This is my attempt at expressing what's wrong, and what we might do about it.

Everything here has some caveats, the biggest of which is a 'so far as I know' that should be attached to every statement about MESA.

Current State

There are at least six different kinds of state that live in star_data:

  1. Metadata. Flags and configuration, mostly defined in star_data_step_input.inc.
  2. The 'ground truth' structure variables. E.g. xh (structure), xa (composition), mlt_vc (for TDC).
  3. Starting values. Used for evaluating time derivatives.
  4. Copies of structure variables, e.g. s%lnR is a copy of s%xh(i_lnR). Presumably this is for convenience and code clarity. Note that these are copies and not pointers.
  5. Intermediate results. For instance results of eos calls or intermediate calculations like csound and z53bar.
  6. Outputs that go in history or profiles or pgstar windows or which are used for debugging.

There are also at least three different problems we routinely hit with state:

  1. Sometimes an output becomes an intermediate result when a piece of code is written that relies on it. These may need updating at different points in time, resulting in inconsistencies like #66.
  2. Different kinds of state need to be updated at different times when star performs iterations, starts a step, finishes a step, retries, model writing/reading, etc. This was a major source of bugs in the development of TDC, and even determining when a particular piece of state gets updated is challenging/time-consuming.
  3. It is not clear which intermediate results depend on which other intermediate results, and what those dependencies look like. This makes it easy for things to get out of sync. This came up in developing eps_mdot, where at one point s%R and s%lnR got out of sync.

Proposed Solution

I am not sure of the best solution here, but the best I've come up with is to split the data inside star_data into six sub-types:

  1. meta
  2. struct
  3. start
  4. ease
  5. inter
  6. output

These correspond to the different kinds of state above.

A few examples:

These sub-types help organize the logic, at the cost of being more verbose. But for instance you could pass them directly to subroutines, so a routine that calculates outputs could be passed s%output as its argument o, and then you write to o%center_ye.
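For concreteness, a rough Fortran sketch of the shape I have in mind (the type layout and field names here are purely illustrative, not the actual star_data contents):

```fortran
module star_substructs_sketch
   implicit none
   integer, parameter :: dp = selected_real_kind(15)

   type :: output_t                 ! illustrative: the 'output' bucket
      real(dp) :: center_ye = 0d0
      real(dp) :: log_L = 0d0
   end type output_t

   type :: star_info_sketch         ! stand-in for star_info, split into sub-types
      type(output_t) :: output
      ! ... meta, struct, start, ease, inter would be further sub-types here
   end type star_info_sketch

contains

   ! a routine that only produces outputs gets handed just s% output
   subroutine compute_outputs(o)
      type(output_t), intent(inout) :: o
      o% center_ye = 0.5d0          ! placeholder for the real calculation
   end subroutine compute_outputs

end module star_substructs_sketch
```

A call site would then be `call compute_outputs(s% output)`, and it's immediately clear that the routine can only touch the output bucket.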

We can also attach rules to some of the sub-types:

  1. meta only gets touched by run_star_extras and initialization routines. Flags don't get changed by the rest of star.
  2. struct only gets modified by the solver. In some cases this just means copying an intermediate variable (e.g. there can be an mlt_vc variable that lives in inter, but the 'ground truth' will always be the version the solver writes into struct). This ensures that you know for sure that struct data was last updated at the end of the most recent iteration.
  3. start variables only get set at the start of a step or by initialization routines (including model reading).
  4. ease variables would be written once per iteration, always at the end of a solver call. This ensures that they remain just convenient places to put copies of structure variables and/or one-off derived quantities like eos outputs.
  5. inter variables would be a sort of catch-all for state we can't easily put in the other buckets. The goal should be for this to be a small category.
  6. output variables would be written once per step. They would never be read except by the profile/history/pgstar/model writing routines. We could enforce this with a derived type (see the sketch below), or else check for it with a linter.
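On the enforcement idea in (6), a sketch of how a derived type with private components could do it (again, the names are illustrative; this assumes we're willing to route writes through the module that defines the type):

```fortran
module star_output_def
   implicit none
   private
   public :: output_t, set_center_ye, get_center_ye

   integer, parameter :: dp = selected_real_kind(15)

   type :: output_t
      private                       ! components writable only inside this module
      real(dp) :: center_ye = 0d0
   end type output_t

contains

   subroutine set_center_ye(o, val) ! the once-per-step output code calls this
      type(output_t), intent(inout) :: o
      real(dp), intent(in) :: val
      o% center_ye = val
   end subroutine set_center_ye

   function get_center_ye(o) result(val)  ! history/profile/pgstar read via this
      type(output_t), intent(in) :: o
      real(dp) :: val
      val = o% center_ye
   end function get_center_ye

end module star_output_def
```

Any write to center_ye from outside that module is then a compile-time error, which is stronger than a linter check.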

I think this would make the state much more manageable, and make it a lot clearer what gets modified where.

I'd be very interested to hear what others think though!

rjfarmer commented 2 years ago

I agree state is impossible to reason about in star/. I think there's room to consolidate your proposal (maybe?), as well as plenty of bike-shedding over names :)

Each set of inputs (star_job, controls, pgstar) should just be moved to its own s% struct (star_job is already done). So that would be s%star_job, s%controls, s%pgstar. This would be mostly just to tidy things up and make it clearer what something is. That handles the 'meta' option. We can also put all the pgstar data in a s% pgstar struct.

Things like v_flag should just get moved into s% star_job as that's the place it is controlled from.

I think for the other state the better way (maybe?) is to think about things as:

  1. Things that must survive between steps (xa, xh): s% state
  2. Things that are valid only for that timestep (lnR, R, eps_mdot): s% step.
  3. Things that need saving between steps (*_start or mlt_vc_old): s%start

If step and start were the same type we could just do some pointer juggling so that at the end of the step we just move the s% step pointer to s%start and make a new s% step struct. That way s% step must always be reset for each step and start is just the last step (some stuff in start might need to be de-allocated to save memory).
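Roughly like this (a sketch; step_info_t and finish_step are placeholder names):

```fortran
module step_juggling_sketch
   implicit none
   integer, parameter :: dp = selected_real_kind(15)

   type :: step_info_t              ! placeholder; would hold lnR, eps_mdot, etc.
      real(dp), allocatable :: lnR(:)
   end type step_info_t

   type :: star_info_sketch
      type(step_info_t), pointer :: step => null()
      type(step_info_t), pointer :: start => null()
   end type star_info_sketch

contains

   subroutine finish_step(s)
      type(star_info_sketch), intent(inout) :: s
      if (associated(s% start)) deallocate(s% start)  ! drop the previous start data
      s% start => s% step           ! the step we just finished becomes the new start
      allocate(s% step)             ! fresh struct that must be filled in next step
   end subroutine finish_step

end module step_juggling_sketch
```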

For things you call output, we already have the ability to just call into the history/profile routines to get them (star_get_history_output and star_get_profile_output). And for things like center_ye, do they even need to be stored? Should we just say you always need to do a lookup to get them?

adamjermyn commented 2 years ago

I think that's a great way to consolidate. More detailed thoughts:

Each set of inputs (star_job, controls, pgstar) should just be moved to its own s% struct (star_job is already done). So that would be s%star_job, s%controls, s%pgstar. This would be mostly just to tidy things up and make it clearer what something is. That handles the 'meta' option. We can also put all the pgstar data in a s% pgstar struct.

Perfect!

If step and start were the same type we could just do some pointer juggling so that at the end of the step we just move the s% step pointer to s%start and make a new s% step struct. That way s% step must always be reset for each step and start is just the last step (some stuff in start might need to be de-allocated to save memory).

I really like this idea. I think it'll simplify things a lot. I'd be inclined to not de-allocate at first (for simplicity). If we discover there's a memory issue we can always add that down the road.

For things you call output, we already have the ability to just call into the history/profile routines to get them (star_get_history_output and star_get_profile_output). And for things like center_ye, do they even need to be stored? Should we just say you always need to do a lookup to get them?

They do not need to be stored. Let's make them all calls on the fly.

Things that are valid only for that timestep (lnR, R, eps_mdot): s% step.

I think it might be good to further subdivide these into things only valid for a step and things only valid for an iteration. For instance eps_mdot gets set in the implicit mdot loop and is only valid for a step, whereas lnR is only valid for an iteration. Thoughts?

One complication is we actually have two loops:

So in principle there are three kinds of variables we might want:

  1. Valid for the whole step (though I think all of this might end up in start and the various controls structs).
  2. Valid for a given mdot (e.g. eps_mdot).
  3. Valid for a given iteration (e.g. lnR).

But I'm not sure what the right way to handle this is...

Thoughts?

rjfarmer commented 2 years ago

In terms of the two loops, how much is actually changed in the mdot loop? I thought it was just mdot and eps_mdot? Is anything else touched?

If we assume that everything in s%start was 0 at the start of the step, then it seems simpler to just set them into s%step at the start of the step. If we have to retry the step (and get a new mdot) we would be throwing s%step away anyway.

For solver iterations do we keep any information between iterations? We could have a s%step%iter and s%step%prev_iter if we just need the previous iteration. Or have a s%step%iter() array if we need a variable number of iterations saved (saved as in we need all information not just a distilled summary).

adamjermyn commented 2 years ago

I think a fair amount gets re-evaluated each mdot loop. It's mdot and eps_mdot and element diffusion and rotation and premixing/predictive mixing, I think.

If we assume that everything in s%start was 0 at the start of the step, then it seems simpler to just set them into s%step at the start of the step. If we have to retry the step (and get a new mdot) we would be throwing s%step away anyway.

Not sure what you're saying here?

For solver iterations do we keep any information between iterations? We could have a s%step%iter and s%step%prev_iter if we just need the previous iteration. Or have a s%step%iter() array if we need a variable number of iterations saved (saved as in we need all information not just a distilled summary).

We don't keep info from prior iterations other than the current state, but there are lots of things that we set in previous iterations that need to be updated each iteration (e.g. lnR, T, ...). I'd ideally like to make that as explicit as possible so we don't have issues where variables either slip through the cracks and don't get updated, or else where they do get updated but in a monolithic set_vars call that no one understands.

So I think it'd be good to have some way of (1) gathering all the data that get updated on a per-iteration basis in one place/struct and (2) invalidating that information at the end of each iteration. I think your proposal of s%step%iter is all we need for that?

rjfarmer commented 2 years ago

Me neither; I entirely forgot that we don't just set mdot but actually take the mass off during the first loop. I also forgot about everything else that happens then, like diffusion.

We might need to think about making some sort of document that, at least in broad strokes, documents what should be set/altered when. Because I seem to already be confused about what happens when.

orlox commented 2 years ago

This is also a good opportunity to make it much clearer what is and what is not kept constant through a timestep. This way we can have s% start% D_mix, making it explicit for anyone seeing the code that this is not updated through iterations. But I'm not sure about s% step% iter; for a lot of stuff I think it would defeat the purpose to have things like 's% step% iter% lnR(k)' in the equations. In terms of the step structure I think we currently have something like:

I'm not sure if there are more post-adjustment steps besides rotation. If there aren't, then I think it is a good time to see if we can do away with the bizarre split of rotation. Other than that, I think we can work with the following (reusing some names from both of your suggestions):

I think the start and end values would be useful for cases where it is unclear why there are apparent inconsistencies in the output, particularly with mixing. In this sense start keeps a record of the quantities that were actually used in the equations while end tells you what you'd actually have if they were evaluated at the end.

adamjermyn commented 2 years ago

But I'm not sure about s% step% iter; for a lot of stuff I think it would defeat the purpose to have things like 's% step% iter% lnR(k)'

Remember that we can pass these sub-structs directly to subroutines, so we could directly pass the iter struct and then call iter%lnR.

If there aren't, then I think it is a good time to see if we can do away with the bizarre split of rotation.

I agree. Is the idea to fix up the new rotation equation with auto_diff? ;-)


Regarding your suggestion:


A new suggestion:

  1. Things set by inlists and immediate derived quantities (e.g. v_flag) go in s%star_job, s%controls, s%pgstar.
  2. Things that must survive between steps: s% state (xa, xh, mlt_vc). mlt_vc sits here because TDC needs it to survive.
  3. Starting values and things that get set once per step (before the mdot loop): s%start (e.g. lnd_start).
  4. Things that get set during the mdot loop but held fixed during iterations: s%start_iter (eps_mdot, etc.)
  5. Things that get set during iterations and are only valid for one iteration: s%iter (lnR, etc.)

The control flow is then a lot simpler:

Thoughts?

orlox commented 2 years ago

Remember that we can pass these sub-structs directly to subroutines, so we could directly pass the iter struct and then call iter%lnR.

I know that; I was bringing it up because I did not see the benefit of having that additional sublayer of information. I thought the original idea from Rob was regarding whether we want to preserve the data between iterations. I do not think we need that information.

I agree. Is the idea to fix up the new rotation equation with auto_diff? ;-)

It is, but over the last year I've become progressively bad at estimating when I'll have time for significant dev work. I would make another category for parts that right now require special treatment, but which we aim to remove completely in the future.

  • I don't understand the state/struct split. What's the conceptual notion there?

I was thinking of state containing the minimal information required to describe the model, and struct being anything derived from those quantities that is guaranteed, for the end user or throughout the majority of the private code, to be consistent with the data in state (of course, it cannot be consistent during an operation that modifies xa or xh). The choice of names in what I suggested might not be the best.

  • I'm not sure I trust 'update religiously after...'. That feels like a quick way to get back to where we are now with monolithic set_vars calls. I'd much rather split by when things get updated, which makes it easy to (1) invalidate old results and (2) check whether we've correctly updated everything at the right time.

I think a lot of the confusion about state comes from not understanding when quantities are out of sync with xa and xh. This is present in the current monolithic call. There is a multipurpose set_vars_if_needed, but we also have the subroutine set_hydro_vars, which is called in various places with specific indications of what to evaluate and what not. In my suggestion this would be replaced with a single subroutine with no options that would modify everything it is concerned with. The number of calls to such a subroutine should be very small.
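i.e. something with a shape like this (just a sketch of the interface I mean; the type names are placeholders):

```fortran
module sync_sketch
   implicit none
   integer, parameter :: dp = selected_real_kind(15)

   type :: state_t                  ! the minimal model description (xa, xh, ...)
      real(dp), allocatable :: xh(:,:), xa(:,:)
   end type state_t

   type :: derived_t                ! everything derived from state (lnR, T, eos results, ...)
      real(dp), allocatable :: lnR(:), T(:)
   end type derived_t

contains

   ! one routine, no optional switches: everything derived from xa/xh gets
   ! recomputed unconditionally, so callers never reason about what was skipped
   subroutine sync_derived_from_state(state, derived, ierr)
      type(state_t), intent(in) :: state
      type(derived_t), intent(inout) :: derived
      integer, intent(out) :: ierr
      ierr = 0
      ! ... recompute lnR, T, eos results, etc. from state% xh and state% xa
   end subroutine sync_derived_from_state

end module sync_sketch
```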

For any operation that we have previous to the iterations that modifies these fundamental properties (mass adjustment, remesh, element diffusion, rotation, split burn), shouldn't we update most of the information for it to be available for the following operation?

  • I don't think we should have s%end. If we want something available for output and it only depends on the state at the end of the step we can evaluate it on the fly and don't need to store it.

Here I was thinking of just explicitly repeating the operations that go into s% start but loading them into s% end instead (they would be the same type). It feels like a straightforward thing to do and have for debugging, but also something to shut off by default. But it's true that its uses can be very limited.


Comments on your specific suggestion:

  1. Things set by inlists and immediate derived quantities (e.g. v_flag) go in s%star_job, s%controls, s%pgstar.

Think we all agree on this one.

  1. Things that must survive between steps: s% state (xa, xh, mlt_vc). mlt_vc sits here because TDC needs it to survive.

Rather than defining this as "things that must survive" I'd rather think of it as the minimal set required to describe the state of the model. I would then add j_rot into it (though long term I want it to be just an extra part of xh). So I agree we need this, just with a different definition.

  1. Things that get set during the mdot loop but held fixed during iterations: s%start_iter (eps_mdot, etc.)
  2. Things that get set during iterations and are only valid for one iteration: s%iter (lnR, etc.)

These two are the parts where I think there can be a lot of confusion. For example, in our current situation where we have diffusion coefficients kept constant through a step (so they would end up in start_iter), we would like to make use of things that in this scheme would fall into iter, for example lnR. These quantities in iter, which are just directly derived from state, are also needed much further back in the step as well, for instance for remeshing. That's why I think it's ideal to have a set of data that we take care (religiously, as I said before) to keep in sync with the quantities in state.

So within your suggestion, I'd just change s% iter to be something that contains data derived from s% state and ensured to be kept in sync with it. This then includes it being consistent through Newton iterations.

adamjermyn commented 2 years ago

I know that; I was bringing it up because I did not see the benefit of having that additional sublayer of information. I thought the original idea from Rob was regarding whether we want to preserve the data between iterations. I do not think we need that information.

Ah totally agreed! Sorry about that.

It is, but over the last year I've become progressively bad at estimating when I'll have time for significant dev work. I would make another category for parts that right now require special treatment, but which we aim to remove completely in the future.

Ok.

I was thinking of state containing the minimal information required to describe the model, and struct being anything derived from those quantities that is guaranteed, for the end user or throughout the majority of the private code, to be consistent with the data in state (of course, it cannot be consistent during an operation that modifies xa or xh). The choice of names in what I suggested might not be the best.

I'm fine with that split existing. I think between the three of us we've pointed at two distinctions:

  1. When does this data get set?
  2. What (conceptually) is this data?

Your state/struct split is based on (2), I think Rob's suggestion was more looking at (1), and I've been back and forth between them (currently leaning towards (1))...

I think a lot of the confusion about state comes from not understanding when quantities are out of sync with xa and xh. This is present in the current monolithic call. There is a multipurpose set_vars_if_needed, but we also have the subroutine set_hydro_vars, which is called in various places with specific indications of what to evaluate and what not. In my suggestion this would be replaced with a single subroutine with no options that would modify everything it is concerned with. The number of calls to such a subroutine should be very small.

If we end up with just a single call to this routine per iteration I'm happy. But it can't become a thing that we call just to cover ourselves.

For any operation that we have previous to the iterations that modifies these fundamental properties (mass adjustment, remesh, element diffusion, rotation, split burn), shouldn't we update most of the information for it to be available for the following operation?

Yes.

Rather than defining this as "things that must survive" I'd rather think of it as the minimal set required to describe the state of the model. I would then add j_rot into it (though long term I want it to be just an extra part of xh). So I agree we need this, just with a different definition.

I like this clarification. The minimal set of things that we need to define the state is a good description. And indeed whatever rotation stuff doesn't live in xh should go here.

These two are the parts where I think there can be a lot of confusion. For example, in our current situation where we have diffusion coefficients kept constant through a step (so they would end up in start_iter), we would like to make use of things that in this scheme would fall into iter, for example lnR. These quantities in iter, which are just directly derived from state, are also needed much further back in the step as well, for instance for remeshing. That's why I think it's ideal to have a set of data that we take care (religiously, as I said before) to keep in sync with the quantities in state. So within your suggestion, I'd just change s% iter to be something that contains data derived from s% state and ensured to be kept in sync with it. This then includes it being consistent through Newton iterations.

Ok, I agree. I still want to call it iter, I think, to emphasize that it gets updated every iteration. But we can have this one get set at the outset (inside the mdot loop, before the first iteration) and then get updated once per iteration after that.

Sound good? I think we're converging...

orlox commented 2 years ago

I think we're converging too. I don't mind keeping iter as the name even if it is updated at other places. A call to update the values in iter would happen after (not written in their order within a step, just the ones that come to mind):

And as element diffusion, split burn and AM transport use substeps, they should keep things consistent between substeps (or perhaps they have an additional layer of complexity that requires a different split between start and modified values...). It's still a sizable number of variable-adjusting calls, but I think it's a reasonable amount to manage. It might be a bit wasteful to do things like repeated EOS calls that won't be used much (I guess we do that now anyway). But I'm happy to pay a reasonable performance price in exchange for certainty that things are properly synced, and that developers suffer fewer headaches from MESA state confusion.

One small detail I think we'd actually want to keep in state would be the different variable flags. Having v_flag = .true. is part of the state of the star, and the same goes for the network and the indices that indicate where in xh each thing is. I do not see v_flag (or its equivalents) as part of star_job; star_job just has various options to modify the flags that define the state of the star. If we keep all this info, then retries/redos could be made much more straightforward as well: just keep a full copy of state in state_start and revert to that. This would undo remeshing, which makes it a bit wasteful as it would repeat the same remesh process. But retrying should take a relatively minor amount of time in any reasonable run to begin with (though binary does plenty of redos as part of its implicit loop to get mass transfer rates). Simplifying the retry process can also prevent nasty photo checksum problems caused by failing to update some bits during retries. Keeping all this info in state also means that saving photos or models just requires storing the info in state.

warrickball commented 2 years ago

This is an excellent and important discussion and I think we should encourage/spread the discussion more widely within the team. I think we've learned that design choices deep within MESA gather a lot of inertia, so it's worth getting the wider perspective of those of us peering in from the outside before making drastic changes that'll see us adding more % to every variable.

rjfarmer commented 2 years ago

Yes, this certainly needs buy-in from the other developers before we start changing anything. I don't really want us to be making the changes before the release (unless it's entirely done in a separate branch that isn't merged until after we release). So there is still plenty of time for discussion and to finalize details.

adamjermyn commented 2 years ago

Fully agreed that this needs broad discussion, and that it shouldn't happen before the release (except, as Rob says, in a branch). It'll be a decent amount of work too, so I don't think anyone's going to just go off and try it lightly.

Re @orlox: That all sounds good to me. I don't think I agree with your full list though:

I agree that the rotation calculations that happen after the Newton solve will need an extra iter setting.

adamjermyn commented 2 years ago

As a summary of where we are for further discussion, I believe the current proposal is to define five sub-structs that live inside the star pointer. These would divide up the current state of star_data as follows:

  1. Things set by inlists go in s%star_job, s%controls, s%pgstar.
  2. The minimum state needed to define the model (which must survive between steps): s% state (xa, xh, mlt_vc, flags, rotation info).
  3. Starting values and things that get set once per step (before the mdot loop): s%start (e.g. lnd_start).
  4. Things that get set during the mdot loop but held fixed during iterations: s%start_iter (eps_mdot, etc.)
  5. Things that get set during iterations and are only valid for one iteration: s%iter (lnR, etc.)

The idea is that (1) is what we read from inlists, (2) is what we need to restore/set when we load/save models, have retries/redos, etc., and then (3-5) are different kinds of intermediate data that we compute at different stages in evolve, organized by when they get set/updated. This should make it very clear which data are updated when, and make it easy to ensure that everything is synchronized appropriately. This would also let us get rid of the monolithic set_vars calls (which do different things depending on various flags, and which touch lots of variables each call) and replace them with more legible routines that target specific stages of evolution.

Outputs would be calculated on-the-fly at the end of the step. Though we need to think more about items that need saving/accumulating for output purposes (e.g. num_tdc_iters).

warrickball commented 2 years ago

It would be amazing if someone could produce a flowchart for the various loops and logical constructs, something like Josiah's flowchart showing where the extra procedures in run_star_extras.f90 are called:

adamjermyn commented 2 years ago

Something like this? (Happy to iterate on this...)

(screenshot: flowchart of the loops and control flow in an evolve step)
warrickball commented 2 years ago

Thanks! This is definitely a good start for me and already raises some thoughts, some of which I could probably answer by looking at the source myself, but I'd still turn to those who've mucked around in MESA's depths in the hope of understanding what is (and should be) going on.

Anyway, given this structure, your most recent set of variable classes makes sense in that it aligns with the logic of what the code is doing.

rjfarmer commented 2 years ago

I think the relax is in its own section as we don't want retries to undo the work, while everything else in part2 needs redoing after a retry.

adamjermyn commented 2 years ago

The implicit_mdot_loop is only a loop if we set mdot implicitly.

Rotation is currently not implicit but could become so (I think @orlox is/will be working on this).

I just had a quick call with @evbauer about other things, and we talked about the question of "Does s%state get touched during Newton iterations in this proposal?" Similar questions pertain to other stages (e.g. does conv_premix alter the state variables?). I could see it going either way:

I lean towards 'no' here but it's a very slight leaning. And there could be other options apart from yes and no...

rjfarmer commented 2 years ago

I think state shouldn't get touched during a step. It should be the clean starting point. We already kind of copy state when we make the convenience arrays like lnR (at least for xh), though xa might need copying.

If I want to throw away the entire step (like in a retry) I should be able to start from just state without knowing anything other than a new dt.

adamjermyn commented 2 years ago

Ok. So then to summarize where we are:

The proposed division is:

  1. Things set by inlists go in s%star_job, s%controls, s%pgstar.
  2. The minimum state needed to define the model (which must survive between steps): s% state (xa, xh, mlt_vc, flags, rotation info). This only gets modified at the end of a step and on model/photo loading.
  3. Starting values and things that get set once per step (before the mdot loop): s%start (e.g. lnd_start).
  4. Things that get set during the mdot loop but held fixed during iterations: s%start_iter (eps_mdot, etc.)
  5. Things that get set during iterations and are only valid for one iteration: s%iter (lnR, etc.). This includes a copy of xa and xh that we can modify.

With this, the control flow of a step looks like:


A slightly cleaner (but more expensive) approach would be:

  1. Things set by inlists go in s%star_job, s%controls, s%pgstar.
  2. The minimum state needed to define the model (which must survive between steps) go in a state struct (xa, xh, mlt_vc, flags, rotation info).
  3. Starting values and things that get set once per step (before the mdot loop): s%start (e.g. lnd_start). This includes s%start%state.
  4. Things that get set during the mdot loop but held fixed during iterations: s%start_iter (eps_mdot, etc.). This includes s%start_iter%state.
  5. Things that get set during iterations and are only valid for one iteration: s%iter (lnR, etc.). This includes s%iter%state.

This requires us to hang on to more copies of the state data, but if we do that the control flow simplifies:

In this version, the state begins in start and we always have that copy to restore if we need to. It gets passed to start_iter and modified with adjust_mass, and thereafter plays the same role as something we can restore if we need to. It then gets passed into iter, which we use to iteratively build the new state in the Newton iterations. If we accept the end result and exit both loops then we can just copy this state back into start and nullify the rest of start. Otherwise we just throw out everything we just did, retain start%state, and try again.

Conceptually this lets us think about each struct as having a state part and a working part (though I wouldn't advocate making the latter its own sub-struct... that's just too much). And all that matters is that we only touch the state part in the right places, and don't rely on the working part out of scope (which we can enforce by nullifying it at the appropriate junctures).
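A rough sketch of that second scheme (all names are placeholders; intrinsic assignment of the derived types does the copying):

```fortran
module step_state_sketch
   implicit none
   integer, parameter :: dp = selected_real_kind(15)

   type :: state_t                   ! minimal model description (xa, xh, ...)
      real(dp), allocatable :: xh(:,:), xa(:,:)
   end type state_t

   type :: start_t
      type(state_t) :: state         ! untouched copy: what we restore on a retry
   end type start_t

   type :: start_iter_t
      type(state_t) :: state         ! state after adjust_mass, fixed during iterations
   end type start_iter_t

   type :: iter_t
      type(state_t) :: state         ! working copy, updated every Newton iteration
   end type iter_t

contains

   ! sketch of one step; the commented calls stand in for the real operations
   subroutine do_step(start, accepted)
      type(start_t), intent(inout) :: start
      logical, intent(out) :: accepted
      type(start_iter_t) :: start_iter
      type(iter_t) :: iter

      start_iter% state = start% state    ! copy; adjust_mass then works on this
      ! call adjust_mass(start_iter% state)
      iter% state = start_iter% state     ! working copy for the Newton solve
      ! ... Newton iterations update iter% state ...
      accepted = .true.                   ! placeholder for the convergence test

      if (accepted) then
         start% state = iter% state       ! converged model becomes the new start
      end if          ! otherwise start% state is untouched and we retry from it
   end subroutine do_step

end module step_state_sketch
```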

Thoughts?

warrickball commented 2 years ago

I'm not in a terrible rush to establish things at this level of detail but I think the second scheme is what I imagined. I don't think the memory overhead is large (at least not in my science). We're talking about arrays the size of xh plus xa (plus a few other bits), which even for 10 000 points and 100 species is of order 1 million floats, so ~8 MB, right? In my case it's more like ~3 000×25 = 75 000 floats, so not even 1 MB.

But I also don't see much difference between the two schemes except some naming differences. I don't see why we need to copy s%start_iter%state into s%iter%state. Why can't we keep working on start_iter in the Newton iterations? We need the stellar model we start with, which we won't touch, and a copy to work with. Once we've finished converging the working model, that becomes the new starting model. If we fail to converge, we start again by copying the (old) starting model to the working model and trying a smaller timestep. Both schemes above appear to handle three models, but the model being worked on only ever falls back to the starting model, not the mass-adjusted one.

The real performance problem I expect is when someone starts down this road and discovers we've deleted some array of logs or exps that were being cached for efficiency.

warrickball commented 2 years ago

My brain has had this on the backburner for a few hours and I think my confusion about naming is because I'm confusing what copies of the "star" we need with the substructure we want to use to distinguish when different variables are or should be set or safe to manipulate.

adamjermyn commented 2 years ago

We sometimes need to compute finite differences in time between the end of adjust_mass and the current value inside the Newton iteration loop. This can be done by storing a copy of the state in the start_iter struct, or by just copying the bits we need to finite difference, but somehow we need to store some data from after adjust_mass and before the Newton iterations have taken place...
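e.g. something as simple as this (a sketch; lnd is just an example variable):

```fortran
! sketch: a time derivative taken across the Newton loop, using the copy of
! the variable stored at the end of adjust_mass (start_iter) as the reference
pure function d_lnd_dt(lnd_iter, lnd_start_iter, dt) result(dval)
   implicit none
   integer, parameter :: dp = selected_real_kind(15)
   real(dp), intent(in) :: lnd_iter        ! current value inside the Newton loop
   real(dp), intent(in) :: lnd_start_iter  ! value stored right after adjust_mass
   real(dp), intent(in) :: dt              ! timestep
   real(dp) :: dval
   dval = (lnd_iter - lnd_start_iter) / dt
end function d_lnd_dt
```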

rjfarmer commented 2 years ago

Ideas for a plan of operations:

Stage 1:

Stage 2:

Stage 3:

adamjermyn commented 2 years ago

I like this plan. And just to clarify, during stage 2 we're moving things based on the intended time-at-which-it-is-touched (so that we can make that change later in stage 3), right?

rjfarmer commented 2 years ago

Yes or at least our best guess.

warrickball commented 2 years ago
  • Start moving outputs to on-the-fly calculations

I thought I'd open an issue with some examples of how this might play out. I expected some cases where something is computed deep in star and only used to write histories or profiles, but I didn't see any. The simplest cases I found were things that are used both in the history output and as timestep controls, e.g. center_degeneracy and friends, which are set in report.f90. These aren't just the value at the central cell but rather are averaged over center_avg_value_dq, defined in &controls (default = 1d-8). There are others (e.g. he_core_mass) that are likewise set somewhere else and used for both purposes.

Which all means I'm now less clear about what to do. Do we just pile these into another structure called report or something (though perhaps that clashes with the module; maybe info)? We probably at least need something to make it clear that these are set after a timestep for either output or timestep control.

I'm now disinclined to compute these on the fly because the calculations are less trivial, several things are computed together, and once we refactor both uses (output and timestep control) we'd probably end up quite close to where the code already is.

rjfarmer commented 2 years ago

When I say outputs I really mean anything that can be derived from the stellar structure, so yes, things like core masses and center/surface/min/max/avg values, but it doesn't have to be only things we output in history/profile. These should not be put in another structure. Say we were saving the mass coordinate of max_log_T: that will depend on whether you're looking at the start/end, or an intermediate point in the step.

So we need a function call that would take the mass and temperature arrays (but not s or s%id) as arguments and return the mass coordinate. That way it can be computed at any point in a step. A lot of these things probably already have a function that computes them; we just want to a) remove their dependency on s and b) remove the s% saved_version of them.
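Something like this (a sketch; the names are illustrative):

```fortran
! sketch of the kind of routine meant here: a pure function of the arrays,
! with no dependence on s or s% id, so it can be evaluated at any point in a step
pure function mass_of_max_logT(m, logT) result(m_max)
   implicit none
   integer, parameter :: dp = selected_real_kind(15)
   real(dp), intent(in) :: m(:)       ! mass coordinate of each cell
   real(dp), intent(in) :: logT(:)    ! log10 temperature of each cell
   real(dp) :: m_max
   m_max = m(maxloc(logT, dim=1))
end function mass_of_max_logT
```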

rjfarmer commented 2 years ago

Look for instance at the problems we had in #81 with delta_nu/nu_max. Is there really any need for something like delta_nu/nu_max to be computed at the start of a run? No, but it is/was, because it ended up in some monolithic report call that gets called in places no one expected it to be called.

So wouldn't it be nicer if delta_nu was just computed as and when (and even if) someone actually needed it, where we can be more certain that its inputs have actually been set at that time?

warrickball commented 2 years ago

Agree. Some of the cases I came across are already most of the way there: there's a function to evaluate them but it only gets called once before the report/choose timestep ops between steps.

rjfarmer commented 2 years ago

Bikeshedding time: if we move controls into its own derived type, what do we call it (knowing that it will be everywhere):

  1. s% controls
  2. s% control
  3. s% con
  4. s% cnt
  5. s% c
  6. something else?
adamjermyn commented 2 years ago

I’d vote for (1,2,3). But I tend to prefer longer descriptive variable names.


evbauer commented 2 years ago

Ok, this is for sure bikeshedding, but my preference would be s% cont or s% ctrl. I'd like it to be reasonably short, but s% con seems pretty unclear and s% cnt would seem to refer to a 'count', so I think cont or ctrl is about as short as it can be while maintaining clarity.

warrickball commented 2 years ago

Another option might be s% ctl but I'll vote for s% ctrl or s% ctrls, for which there's precedent through star/private/ctrls_io.f90.

earlbellinger commented 2 years ago

I also like s% ctrl

wmwolf commented 2 years ago

Another vote for s% ctrl from me, but with no strong feelings.

rjfarmer commented 2 years ago

Looks like s% ctrl won

rjfarmer commented 2 years ago

Something to think about for moving "outputs" into their own functions: how are users going to access them in their run_star_extras?

  1. We make the functions public. We'd probably want a new file in star/public, as star_lib.f90 is already unmanageable.
  2. We add the functions as attributes in the derived types.

So for example, say we want the he_core_mass; it becomes either:

  1. call he_core_mass(inputs)
  2. s% state% he_core_mass(inputs) or s% iter% he_core_mass(inputs)

I'm leaning towards option 2 as it will be clear where the he_core_mass is being computed from. We could make a convenience version in s (s% he_core_mass()) that users can use when they don't really care exactly when it's computed (so it would be from xa/xh at the start of the step). With 2 we could hide a lot of the internal details of how it's actually done if it's a method inside s. The only downside with 2 is that we may have a lot of code repetition (depending on how we structure the arguments).
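A sketch of what option 2 could look like with a type-bound procedure (the argument is made up purely for illustration, and the real calculation would live elsewhere):

```fortran
module state_methods_sketch
   implicit none
   integer, parameter :: dp = selected_real_kind(15)

   type :: state_t
      real(dp), allocatable :: xa(:,:)   ! abundances
      real(dp), allocatable :: dq(:)     ! cell mass fractions
   contains
      procedure :: he_core_mass          ! enables s% state% he_core_mass(...)
   end type state_t

contains

   ! sketch only: shows the calling convention compared with a bare public function
   function he_core_mass(this, min_boundary_fraction) result(mass)
      class(state_t), intent(in) :: this
      real(dp), intent(in) :: min_boundary_fraction  ! made-up control, illustration only
      real(dp) :: mass
      mass = 0d0
      ! ... scan this% xa and this% dq to locate the he core boundary ...
   end function he_core_mass

end module state_methods_sketch
```

Then run_star_extras could do something like `m_he = s% state% he_core_mass(...)` or `s% iter% he_core_mass(...)`, which makes it explicit which copy of the data the number comes from.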

adamjermyn commented 2 years ago

I'd vote for approach (2). And in particular (as we just talked about in person) I think it'd be good to have some room for inputs so that e.g. iter doesn't need to itself contain a pointer to ctrl.

I don't think we need to have code repetition to make this work. We can have internal routines in e.g. star_utils that implement the actual calculation, and then wrappers for reporting those results out versus for internal use, which lets us keep those interfaces separate.

adamjermyn commented 1 year ago

Oh absolutely ctrl!
