Closed: cjosey closed this issue 8 years ago.
I'm generally supportive of your proposed changes, @cjosey. As we discussed in person, moving to a true Nuclide data structure that is independent of temperature (but may still contain temperature-dependent data, e.g. cross sections for the nuclide at different temperatures) is an important first step that will streamline the integration of the various data processing methods currently under development.
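To make that concrete, here is a minimal sketch of what such a container could look like (the class and field names are hypothetical, not OpenMC API):

from dataclasses import dataclass, field

@dataclass
class Nuclide:
    """Hypothetical container: one object per nuclide (e.g. U-238),
    holding data from every source rather than one ACE table."""
    name: str
    # Pointwise cross sections keyed by temperature in Kelvin, so
    # multiple evaluations of the same nuclide can coexist.
    ace_xs: dict = field(default_factory=dict)
    # Windowed-multipole data (temperature-independent poles), if any.
    multipole: object = None
    # Thermal scattering S(a,b) tables, also keyed by temperature.
    sab: dict = field(default_factory=dict)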
The proposed `data_sources.xml` file seems like it would enable simplified input in a fairly straightforward way. A few questions:

- What happens if `data_sources.xml` points to multiple sets of the same type of data at a single temperature? Require the user to specify the extension of the specific library via the `xs` tag?
- Can one specify `exact ace` for all cases in `settings.xml` and then, for a single nuclide in `materials.xml`, give e.g. a multipole `xs` tag? It seems so, and that would be useful.

In response to your questions:
- Data is selected according to which `<data_source>` tag is used. It will error out if no data is available. This may change if a general temperature-sensitive library that works for all nuclides becomes available. Right now, however, using temperature-dependent data requires a bit of "expert judgement" and will require deliberate action to enable.
- Perhaps we could add a `name` tag to `data_source` (for example `<ACE name="ENDF-B/VII.1" />`), such that I could have a `use_library="ENDF-B/VII.1"` tag in `settings.xml`? If a nuclide doesn't exist in said library, it would error out, but the temperature lookup would still work as anticipated.
- `xs` overrides any other tag, but I'd need to think of a clean way to handle the fact that the multipole library only contains 2 of the 5 components.

A few thoughts I had as I was reading the thread here:
Consider what the `<provides>` block is really telling you.

I agree that temperatures ought to be assigned to cells, but each cell should also have its own nuclide list and number densities independently. The question comes down to this: which of the following four use cases are most important?
I just thought going on a per-material basis would make 1, 3, and 4 user-friendly, and would itself be easy to implement and backwards compatible. Making a cell tag backwards compatible would be much harder to implement, as far as I could tell. Further, a proper TH simulation would require thermal mechanics, which would itself require separate materials for each cell anyway. I'm open to suggestions.
While it is true that the temperatures in any given library aren't uniform (the very strange `t16_2003` library being a good example), I wrote it this way because there are no OpenMC functions that make it easy to build the doubly-keyed dictionaries needed to scan `cross_sections.xml`. I do not have the programming skill to implement such a structure, so I would probably do a brute-force search of `Listings`. Of course, such a structure is still essential for the `sab` data, I realize, so something will have to be done regardless, which makes the contents of the `<provides>` section redundant. The Python scripts that generate `cross_sections.xml` would have to be updated with S(a,b) temperatures, though. At least as of the version that generated mine, the S(a,b) temperatures were just populated with zeros.
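For what it's worth, the doubly-keyed structure is only a few lines of Python. A minimal sketch, assuming `cross_sections.xml` entries are `<ace_table>` elements with `zaid` and `temperature` (kT in MeV) attributes; the attribute names are assumptions, not the actual schema:

from collections import defaultdict
import xml.etree.ElementTree as ET

K_BOLTZMANN = 8.6173324e-11  # MeV/K, matching the value used in the code

def build_listing_index(xml_path):
    """Scan cross_sections.xml once and build a doubly-keyed lookup:
    index[zaid][temperature_in_K] -> position in the Listings array."""
    index = defaultdict(dict)
    root = ET.parse(xml_path).getroot()
    for i, table in enumerate(root.iter('ace_table')):
        zaid = table.get('zaid')
        kT = float(table.get('temperature', '0'))  # stored as kT in MeV
        index[zaid][kT / K_BOLTZMANN] = i          # key on Kelvin
    return index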
As for breaking apart the data, this is a manpower issue. If you want a uniform MIT library to publish, the best-case scenario is finding someone with at least one year of free time and sufficient insanity to consider that free time well wasted on replacing NJOY. A more realistic time frame for something like an OpenXS code would be around 3-5 years. Nobody has any interest in funding such a project (NJOY already exists, plus the general anti-FLOSS sentiment in the nuclear community), so it would have to come from the coordination of PhD students' free time. As such, I do not believe building the infrastructure now would be wasted, as we are so far away from any other solution.
There are two other options if you don't believe the libraries are ready for incorporation. It could be implemented much like resonance upscattering (essentially hidden away, with minimal or no user interface, such that only people who know it exists can use it, since it requires special data), or I could maintain a forked version of OpenMC designed only to operate on fully coupled problems, with no user interface or backwards compatibility for simpler problems.
I strongly support the simple implementation as a hidden feature, so as to avoid holding up some of our current projects (Sterling, Sam, and Matt need this feature in the very near future). I agree that we would ideally want temperature on the cell, but in the short term we can live with it on materials. We should, however, not invest too many resources in restructuring until we have a bigger plan in place. One more topic for discussion when Paul visits in December!
Could we just have the user declare all of the XS data sources for each nuclide, and then infer from that what kind of temperature dependence they want? Like:
<!-- This will give you a nuclide using one ACE table at one temperature; the old-fashioned way -->
<nuclide name="U-238" ao="1.0" xs="71c" />
<!-- This will give you interpolation between ACE files (if we implement such a thing) -->
<nuclide name="U-238" ao="1.0" xs="71c 72c 73c" />
<!-- This will give you multipole w/ ACE for the angular -->
<nuclide name="U-238" ao="1.0" xs="mit01 71c" />
I also vote for having a temperature tag on the cells. In the case where we have TH simulations w/o depletion, this will allow us to reuse materials so we don't have to duplicate lists of nuclides.
<!-- Cell with one material that has no temperature-dependent data -->
<cell id="1" surfaces="-1" material="1" />
<!-- Cell with one material, one temperature -->
<cell id="1" surfaces="-1" material="1" temperature="600" />
<!-- Cell with one material, many temperatures -->
<cell id="1" surfaces="-1" material="1">
<distributed_temperatures>600 630 620</distributed_temperatures>
</cell>
<!-- Cell with many materials, many temperatures -->
<cell id="1" surfaces="-1" >
<distributed_materials>1 2 3</distributed_materials>
<distributed_temperatures>600 630 620</distributed_temperatures>
</cell>
@smharper I actually really like that concept. I imagine it would get unwieldy if you try to specify a temperature for each of the tens of thousands of fuel pins, though. What if we specified temperatures similarly to lattices? I'm not really sure how to pull it off, though...
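One hypothetical shape for that, borrowing the grid layout of a <lattice> definition (none of these tags exist; purely illustrative):

<!-- Hypothetical: temperatures laid out on the same grid as lattice 5 -->
<temperature_map lattice="5" units="K">
  600 605 610
  615 620 625
  630 635 640
</temperature_map>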
@bforget I understand the sentiment, but I'd rather give this a few more days of discussion. If we come up with something that can be done reasonably fast, I'd much prefer to implement it. My reasoning is that if we don't have something that can be, at the very least, automated, my MP implementation will eat other students' time just in the cumulative cost of fiddling pieces about. Further, no one has exactly been clamoring at me, demanding it now. If I were the holdup, they should've mentioned it to me. :P
I thought some more, and realized that we are much closer to a unified library than I expected. I had originally thought we'd have to write a whole code and merge all these components, but in fact, speaking with people, the components will exist piecemeal pretty much by the end of the semester. If I can get everyone to output in HDF5, then we can concatenate files and have a library. The only component that is needed but not completed (or very close to completed) is angular distributions, and my understanding is that that part is fairly trivial (unless we derive it quantum-mechanically, which I'm not sure any published data even does, given the myriad of caveats).
I still think it would be valuable to individually select which components to use, though. For example, say it's 3 years down the road and the MIT library has all 5 components, but angular Doppler broadening is slow. Would you not want an option to switch it off and use another library, to see if your particular problem even needs angular broadening? It's a bit contrived, but it is a legitimate use case that corresponds to our current need to piecemeal the data.
The next thing is that I agree a "no depletion, TH" problem would be best served by specifying temperatures on a per-cell basis instead of duplicating materials everywhere. With around 400 nuclides, you're looking at roughly 3 kiB per cell, and there can be hundreds of thousands of cells. A TH problem would then quickly run out of RAM without domain decomposition. Sure, I'd argue "you're not doing it right", but I can see where the concern comes from.
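Rough numbers, assuming one 8-byte number density per nuclide: 400 × 8 B ≈ 3.2 kB per duplicated material, so 5 × 10^5 cells with per-cell materials is already ~1.6 GB per process for compositions alone, before tallies or the nuclear data itself.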
So, based on the discussion so far, `data_sources.xml` could probably be simplified to:

<sources>
  <data name="ENDF/B-VII.1, MCNP6.1">
    <type>ace</type>
    <location>endf71x/cross_sections.xml</location>
    <provides rrr="true" urr="true" fast="true" angular="true" sab="true"/>
  </data>
  <data name="MIT v0.0.1">
    <type>multipole</type>
    <location>multipole/cross_sections.xml</location>
    <provides rrr="true" fast="true"/>
  </data>
</sources>
(`provides` might be further subsumed into `type`), and we could scan the `cross_sections.xml` file. The Python generators would have to fully specify temperatures for S(a,b), and we may want to add a units tag to `cross_sections.xml`. I'll need some help figuring out a good way to find the `Listings` index given a ZAID and a kT.
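One possible approach, sketched under the same assumptions as the `build_listing_index` snippet above: convert the requested kT to Kelvin and accept the nearest stored temperature within a small tolerance:

K_BOLTZMANN = 8.6173324e-11  # MeV/K

def find_listing(index, zaid, kT, tol=0.1):
    """Return the Listings index for (zaid, kT), matching the stored
    temperature to within +/- tol Kelvin. `index` is the nested dict
    index[zaid][temperature_K] built while scanning cross_sections.xml."""
    T = kT / K_BOLTZMANN
    temps = index[zaid]  # KeyError here means the nuclide is absent entirely
    nearest = min(temps, key=lambda t: abs(t - T))
    if abs(nearest - T) > tol:
        raise LookupError("no data for ZAID %s within %g K of %.2f K"
                          % (zaid, tol, T))
    return temps[nearest]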
Now for input files. Is it a reasonable assessment to say that the following list is the set of requirements?
I'll think about it some more. There's no need to rush this.
An argument in favor of a coordinated piecemeal integration of data processing capabilities:
The 5 components mentioned (Sab, RRR, URR, fast, secondary angular) don't constitute a complete processed library. At a minimum, for neutrons, we'd need to consider secondary energy and correlated angle-energy distributions. Looking forward, we also may have to deal with covariances. And this is not mentioning photons or charged particles. So, in truth, any course we take is going to result in a piecemeal implementation. As long as there's decent coordination, and everyone developing the major data processing components signs off on the framework, piecemeal implementation should work fine. And if it helps people get work done in the meantime, all the better.
Regarding the ability to pick and choose different components from different libraries, I don't see a tremendous advantage in having complete flexibility. Where possible, we should try to use the same underlying data, and then control how it is processed via the physics models requested by the user, rather than dipping into a different library. Taking the example of broadened angular distributions - which will not be computationally prohibitive - we simply start from a single set of 0K data, and then perform/disable broadening at the request of the user.
Regarding requirements:
The decisions on how we integrate all these data capabilities will take some careful thought. If everyone is ok with it, maybe we ought to postpone some of the discussion until December when I make it out to Boston. @cjosey It sounds like you are not too rushed, but please chime in if you think there are aspects that we should be making decisions on earlier than December.
Also, @cjosey I'm happy to help figure out an appropriate data structure for listing look-ups. You basically want to give a tuple of (nuclide, temperature) and get the listings index, yeah?
@cjosey would you consider this Issue closed by #712, #720 and others?
Yup, pretty much. There are a few more algorithms we need to merge once they're published, but the framework's all there now.
This issue is to discuss how to progress with the ability to handle temperature dependence in OpenMC. I spent many days debating and tinkering with code to see what would be most minimally invasive. I propose the following:
First, material inputs will be modified. Specifying the `xs='80c'`, etc. tag will no longer be required. For each material, a temperature can be specified. It will be set up so that specifying a precise library (092238.80c) takes precedence over specifying a temperature, which takes precedence over specifying a `default_xs`. The format will be similar to density:

<temperature units="K" value="293.6"/>
<temperature units="MeV" value="2.5301e-08"/>
Then, to facilitate this, a new input file will be needed. It will occupy a station similar to `cross_sections.xml`: a file that is generated once and used in the background. I have written up a draft of one. The idea is that once we know a) what nuclides are needed, and b) what temperatures we have, we investigate this table to see whether we have a dataset that provides a full cover over all nuclide/temperature combinations across the 5 components of a library (s(a,b), rrr, urr, fast, angular).
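A sketch of that cover check (names invented for illustration, and glossing over the fact that S(a,b) only applies to certain nuclides):

COMPONENTS = ('sab', 'rrr', 'urr', 'fast', 'angular')

def full_cover(required, sources):
    """required: set of (nuclide, temperature) pairs used in the model.
    sources: list of dicts like
        {'provides': {'rrr', 'fast'}, 'has': set of (nuclide, T) pairs}.
    True only if every component of every required pair is supplied
    by at least one source."""
    return all(
        any(comp in s['provides'] and pair in s['has'] for s in sources)
        for pair in required
        for comp in COMPONENTS)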
Of course, not all future libraries can provide a complete cover. So, I suggest one last tag in `settings.xml`. If one wanted to use windowed multipole once I finalize it, it would look something like the sketch below; of course, not everyone cares, or will bother, so by default, if not specified, the code will interpret it as pointing every component at the default source.
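A hypothetical form for that tag (illustrative only; none of this syntax is final), pairing the multipole library with ACE data for the components multipole lacks:

<data_source rrr="MIT v0.0.1" fast="MIT v0.0.1"
             urr="ENDF/B-VII.1, MCNP6.1"
             angular="ENDF/B-VII.1, MCNP6.1"
             sab="ENDF/B-VII.1, MCNP6.1"/>

The default would then presumably be the same element with all five components pointing at the ACE library.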
Further alternatives could include `MCNP6.1` for the polynomial data if someone wanted to implement that, `interpolate method="sqrt-sqrt"`, `interpolate method="boltzmann interpolation"`, etc.

Finally, data will be reordered such that a `Nuclide` contains everything for, say, U-238 from all data sources.

There are two usability issues. One, people will need to generate a `data_sources.xml` (easy to automate using data from LA-UR-13-21822 or equivalent). Two, 293.6 K is not 2.5301e-08 MeV but 2.53004879e-08 MeV, using the value of `K_BOLTZMANN` in the code, so `"exact ace"` may require matching in Kelvin instead of MeV. Of course, adding a check for equality to within +/- 0.1 K would be easy.

Thoughts? This covers nearly all the use cases (currently available and in-progress research) I can think of, interferes as little as possible, removes the need to constantly look up which S(a,b) table is which tag, and (based on some test implementations) should be easy to implement in a short period of time. It is, however, essential that I get other nuclear data researchers' input before I carry onward.
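For concreteness, a quick check of that mismatch using the `K_BOLTZMANN` value above (a scratch calculation, not OpenMC code):

K_BOLTZMANN = 8.6173324e-11   # MeV/K, the value used in the code
kT_file = 2.5301e-08          # MeV, as ACE libraries list for "293.6 K"
print(kT_file / K_BOLTZMANN)              # ~293.606 K, not exactly 293.6
print(293.6 * K_BOLTZMANN)                # 2.53004879e-08 MeV
print(abs(kT_file / K_BOLTZMANN - 293.6) < 0.1)  # True: tolerance check passes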
Edit: Unmangled title.