openmc-dev / openmc

OpenMC Monte Carlo Code
https://docs.openmc.org

Input/output changes for next minor release #458

Closed: paulromano closed this issue 5 years ago

paulromano commented 9 years ago

There are quite a few ways in which user input/output may change over time as new developments are added to the code. I think it would be good for us to start thinking about some of these changes and try to group them together as much as possible to limit the number of times we potentially disrupt users. To that end, I'm going to put up for discussion my own wishlist of changes to input/output and invite others to do the same.

  1. Extension of <cell surfaces="..." /> notation to include Boolean operations like union and difference. Further discussion in #60. (A rough sketch of what this could look like appears after this list.)
  2. Changes related to depletion
    1. Ability to run multiple steps, each with its own flux calculation. This is required for depletion, but it could be useful elsewhere, and the step itself could be generalized so that, effectively, the user defines the run sequence.
    2. Ability to specify a material as depletable.
  3. Support for multiple particle types (source, tallies, particle production).
  4. Probably many changes to tallies due to #214
  5. Specify temperature for a material/cell. Cell probably makes more sense since temperature is not an intrinsic property of a material.
  6. For lattices and meshes, we should really use the term "shape" instead of "dimension", which is consistent with both Fortran (which has a shape() intrinsic) and NumPy (whose arrays have a shape attribute).
  7. Rename statepoint to simply "state". The "point" is a little superfluous.
  8. In statepoint files (or state files) and summary files, move away from using arbitrary constants for things like the run mode and instead just use strings. That should make it a little easier to maintain.
  9. Don't write a separate dataset with the size of arrays in HDF5 files if there is no need.
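
As an illustration of item 1, here is a rough sketch of what Boolean region combinations look like in the Python API that eventually grew out of this discussion; the operators and the ZCylinder constructor reflect the present-day openmc package and are illustrative only, not the syntax being decided in #60:

  import openmc

  fuel_or = openmc.ZCylinder(r=0.39)
  clad_ir = openmc.ZCylinder(r=0.40)

  fuel_region = -fuel_or                # negative half-space: inside the cylinder
  gap_region  = +fuel_or & -clad_ir     # intersection (difference from the fuel)
  either      = -fuel_or | +clad_ir     # union of two half-spaces
  outside_gap = ~gap_region             # complement of a composite region

  gap_cell = openmc.Cell(region=gap_region)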

I'll keep adding more things here as I think of them. Some of these changes I'd like to target for 0.8 if others are in agreement.

cjosey commented 9 years ago

I have been planning to move forward on point 5 reasonably soon. Now that the HDF5 stuff is stabilizing and the final windowed multipole library is being generated (ETA 1.5 weeks for roughly 310 nuclides processed from ENDF71), I had planned on redoing my modifications to bring in windowed multipole and formally opening a PR for it. This would require having temperature data available. I currently have it on the material, the reasoning being that by specifying libraries (81c, 26t, etc.) we already indicate a temperature in materials.xml, but this is far from optimal. A better long-term solution is to instead indicate only nuclides and atom densities in materials.xml, temperatures in geometry.xml, and the temperature-library mapping either in cross_sections.xml or somewhere else.
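
For concreteness, a minimal sketch of the two placements being weighed (temperature on the material versus on the cell), using the temperature attributes of the present-day Python API; the attribute names are illustrative of the idea rather than the design being proposed here:

  import openmc

  fuel = openmc.Material(name='Fuel')
  fuel.add_nuclide('U235', 5.58e-4)
  fuel.add_nuclide('U238', 2.24e-2)
  fuel.set_density('g/cm3', 10.3)
  fuel.temperature = 900.0          # option A: temperature attached to the material

  fuel_cell = openmc.Cell(fill=fuel)
  fuel_cell.temperature = 900.0     # option B: temperature attached to the cell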

As this work gets closer to completion, I'll be making an "in progress" PR, as I need to discuss with everyone what to do about the 2500-line Faddeeva.cc/Faddeeva.c anyway.

paulromano commented 9 years ago

@cjosey Looking forward to your PR when you get to it.

Another one for tallies -- ability to specify arbitrary reactions by name, e.g. (n,gamma) instead of 102.
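
A sketch of how that might read on the tally side, using the reaction-name scores the Python API now accepts (illustrative only):

  import openmc

  tally = openmc.Tally(name='capture rate')
  tally.scores = ['(n,gamma)']   # by reaction name rather than MT number 102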

wbinventor commented 9 years ago

I like all of the changes you've enumerated here @paulromano. In particular, item 9 would be nice for trimming the statepoint files down to the bare essentials now that HDF5 is the standard.

smharper commented 9 years ago

I think we should eventually remove ids from our IO and completely replace them with name strings (see discussion on #372).

wbinventor commented 9 years ago

I'm not totally opposed to that idea, but we should consider how we might accommodate users who may wish to give the same name to multiple entities (e.g., tallies) for one reason or another.


cjosey commented 9 years ago

I thought some more about point 5. Instead of one solitary way to input temperature/material/density, I think there should be two. The first would be the "user-friendly" version, which is similar to what we have now with the addition of a temperature flag somewhere. I'm not sure where that would be best, but honestly, all the "simple" simulations I've seen at most segregate the problem into "water" and "not water". I would suggest using this form to set initial conditions in coupled simulations and for running uncoupled simulations.

The second is a form that, honestly, should only ever be generated automatically. Before a simulation, the geometry is loaded (through OpenCSG, perhaps?) and each unique cell is instantiated with the "user-friendly" input. It is then output as some sort of HDF5 file that can be manipulated by a TH code beforehand, by a boron search to critical during the run, or by a depletion code afterwards. This file gets read in lieu of materials.xml, but if it's not present, the code generates one for you.

Thoughts? The big issue is that there is such a vast range of complexities, but very few people need the middle ground between "the fuel is 600K and the water 550K" and "every cell is unique in temperature".
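
A purely illustrative sketch (not an existing OpenMC format) of the kind of per-cell state file described above, written with h5py so that a TH code, a criticality search, or a depletion code could edit it between transport solves:

  import h5py
  import numpy as np

  with h5py.File('cell_states.h5', 'w') as f:
      g = f.create_group('cells/10')                 # one group per unique cell
      g.attrs['material'] = 'Fuel'
      g.attrs['temperature'] = 900.0                 # K
      g.create_dataset('nuclides', data=np.array([b'U235', b'U238']))
      g.create_dataset('atom_densities', data=np.array([5.58e-4, 2.24e-2]))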

wbinventor commented 9 years ago

Didn't @nhorelik and @dereklax already implement something along the lines of the HDF5 capability for distributed temperatures?


cjosey commented 9 years ago

They probably have, but both the distribmats and distribtallies branches are gigantic modifications that haven't (apparently) been maintained in a long time. Getting the Doppler broadening rejection sampling stuff working (a large orphaned modification of approximately the same vintage) took 4 days of hand-merging, and that was entirely on files and an algorithm I was very familiar with. Within 3 weeks, another patch (to tallies) modified components in ways that would have required a total rewrite to support rejection sampling again anyway. I imagine the situation is quite similar with those patches.

As such, I'll probably hold off on the multipole stuff until a course of action on temperatures is finalized. The library should be finished in a day or so, with a grand total of 312-316 isotopes processed. 291 are finished right now.

mellis13 commented 9 years ago

@nhorelik had put together a PR on his own repo for closedmc that had distribmats and domain decomposition functionality. This was current as of roughly 1.5 months ago. For those with access to his repo via mit-crpg, it is the PR called "Helper PR in prep for DD merge to develop". In that branch, distributed compositions and densities were implemented. I made some comments on that PR, but @nhorelik just didn't have time to finish it up. It was in good shape, just missing some tests at the time.

I reimplemented a distribtemperature feature that @dereklax had coded up a long time ago. This was merged in with multipole and I've been maintaining this functionality for my own research work. I don't have a strong opinion on whether to put temperature with distribcell or distribmat (assuming that distribmat will even exist in the future), but I went with distribmat because I didn't want to start including physics information in the geometry file. At the time it seemed cleaner that way.

Anyway, I just wanted to point out that it might not take that long to update and merge some or all of @nhorelik's distribmat work if that is a feature we want in the code.

cjosey commented 9 years ago

Ah, that explains it. I don't have access to that repo, and all the stuff I can access on mit-crpg's closedmc branch hasn't been touched since last November. A mere 1.5-month delta doesn't sound too bad. Either way, I'll hold back until the APIs are stable, or at least until the direction we want is well decided.

smharper commented 9 years ago

RE point number 2: I'd like to suggest an alternative where we don't implement multi-step solves in OpenMC itself, and we instead improve our API (Python for sure, maybe Fortran as well) so that codes outside of OpenMC can handle the multi-step abstraction. I think in the long run, it will make our jobs easier and allow us to be more flexible.

... I find it hard to back up that statement without writing a whole essay, but here's the main point: Rather than one monolithic code that does everything, I prefer a small library of interconnected codes that each focus on their own chunk of physics or level of abstraction. I think that such a system reduces bloat in input syntax and documentation, is easier to test and validate, makes it easier for developers to focus on their field of expertise, makes it easier for users to tune the software to their application, etc.

Of course, I'm coming from the perspective of the MOOSE ecosystem (and I hope that I can one day use OpenMC in that ecosystem) so maybe one of you guys with a different background will have different feelings.

paulromano commented 9 years ago

@smharper I agree in principle with what you're saying, although in the absence of a concrete proposal for how that would be done, I have difficulty imagining it myself. Let's say OpenMC were a library, call it libopenmc, that only performed static particle transport simulations, and that there were a separate library for depletion, libdeplete. Let's also assume that we extend our Python API so that the multiple steps needed for depletion could be performed from there. The question in my mind is -- what would an API that allows us to transfer data in memory from one library to the other look like (both for libopenmc and libdeplete)? The "in memory" part is where I start to stumble; if everything were transferred via files, then OK, the Python API is essentially just a glorified wrapper around different executables (a la MCODE, MOCUP, MONTEBURNS).

The other method I can envision is to have a library libdeplete that is called directly from within OpenMC. If we were to do that, it seems like OpenMC would necessarily then have to handle the multi-step abstraction. This is akin to the new ORIGEN API which is being used in a number of codes.

I do think that either way, if depletion is "part" of OpenMC, it ought to be constructed in a library fashion so that it is reusable by other codes. CMake makes it exceedingly easy -- whatever source files constitute the "depletion" part are added to a library via add_library, and then that library is linked with OpenMC.
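
For what it's worth, the split described here is roughly what the later openmc.deplete module ended up providing from Python; the sketch below borrows those present-day names (Model, CoupledOperator, PredictorIntegrator) purely to illustrate the library arrangement, not as the design under discussion:

  import openmc
  import openmc.deplete

  model = openmc.Model.from_xml()        # the static transport problem (libopenmc side)
  op = openmc.deplete.CoupledOperator(model, chain_file='chain.xml')
  integrator = openmc.deplete.PredictorIntegrator(
      op, timesteps=[30.0, 30.0, 30.0], power=1.0e4, timestep_units='d')
  integrator.integrate()                 # the multi-step loop lives outside transport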

wbinventor commented 9 years ago

Philosophically, I agree with @smharper on this point of abstraction. However, I think this raises the more important question of whether the development of OpenMC is, or should be, user-centric or developer-centric. Based on what I've seen, I think the abstraction is very much focused on what developers would "theoretically" like from a software standpoint. Perhaps I'd be assuming too much to say that most users couldn't care less how beautiful the delegation of roles is between the libraries used by the application, as long as 1) installation is super simple and 2) the inputs to the code are seamless and hide whatever layers of abstraction there may be.

I certainly agree with @paulromano (and I think @smharper) that the depletion engine developed for OpenMC should be reusable. But I'm not convinced that following this philosophy for software development to a T in OpenMC is sustainable for a (largely) student-developed and student-supported code. Without a lot of careful thought as to the software design, the extra "degrees of freedom" for development permitted by a system of interconnected codes would make it much more difficult for the annual influx of new novice student developers to climb the learning curve and code something useful. In an ideal world I would agree with @smharper completely, but we don't live in such a world, so I would be a hesitant supporter of this approach.

smharper commented 9 years ago

@wbinventor, @paulromano, thanks for entertaining my idea and hammering it out with me. Even if we don't agree on details, it looks like we all agree on philosophy. That's pretty awesome. To answer @paulromano's question of what an API would look like, imagine I was writing my own depletion code. Here's sort of what I would want it to look like:

program pin_deplete
  use openmc, only: MaterialsKernel, Material, GeometryKernel, Cell, Surface, &
                    TransportKernel, TallyKernel
  implicit none

  type(MaterialsKernel) :: mat_kern
  type(GeometryKernel)  :: geo_kern
  type(TransportKernel) :: xport_kern
  type(TallyKernel)     :: tally_kern

  type(Material), pointer :: mat
  class(Surface), pointer :: surf
  type(Cell),     pointer :: fuel_c
  type(Cell),     pointer :: gap_c
  type(Cell),     pointer :: clad_c
  type(Cell),     pointer :: mod_c

  integer :: i, n_inactive
  logical :: not_done, tallies_not_converged
  real(8) :: rxn_rate, old_density, new_density, delta_t, volume

  mat => mat_kern % add_material('Fuel')
  call mat % set_density(10.3, 'g/cm3')
  call mat % add_nuclide('U-235', 'ao', 5.58e-4)
  call mat % add_nuclide('U-238', 'ao', 2.24e-2)
  ! More materials stuff goes here...

  fuel_c => geo_kern % add_cell('Fuel')
  call fuel_c % set_fill(mat_kern % get_material('Fuel'))
  gap_c => geo_kern % add_cell('Fuel-Cladding Gap')
  ! More cells go here...

  surf => geo_kern % add_zcylinder(0, 0, 0.39)
  call fuel_c % add_surface(surf, -1)
  call gap_c % add_surface(surf, +1)

  surf => geo_kern % add_zcylinder(0, 0, 0.40)
  call gap_c % add_surface(surf, -1)
  call clad_c % add_surface(surf, +1)
  ! More surfaces go here...

  ! We need to setup transport and tally stuff too.

  ! Main loop
  do while (not_done)
    ! (Re)initialize kernels
    call mat_kern % initialize()
    call geo_kern % initialize()
    call xport_kern % initialize()
    call tally_kern % initialize()

    ! Run a static OpenMC simulation
    do i = 1, n_inactive
      call xport_kern % run_inactive_batch
    end do
    do while (tallies_not_converged)
      call xport_kern % run_active_batch
      ! Check convergence criteria
    end do

    ! Adjust materials
    rxn_rate = tally_kern % get_tally('No Eout') % get_score('absorption') &
         % get_mean()
    old_density = mat_kern % get_material('Fuel') % get_nuclide('U-235') &
         % get_density()
    new_density = old_density - rxn_rate * delta_t / volume

    call mat_kern % remove_material('Fuel')
    mat => mat_kern % add_material('Fuel')
    call mat % add_nuclide('U-235', 'ao', new_density)
    ! More material stuff
  end do
end program pin_deplete

I sketched this out in Fortran, but I think it would be just as easy to write in C++, which would in turn make it possible to write the same thing in Python. In the above example, we have a depletion code calling OpenMC, but it could just as easily have been another application code calling both OpenMC and OpenDeplete.

@wbinventor, I see your point, but I think it's important to keep in mind who our users are. As best I can tell, the primary users of OpenMC today are, well... us (the folks at MIT CRPG and ANL MCS). So maybe developer-centric and user-centric are the same thing for now. Furthermore, I think our target group of users are the students and researchers who are interested in methods development (users who are also developers, I guess). And abstraction is very important to those users.

There is something of a middle ground here. We could implement a multi-step solver directly in OpenMC, but force it to use internal APIs like I sketched above. That way, another application linking to OpenMC could just short-circuit our multi-step stuff and go directly to the underlying static solver.
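
A sketch of that short-circuit usage from Python, driving repeated static solves with the existing Model.run() and StatePoint interfaces; the material-update step is left schematic:

  import openmc

  model = openmc.Model.from_xml()        # existing geometry/materials/settings
  for step in range(3):
      sp_file = model.run()              # one static transport solve
      with openmc.StatePoint(sp_file) as sp:
          # read reaction rates here and update model.materials for the next step
          pass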

paulromano commented 9 years ago

@smharper Thanks for the sketch, and I think it's helping me to get a little more comfortable with the argument that you could do it in C++. It would mean essentially that we would need to mirror what's in the Python API (as you've implicitly suggested in your code) in Fortran, via procedures that are C interoperable. When I was originally thinking about this, I was in the mindset of having an external code create all these complicated data structures which would need to be passed to Fortran, but as long as you have OpenMC handle the complicated data structures in memory and simply provide handles to those data structures via an API, you certainly could make OpenMC behave more library-like.

Regarding ANL/MIT being the primary target, that is true today, but I think it's prudent to assume that that may not be true in the future, if OpenMC is to have any sort of longevity. I also don't think that the goals of having a user-friendly and developer-friendly code are mutually exclusive, and as such we shouldn't view it in that light. The "middle ground" that is suggested is user- and developer-friendly. A user who doesn't care, or doesn't have the patience/knowledge, to link together multiple codes can use the built-in multi-step depletion solver, and an advanced developer can use the public API to do whatever they wish.

wbinventor commented 9 years ago

I've recently been using some interesting algorithms to create complex geometries for OpenMC (in particular, region differentiation in OpenCG) and have come across an issue/question related to this thread. Right now the Python API will allow a user to attach the same Cell to multiple Universe objects in a CSG tree. However, such a CSG tree cannot be adequately represented using the current XML inputs. The reason is that there is no explicit "universe" XML element; rather, universes are implicitly defined as an attribute on each cell.

Frankly, I have always thought this to be a bit awkward and wonder if there is a good reason for it that I am unaware of. If not, I would put in my two cents for a new "universe" XML element which includes an attribute explicitly enumerating its cell IDs. In that case, the "cell" XML elements would no longer need to implicitly define universes with a "universe" attribute.

I'm sure there may be a good reason to define cells/universes using the current scheme that I am overlooking. But I will list a few reasons that I can see to move to the standard proposed above:

Some potential disadvantages of moving to explicit universes might include the following:

paulromano commented 9 years ago

@wbinventor The current convention was adopted following in others' footsteps (namely MCNP and Serpent). I agree that having a <universe> tag is more explicit and, as you point out, gives more flexibility. I can't think of anything that would immediately go wrong if we were to allow cells to exist in multiple universes. find_cell should still work fine. I was thinking maybe neighbor searches would go awry, but looking at it again I think they should also work fine. There is already a guarantee in find_cell that if a list of cells to search is passed, only those in the same universe as the particle are used. I'd have to think about distribcells more... it's not immediately obvious to me whether that would require changes or not.

Do you want to go ahead and propose a syntax? And how do you want to handle the root universe? For the sake of simplicity, it might be good to assign cells to the root universe if they exist nowhere else so that a user can create a geometry without necessarily having to worry about universes.

wbinventor commented 9 years ago

@paulromano thanks for the feedback on the explicit universe idea. I thought that it might have simply been a legacy implementation based on MCNP/Serpent.

I've always wondered if there was a good reason for defining cells / universes in this way since it seems so counter-intuitive to me. The ray tracing schemes all traverse the CSG tree with universes pointing to cells below them in the tree, yet the inputs are defined with cells pointing to universes higher in the tree. I don't want to be so presumptuous as to say that this is a poor design of an input standard for a tree data structure, so I'll just repeat that perhaps there is a good reason for this that I have been overlooking all along :-)

Since it seems that both of us are overlooking any potential good reasons to stick with implicit universes, here is a rough proposal for a new input standard. The current XML example input file "examples/xml/lattice/simple/geometry.xml" defines cells and universes as follows:

  <cell id="1" fill="5" region="1 -2 3 -4" />
  <cell id="101" universe="1" material="1" region="-5" />
  <cell id="102" universe="1" material="2" region="5" />
  <cell id="201" universe="2" material="1" region="-6" />
  <cell id="202" universe="2" material="2" region="6" />
  <cell id="301" universe="3" material="1" region="-7" />
  <cell id="302" universe="3" material="2" region="7" />

I propose that we should remove the "universe" attribute from the "cell" element and instead create a new "universe" element with a "cells" attribute as follows:

  <cell id="1" fill="5" region="1 -2 3 -4" />
  <cell id="101" material="1" region="-5" />
  <cell id="102" material="2" region="5" />
  <cell id="201" material="1" region="-6" />
  <cell id="202" material="2" region="6" />
  <cell id="301" material="1" region="-7" />
  <cell id="302" material="2" region="7" />

  <universe id="1" cells="101 102"/>
  <universe id="2" cells="201 202"/>
  <universe id="3" cells="301 302"/>

Alternatively, I would slightly prefer something which retains a bit more of the hierarchical structure of the underlying tree, like this:

  <cell id="1" fill="5" region="1 -2 3 -4" />

  <universe id="1">
      <cell id="101" material="1" region="-5" />
      <cell id="102" material="2" region="5" />
  </universe>

  <universe id="2"/>
      <cell id="201" material="1" region="-6" />
      <cell id="202" material="2" region="6" />
  </universe>

  <universe id="3"/>
      <cell id="301" material="1" region="-7" />
      <cell id="302" material="2" region="7" />
  </universe>

As for the root universe, I personally like inputs that are explicit (at the expense of some additional verbosity) and hence transparent. However, I can see a new standard where the "geometry" XML element effectively acts as a proxy for the root universe (i.e., the only thing the "Geometry" class points to in OpenMC/MOC/CG is the root universe) such that the input parser assigns cells without an explicit universe to the root universe.

The Python API would require only a very trivial few-line change to accommodate any new standard we adopt. I do think that the distribcell algorithm would require some changes based on my review of the algorithm. But I don't think it would be too difficult; in particular, I think the offset array would simply need to be moved from cells to universes in "geometry.F90" (which is similarly the case in OpenMOC). Although I'm not confident enough to be held to this initial assessment, I can say that if anyone is willing to move the input parser / ray tracing to use explicit universes, I will pick up the slack and make distribcells work for the new standard.
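
For reference, the explicit relationship already exists on the Python side, which is why the change there is small; a minimal sketch using the current class names (regions omitted for brevity):

  import openmc

  c101 = openmc.Cell(cell_id=101)
  c102 = openmc.Cell(cell_id=102)
  u1 = openmc.Universe(universe_id=1, cells=[c101, c102])   # universe owns its cells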

paulromano commented 9 years ago

I also thought about the second example you have (<cell> nested under <universe> explicitly); how would you handle cells that are in multiple universes though?

wbinventor commented 9 years ago

Haha, oops, you caught me dreaming ideas that simply won't work :-) Alright, I'll stand by my first proposal but not the second. To be frank, I'm personally much less interested in how the XML looks (my second proposal wins that category I think) than in the flexibility and correctness of the data structures it defines.

wbinventor commented 9 years ago

@paulromano what do you think of the first option I proposed? How difficult do you think it would be to implement this in the XML input interface, as well as in the ray tracing code?

paulromano commented 9 years ago

I don't think it would be very difficult. I could probably take a stab at it in the next week or two and see how it looks.

wbinventor commented 9 years ago

Although this issue is a bit of a bottleneck on my current work, I'm a bit wary about wading too deep into input / ray tracing code myself just yet. So if you can find the time to look into this, it would be extremely helpful - I'll definitely owe you a few beers when you visit Boston (or if not) :-) As mentioned before, I can commit to migrating the Python API and distribcell algorithm over to any new standard we define, since all of these are integral to my research (and increasingly for others, at least in our group).

wbinventor commented 9 years ago

@paulromano - I was just curious whether you still think the "shared cell-per-universe" idea I proposed a few weeks ago is a reasonable one. I understand if you don't have time in the near term to implement this. But knowing whether or not it will happen at some point affects the strategy I must take for my current work with OpenCG / OpenMOC and large-scale heterogeneous MGXS libraries moving forward.

paulromano commented 9 years ago

@wbinventor Sorry for delay on this -- I do still think it is a reasonable, if not good, idea. I did start taking a stab at implementing but it turned out to be not quite as simple as I was expecting, and then I got busy with other things. Once 0.7.1 is out the door and I find some time, I'll see what I can do.

wbinventor commented 9 years ago

No problem @paulromano, I def understand it is a back-burner kind of issue. Let's discuss when you visit this week so you understand my motivation for the change - which may not be a good enough reason to make the switch!

wbinventor commented 8 years ago

Any thoughts on making "summary.h5" an output by default? I recently had a conversation related to this with @bforget. In particular, he had an issue querying the StatePoint for tallies by string name. This is only possible if one first "links" the Summary with the StatePoint, since all string names are stored only in the "summary.h5" file rather than redundantly in the "statepoint.*.h5" files. Although all of our examples use the Summary, it isn't explicitly stated that one must use it in order to take advantage of string names. Since the "summary.h5" files are so small, and since HDF5 is now a required dependency, I'd personally opt to make "summary.h5" a default output from a simulation rather than an optional one in order to avoid confusion like this in the future.
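
For anyone hitting the same issue, the linking step looks roughly like this with the current Python API (file and tally names are illustrative, and the autolink keyword is assumed):

  import openmc

  sp = openmc.StatePoint('statepoint.100.h5', autolink=False)
  summary = openmc.Summary('summary.h5')
  sp.link_with_summary(summary)

  tally = sp.get_tally(name='fission rates')   # name-based lookup now works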

paulromano commented 8 years ago

@wbinventor @bforget :+1: for summary.h5 by default. I was actually going to suggest that myself.

nelsonag commented 8 years ago

@paulromano, @wbinventor, @bforget, what about putting the summary.h5 information in the statepoint file itself?


paulromano commented 8 years ago

My inclination is to leave summary.h5 separate since it contains a lot of metadata that doesn't really need to be written for every statepoint, e.g. complete description of geometry. Also I'm not opposed to renaming summary.h5 to something else if any of you have better suggestions.

wbinventor commented 8 years ago

@nelsonag that's how it all began! Or at least 1.5 years ago I stuck all of the metadata needed to recreate the XML input files in the statepoint file (but actually to be used for data processing with the Python API). Then I realized that much of this was already in the "summary.h5" file. During the process of merging in the initial Python API I moved all of this metadata to the "summary.h5" files to eliminate redundant storage.

But...I can see your point. My argument at the time was to keep this info in the statepoint files since it is so small in comparison to any interesting tally datasets. But for small problems with lots of statepoint files it is indeed quite redundant.

nelsonag commented 8 years ago

Got it, thanks. Reasonable decision to make.


paulromano commented 8 years ago

Just thinking out loud here -- another option might be to embed the XML input in statepoint files which would be much less overhead than explicitly creating a set of nested groups/datasets.
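
Purely as an illustration of how little overhead that would be, embedding the raw XML as a single string dataset is a one-liner with h5py; this is a hypothetical layout, not an OpenMC feature at the time of writing:

  import h5py

  with h5py.File('statepoint.100.h5', 'a') as f:
      with open('geometry.xml') as xml:
          f.create_dataset('input/geometry_xml', data=xml.read())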

wbinventor commented 8 years ago

Then we would need a way to rebuild the model in Python from XML.


paulromano commented 5 years ago

Most of these have been handled in the almost 4 years since this issue was originally created.