paulromano opened this issue 7 years ago
@cjosey Are you actively working on this? If not, I can take a stab at it.
All I have done is some architectural analysis. The only thing I thought of doing was to replace a lot of the arrays in DepletionChain with sparse matrices. When OpenDeplete is first loaded, it could construct a lambda matrix (decay chain components) and a Q matrix (energy deposition). Then, reaction rates are unloaded into a sparse R matrix mapping reactions of nuclide 1 -> nuclide 2. These matrices would be structurally constant, which could be leveraged for a performance improvement.
The whole point of this modification is that currently, reactions are stored in a 2D (nuclide, reaction) array. If we start adding (n,gamma->m1) and (n,gamma->m2) and all sorts of esoteric reactions, this structure becomes less efficient, as a lot of nuclides do not have these reactions. I'm not sure what the cutoff is on when it would become less efficient than CSR/CSC, though.
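For what it's worth, a minimal sketch of what that sparse layout could look like with scipy.sparse (the nuclide list, half-lives, and variable names here are purely illustrative, not OpenDeplete's actual API):

```python
# Hypothetical sketch of the sparse layout described above.
import numpy as np
import scipy.sparse as sp

nuclides = ["I135", "Xe135", "Cs135"]
index = {name: i for i, name in enumerate(nuclides)}
n = len(nuclides)

# (parent, daughter, decay constant [1/s]) for each decay path
decays = [
    ("I135",  "Xe135", np.log(2) / (6.57 * 3600)),
    ("Xe135", "Cs135", np.log(2) / (9.14 * 3600)),
]

lam = sp.dok_matrix((n, n))
for parent, daughter, decay_const in decays:
    lam[index[daughter], index[parent]] += decay_const   # production of daughter
    lam[index[parent],  index[parent]]  -= decay_const   # loss of parent
lam = lam.tocsr()

# The reaction-rate matrix R would have the same kind of fixed sparsity
# pattern, with only the values updated after each transport solve.
```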
The other advantage is in construction of the A matrix and cheap handling of power normalization:
power_decay = sum( (matrix_Q_decay .* matrix_lambda) * nuclide_vector_y)
power_reactor = sum( (matrix_Q_rxn .* matrix_R) * nuclide_vector_y)
matrix_R *= (power_true - power_decay) / power_reactor
matrix_A = matrix_lambda + matrix_R
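A small, self-contained sketch of that normalization in Python, assuming the same names as the pseudocode above (all of the matrices and numbers below are placeholders, not real data):

```python
# Hedged sketch of the power-normalization step, using scipy.sparse.
import numpy as np
import scipy.sparse as sp

# Tiny placeholder data for two nuclides; none of these numbers are physical.
matrix_lambda  = sp.csr_matrix(np.array([[-1.0e-5, 0.0],
                                         [ 1.0e-5, 0.0]]))   # decay constants [1/s]
matrix_R       = sp.csr_matrix(np.array([[-2.0e-9, 0.0],
                                         [ 2.0e-9, 0.0]]))   # reaction rates [1/s]
matrix_Q_decay = sp.csr_matrix(np.array([[0.0,     0.0],
                                         [1.0e-13, 0.0]]))   # decay heat per event [J]
matrix_Q_rxn   = sp.csr_matrix(np.array([[0.0,     0.0],
                                         [3.2e-11, 0.0]]))   # reaction heat per event [J]
y = np.array([1.0e20, 0.0])      # nuclide number densities
power_true = 1.0e3               # target power [W]

# ".*" in the pseudocode is an element-wise product -> multiply() for sparse matrices.
# Note the Q matrices are zero on the diagonal, so the loss terms add no heat.
power_decay   = (matrix_Q_decay.multiply(matrix_lambda) @ y).sum()
power_reactor = (matrix_Q_rxn.multiply(matrix_R) @ y).sum()

# Scale reaction rates so decay heat plus reaction heat matches the target power
matrix_R = matrix_R * ((power_true - power_decay) / power_reactor)
matrix_A = matrix_lambda + matrix_R
```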
I think. Construction of the Q matrix would need to be done carefully so we don't forget / double count terms.
But that's about as far as I got. I've been focussing on the MPI stuff, which is just about done.
That's an interesting idea. Would those Q matrices really just be vectors?
I would propose that we ignore the energy dependence of branching ratios for now. This would simplify things a lot because then you'd never need to collect separate reaction rates from OpenMC for different branches of the same reaction. The only change needed would be to support multiple targets for a given reaction_type and then to use this information when constructing the burnup matrix.
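One way this could look, purely as an illustration (these class/field names are not OpenDeplete's actual data model), is to let a reaction carry a list of (target, branching ratio) pairs and split the tallied capture rate accordingly when filling the burnup matrix:

```python
# Hypothetical sketch only; placeholder names and branching ratios.
from dataclasses import dataclass, field
import scipy.sparse as sp

@dataclass
class ReactionPath:
    reaction_type: str                             # e.g. "(n,gamma)"
    targets: list = field(default_factory=list)    # [(target nuclide, branching ratio)]

am241_capture = ReactionPath(
    reaction_type="(n,gamma)",
    targets=[("Am242", 0.9), ("Am242_m1", 0.1)],   # placeholder ratios
)

def add_capture_terms(A, index, parent, reaction, rate):
    """Split one tallied capture rate over all targets of the reaction."""
    j = index[parent]
    A[j, j] -= rate                                # loss of the parent
    for target, br in reaction.targets:
        A[index[target], j] += br * rate           # production of each target

index = {"Am241": 0, "Am242": 1, "Am242_m1": 2}
A = sp.dok_matrix((3, 3))
add_capture_terms(A, index, "Am241", am241_capture, rate=1.0e-9)
```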
They would not. The Q matrix would contain the heat created via reaction/decay of nuclide a -> nuclide b and would be double-indexed. The matrix form above is then just sum_ij( Q_ij * lambda_ij * y_j ), so long as the diagonals are zero.
I am reluctant to go the route of ignoring the energy dependence. The Serpent documentation claims that the isomeric branching ratios are "one of the major sources of uncertainty". I would be fine using single values if there were any chance they were less uncertain (for example, 8-group PKE data vs. the full decay chain). However, their data here is computed by just integrating a PWR spectrum over the JEFF-3.1 data anyway, so it's as uncertain as said data, with the added disadvantage of being spectrum-dependent and from a different library.
Personally, I would like to avoid any spectrum-dependent data, as at that point I might as well just use MPACT/CASMO for an orders of magnitude speedup. The only place we can't get away from it is the fission yield.
Ah, ok. For decay, we generally don't have mode-dependent energy release though, i.e., for each decay mode in a given nuclide, Q would be the same.
I agree in principle that we should avoid spectrum-dependent data, and I also wouldn't argue against the statement that isomeric branching ratios could be a major source of uncertainty. However, using energy-dependent branching ratios doesn't really get rid of that uncertainty. The major source of differences you would see in a calculation is really the fact that the libraries themselves can have substantially different values for branching ratios, rather than the fact that you have or haven't accounted for the spatial and burnup dependence of the flux spectrum itself. A really good example of this can be seen in Figs. 8 and 9 of a paper by Wim Haeck. For a given library, you see a slight difference in branching ratios under different spectral conditions (in Fig. 8, the ground-state BR for Am241->Am242 is 87.2% for UO2 vs 86.6% for MOX) and a very slight dependence on burnup as the spectrum changes over time (~87.2% for UO2 at BOC vs 87.4% at EOC), but between libraries there is a much bigger difference (87.2% for UO2 with JEFF 3.1 vs 89.9% with ENDF/B-VII.1).
I'd be all for energy-dependent BRs if they were easy, but they would complicate things. For one, it would mean that we would either have to tally the reaction rate in multiple energy groups (not clear how many is necessary) and then collapse that with the energy-dependent BRs from OpenDeplete, or OpenMC would have to be aware of those energy-dependent BRs and directly tally the production rate of isomeric states. Serpent can get away with doing energy-dependent BRs because they use the fine-group flux method which makes it easy to collapse the flux spectrum, cross section, and branching ratios all in one shot to get isomeric production rates.
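For reference, the first option would amount to a group collapse like the following (a hedged sketch; the group structure, rates, and branching-ratio values are made up):

```python
# Sketch of collapsing group-wise capture rates with energy-dependent
# branching ratios to get the metastable production rate.
import numpy as np

# tallied (n,gamma) reaction rate in each energy group [reactions/s]
rate_per_group = np.array([3.0e8, 1.2e9, 4.0e8])

# branching ratio to the metastable state in each group (from the chain data)
br_m1_per_group = np.array([0.12, 0.10, 0.08])

# production rate of the metastable daughter and the effective one-group BR
rate_to_m1 = np.sum(br_m1_per_group * rate_per_group)
effective_br = rate_to_m1 / rate_per_group.sum()
```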
Hmm. I was not aware that the spectral dependence was smaller than the raw inter-library uncertainty. I was going to see if I could find any libraries in which the energy dependency was significant. I saw a whole lot of nonsense, like the 95244 data alternating from 0.062 to 0.063 every point for 10532 points in the JEFF-3.1 data. I'm starting to remember why I want nothing to do with nuclear data.
As a last question, how bad for performance would it be if we used openmc.EnergyFunctionFilter()? According to the Serpent data, we'd need 101 separate filters.
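For context, such a filter weights the score by an interpolated function of incident energy, so one filter per (nuclide, branch) could tally the branching-ratio-weighted capture rate directly. A rough sketch, with an invented energy grid and branching-ratio values:

```python
# Rough sketch of tallying a BR-weighted capture rate with EnergyFunctionFilter.
import openmc

# Energy-dependent branching ratio to the metastable state (made-up points)
energy = [1.0e-5, 1.0, 1.0e3, 2.0e7]   # eV
br_m1  = [0.10, 0.11, 0.12, 0.15]

br_filter = openmc.EnergyFunctionFilter(energy, br_m1)

tally = openmc.Tally(name="Am241 (n,gamma) to Am242m")
tally.filters = [br_filter]
tally.nuclides = ["Am241"]
tally.scores = ["(n,gamma)"]

tallies = openmc.Tallies([tally])
tallies.export_to_xml()
```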
If that's an insurmountable cost, then go ahead with single values. We can wait until the plans to investigate tally performance go through before it is properly addressed. Just make absolutely certain to thoroughly document where those values come from. You might want to base it off the MPI PR, as I was going to merge that in a day or so.
The decay chain files generated by OpenDeplete do not currently account for capture branching ratios. We need some way of accounting for these branching ratios in order to get correct isotopic vectors in depletion.