ChasingNeutrons opened this issue 6 months ago
This looks like a good idea to me. So you are envisioning two PPs (physics packages), one for eigenvalue and one for fixed source, like in MC? Or maybe a noisePP will be necessary too?
On the nuclear data: you don't look up XSs from the mgDatabase for vectorisation, I imagine? It would be simple to add those arrays to the current mgNuclearData, but maybe it's better to keep things separate for now.
What about the cell remapping - where would that go?
Yes, three PPs if we include noise.
Exactly: the current nuclear data (ND) structure isn't very friendly for speed, or for getting the compiler to see that certain loops can be vectorised. Reworking nuclear data feels very involved when it's designed for MC and the changes otherwise wouldn't be required, but it is doable. At the moment, data handling for RR is only ~30 lines (maybe a few more with kinetic parameters) - see the sketch below. The actual hard bit is how it will look with multiphysics going on, and I suspect we won't have a final solution for that in this particular change. Anyway, the point of this is only to say I don't have any firm ideas!
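For concreteness, a minimal sketch of what that flat data handling could look like (all names here are illustrative, not existing SCONE types): XSs copied out of the MG database into contiguous arrays indexed by group and material, so the inner group loops are stride-1 and easy for the compiler to vectorise.

```fortran
module rrDataSketch
  use iso_fortran_env, only : real64
  implicit none

  ! Hypothetical flat XS container for RR: one contiguous array per
  ! reaction, indexed (group, material) so inner group loops are stride-1
  type :: rrNuclearData
    integer :: nG = 0, nMat = 0
    real(real64), allocatable :: sigmaT(:,:)    ! total XS, (nG, nMat)
    real(real64), allocatable :: nuSigmaF(:,:)  ! nu * fission XS, (nG, nMat)
    real(real64), allocatable :: chi(:,:)       ! fission spectrum, (nG, nMat)
    real(real64), allocatable :: sigmaS(:,:,:)  ! scattering, (gOut, gIn, nMat)
  end type rrNuclearData

end module rrDataSketch
```

Whether something like this lives inside nuclear data or in a separate object built from it is exactly the open question.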
Cell remapping is a good question. It's probably not something we want in the long term either - there are more efficient ways of throwing a mesh on top of the geometry which should involve a lot less complication (it requires another ray trace, but that should be fairly cheap). John has already done some work on that and it seems a sensible lead to follow. My thinking is to make these changes for what we have now, without cell remapping, and then add that alternative once the basics are working.
The current PR on random ray is dead for good reason - it doesn't look like SCONEy code and it isn't easy to extend to the many variations on RR we want to make.
I'd like to have some discussion on how a generic RR code that can handle all these variations could look. The variations include eigenvalue/fixed-source/noise solvers, flat/linear sources, and isotropic/anisotropic scattering.
Whatever the variation, a given RR solver has the same basic skeleton: initialise rays, do a transport sweep, update the source, accumulate fluxes, and check convergence - roughly as sketched below.
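(A hedged sketch of that skeleton in pseudo-Fortran, assembled from the components discussed in the points below - all names are hypothetical, not existing SCONE code:)

```fortran
! Illustrative RR iteration loop (names hypothetical, not a full program)
do while (.not. converged)
  call sweep % transport(rays, flux, geomEst)  ! ray trace, attenuate, score flux
  call kEff % update(flux, data)               ! eigenvalue/noise solvers only
  call source % update(flux, data, kEff)       ! build next iteration's source
  converged = metric % isConverged(flux)       ! entropy, iteration count, ...
  if (activeCycles) call flux % accumulate()
end do
call flux % finalise()
call flux % output()
```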
I have a handful of ideas for how to make this a bit more general. I would appreciate any feedback on this.
First, we make the source and fluxes into objects. By default these would contain the current flux, previous flux, and accumulated flux (for the flux object) and the current source (for the source object), but they can allocate additional components as necessary, such as linear components, fixed sources, etc. - a given physics package tells these objects which components to allocate. They can have procedures for accumulation, finalisation, and output. We should probably do something similar for geometry estimates: that object would contain the estimated volume (common to everything) but could optionally allocate spatial moment matrices as well. Data should also be packaged more neatly... it will essentially need to be a flat array of each XS in the end. The question is whether to do this internally in nuclear data or whether to make a separate object that is constructed from nuclear data. A sketch of the flux object idea is below.
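A minimal compilable sketch of that flux object, assuming components are allocated on request by the physics package (names illustrative):

```fortran
module fluxObjectSketch
  use iso_fortran_env, only : real64
  implicit none

  ! Hypothetical flux object: mandatory components plus optional extras
  ! that a physics package requests at initialisation
  type :: fluxObject
    real(real64), allocatable :: current(:)      ! flux from this iteration
    real(real64), allocatable :: previous(:)     ! flux from last iteration
    real(real64), allocatable :: accumulated(:)  ! running sum over active cycles
    real(real64), allocatable :: linMom(:,:)     ! optional linear-source moments
  contains
    procedure :: init
    procedure :: accumulate
  end type fluxObject

contains

  ! Allocate the requested components and zero them
  subroutine init(self, n, needsLinear)
    class(fluxObject), intent(inout) :: self
    integer, intent(in)              :: n
    logical, intent(in)              :: needsLinear
    allocate(self % current(n), self % previous(n), self % accumulated(n))
    self % current     = 0.0_real64
    self % previous    = 0.0_real64
    self % accumulated = 0.0_real64
    if (needsLinear) then
      allocate(self % linMom(3, n))
      self % linMom = 0.0_real64
    end if
  end subroutine init

  ! Fold the current iteration's flux into the accumulated estimate
  subroutine accumulate(self)
    class(fluxObject), intent(inout) :: self
    self % accumulated = self % accumulated + self % current
  end subroutine accumulate

end module fluxObjectSketch
```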
Second, we make the transport sweep and source update into objects too. Each varies dramatically depending on the calculation performed, but there is some overlap (the flat isotropic versions will be identical for fixed source and eigenvalue), and the call to movement and its settings will be the same across all physics packages. Each of these objects would be fed pointers to the relevant flux/source/data objects, and they can be constructed depending on the exact scheme being used - see the sketch below. For initialising rays, we could reuse the standard fixed-source particle sampling with uniformity in space and angle? I think that's what OpenMC have gone for.
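A sketch of how the sweep object could look, assuming an abstract type with a deferred binding and pointers into the shared state (the source update object would mirror this; again, all names hypothetical):

```fortran
module sweepSketch
  use iso_fortran_env, only : real64
  implicit none

  ! Hypothetical abstract transport sweep; concrete extensions
  ! (flat/linear source, isotropic/anisotropic, noise) override sweep().
  ! A flat isotropic version would implement the usual MOC segment update,
  !   psiOut = psiIn + (Q/sigmaT - psiIn) * (1 - exp(-sigmaT * l)),
  ! and would be shared by the eigenvalue and fixed-source packages.
  type, abstract :: transportSweep
    ! Pointers to shared state, set at construction by the physics package
    real(real64), pointer :: flux(:)   => null()
    real(real64), pointer :: source(:) => null()
    real(real64), pointer :: sigmaT(:) => null()
  contains
    procedure(sweepIface), deferred :: sweep
  end type transportSweep

  abstract interface
    subroutine sweepIface(self)
      import :: transportSweep
      class(transportSweep), intent(inout) :: self
    end subroutine sweepIface
  end interface

end module sweepSketch
```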
Third, we can define a convergence metric object. This could be more or less sophisticated - e.g., Shannon entropy of the fission source, or simply an iteration count. This is only to allow code reuse across all the physics packages.
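For the more sophisticated end, a self-contained sketch of the Shannon entropy calculation such a metric might wrap (function name and interface are assumptions):

```fortran
module convergenceSketch
  use iso_fortran_env, only : real64
  implicit none

contains

  ! Shannon entropy of a (non-negative) fission source distribution:
  ! H = -sum_i p_i * log2(p_i), with p_i = S_i / sum(S)
  pure function shannonEntropy(fissSrc) result(H)
    real(real64), intent(in) :: fissSrc(:)
    real(real64)             :: H, p, total
    integer                  :: i
    total = sum(fissSrc)
    H = 0.0_real64
    do i = 1, size(fissSrc)
      p = fissSrc(i) / total
      if (p > 0.0_real64) H = H - p * log(p) / log(2.0_real64)
    end do
  end function shannonEntropy

end module convergenceSketch
```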
Fourth, it might also make sense to define a k object, since it will be common to eigenvalue and noise. I don't have strong feelings about this - it has relatively little to do other than accumulate flux estimates, which is only a few lines (see below).
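Those few lines would presumably be the usual power-iteration style rescaling, as is typical for RR/MOC solvers - a sketch, with all names assumed:

```fortran
module kSketch
  use iso_fortran_env, only : real64
  implicit none

  ! Hypothetical k object, shared by eigenvalue and noise solvers
  type :: kObject
    real(real64) :: k = 1.0_real64
  contains
    procedure :: update
  end type kObject

contains

  ! Rescale k by the ratio of total fission production between
  ! successive iterations (standard power iteration)
  subroutine update(self, fissProdNew, fissProdOld)
    class(kObject), intent(inout) :: self
    real(real64), intent(in)      :: fissProdNew, fissProdOld
    self % k = self % k * fissProdNew / fissProdOld
  end subroutine update

end module kSketch
```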
Thoughts? I don't think this should be too hard to do, and I suspect it will preserve performance while making things tidier and more easily modifiable in future.