Closed: liangjg closed this pull request 5 years ago.
@liangjg The description of GAMINR makes it sound like it's only for generating multigroup data. Can you really run it to produce a continuous energy KERMA? Reading through the technical details, determining KERMA for photons looks like it would be easy were it not for Compton scattering.
A few more stream-of-consciousness thoughts:
The `from_hdf5` additions are great -- thanks for doing that! That did cross my mind at some point. If you want to separate those out into a smaller PR, we can get that merged in sooner. The energy deposition part of this might take more iteration.

Could one not also say the same of neutrons? Calculate the recoil energy from conservation of energy and momentum during elastic/inelastic scattering and deposit the nuclear recoil energy locally; (n,abs) gives the Q value locally; and similarly, extending to photons as you say, the difference between photon energy before and after TTB will give the electron energy directly deposited, etc. This could be accrued on the particle (chunk, total, neutron, and photon heat members on the class) and reset every time a surface is crossed? Or is that totally mad?
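The conservation-of-energy-and-momentum bookkeeping described above can be sketched roughly as follows, for the simplest case of elastic scattering off a free nucleus (a toy illustration, not OpenMC code; the function name is made up):

```python
def elastic_recoil_energy(E_in, A, mu_cm):
    """Nuclear recoil energy (eV) for elastic scattering off a free
    nucleus with mass ratio A, given the center-of-mass scattering
    cosine mu_cm.

    From conservation of energy and momentum, the outgoing neutron
    energy is E_out = E_in * (A**2 + 1 + 2*A*mu_cm) / (A + 1)**2,
    so the recoil deposited locally is E_in - E_out.
    """
    return E_in * 2.0 * A * (1.0 - mu_cm) / (A + 1.0) ** 2


# A 1 MeV neutron backscattering (mu_cm = -1) off hydrogen (A = 1)
# transfers its full energy; forward scattering (mu_cm = +1) transfers none.
print(elastic_recoil_energy(1.0e6, 1.0, -1.0))  # 1000000.0
print(elastic_recoil_energy(1.0e6, 1.0, 1.0))   # 0.0
```

Inelastic levels would additionally subtract the excitation energy, and absorption would credit the Q value, as suggested above.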
Hah, I was just thinking about you @makeclean and that you might have an opinion on all this. Yes, I think you're absolutely right that in principle, one could do all this madness for neutrons too. I think the reason that people don't is because NJOY/HEATR exists and it does all the conservation of p/E calculations for you. There's also the argument to be made that you get better statistics by precalculating a kerma value as opposed to doing everything in an analog fashion at run-time. Also, what happens to energy from absorption reactions when you are using survival biasing? I'm sure you could do something, but it gets complicated. I do like the simplicity of kermas, even if their use is not microscopically correct (there are plenty of other things we do that are not microscopically correct either).
@paulromano
@liangjg, I took a look at the changes to tally_scoring.cpp and they look like a good addition. I can't comment much on the KERMA vs. KERMA-free debate because I'm not familiar enough with the physics. I vaguely recall that KERMA-free approaches for neutrons become really difficult when you consider Doppler broadening and target motion, but I imagine that's not an issue with photons.
An update on this:
Note that the energy of secondary fluorescent photons is accounted for in the photoelectric heating (this is not shown in NJOY's manual, which is why I couldn't get the KERMA correct over the full energy range at first). Below are comparisons with the MCNP photon library.
For the discrepancy peaks in heavy nuclides, I found they are not caused by the KERMA calculation but by discrepancies in the pointwise photoelectric cross sections.
One concern with the implementation is that the integral calculations in the Compton heating are somewhat time-consuming. Currently, generating a photon library from ENDF for 100 elements takes about 1.5 hours, but this time could easily be shortened using multiprocessing.
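The Compton heating integrals have roughly the following shape: the energy deposited locally is the photon energy minus the average energy carried off by the scattered photon. Below is a simplified free-electron Klein-Nishina sketch (it ignores the incoherent scattering-factor correction that the real library generation uses; all names are illustrative):

```python
import numpy as np
from scipy.integrate import quad

M_E = 0.51099895e6  # electron rest energy in eV


def klein_nishina(mu, alpha):
    """Unnormalized Klein-Nishina angular distribution for a free
    electron, with mu = cos(theta) and alpha = E / (m_e c^2)."""
    k = 1.0 / (1.0 + alpha * (1.0 - mu))  # ratio E'/E of scattered photon
    return k**2 * (k + 1.0 / k - (1.0 - mu**2))


def compton_energy_transfer_fraction(E):
    """Average fraction of the photon energy given to the Compton
    electron (deposited locally if electrons are assumed to stop
    where they are born)."""
    alpha = E / M_E
    norm, _ = quad(lambda mu: klein_nishina(mu, alpha), -1.0, 1.0)
    transfer, _ = quad(
        lambda mu: (1.0 - 1.0 / (1.0 + alpha * (1.0 - mu)))
        * klein_nishina(mu, alpha),
        -1.0, 1.0)
    return transfer / norm
```

Evaluating two such integrals per energy grid point, per element, is what makes this step expensive.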
Set `p->event_nuclide` to be the nuclide instead of the element.
| cell | particle | nuclide | estimator | ENDF Photon KERMA | MCNP Photon KERMA |
|---|---|---|---|---|---|
| fuel | neutron | all | analog | 8.241E+07 | 8.241E+00 |
| | | | tracklength | 8.238E+07 | 8.238E+07 |
| | | U235 | analog | 7.702E+07 | 7.702E+07 |
| | | | tracklength | 7.696E+07 | 7.696E+07 |
| | photon | all | analog (KERMA-free) | 4.824E+06 | 4.824E+06 |
| | | | tracklength | 4.833E+06 | 4.872E+06 |
| | | U235 | analog (KERMA-free) | 1.420E+05 | 1.420E+05 |
| | | | tracklength | 1.425E+05 | 1.438E+05 |
The neutron results are included here just for reference. It can be seen:
@paulromano I haven't updated the tests yet, but I think I need your help to generate a new test library and upload it to Box?
I included the photon heating tally in the regression test and updated the reference results using the new photon KERMA. However, since the nuclear data library used in the tests does not contain photon heating cross sections, Travis CI is not expected to pass. @paulromano feel free to take a look at this PR when you have time.
Thanks @liangjg. I'll try to take another look today.
@paulromano The main reason I implemented the KERMA approach for photons is that neutron heating was tallied with KERMA factors, which were only available with the tracklength estimator. So it was not possible for a user to tally total heating for both photons and neutrons in one tally, since they required different estimators. Now in this PR, all estimators are supported for both neutron and photon heating. If we exclude the KERMA approach for photons, we can still do a total nuclear heating tally, but it must use an analog estimator. As for performance and statistics, the comparison of the two approaches depends on how the metrics are defined; the following results were obtained for an assembly case.
To tally neutron and photon heating in all fuel pins:

```
Calculation Rate (inactive) = 19279.1 particles/second
Calculation Rate (active)   = 16239.5 particles/second

Calculation Rate (inactive) = 19809.1 particles/second
Calculation Rate (active)   = 17678.8 particles/second
```
```
Cell 11
  Particle: neutron
    U234:  Heating 6860.76     +/- 3.14085
    U235:  Heating 7.70068e+07 +/- 46582
    U236:  Heating 2405.98     +/- 3.2944
    U238:  Heating 5.30549e+06 +/- 3879.47
    O16:   Heating 100100      +/- 49.0377
  Particle: photon
    U234:  Heating 1213.26     +/- 0.44346
    U235:  Heating 142398      +/- 52.048
    U236:  Heating 654.078     +/- 0.239073
    U238:  Heating 4.39301e+06 +/- 1605.7
    O16:   Heating 293008      +/- 123.398
```

```
Cell 11
  Particle: neutron
    U234:  Heating 6873.36     +/- 194.679
    U235:  Heating 7.69744e+07 +/- 33395.4
    U236:  Heating 2558.34     +/- 69.8584
    U238:  Heating 5.30498e+06 +/- 3881.98
    O16:   Heating 100075      +/- 94.4835
  Particle: photon
    U234:  Heating 1233.84     +/- 11.3255
    U235:  Heating 141705      +/- 180.761
    U236:  Heating 656.371     +/- 11.6816
    U238:  Heating 4.37803e+06 +/- 1534.95
    O16:   Heating 289175      +/- 301.242
```
@liangjg I found that `scipy.integrate.quad` is quite slow compared to some other options. When I change the integration to use a fixed-tolerance Gaussian quadrature (`scipy.integrate.quadrature`), I get very good accuracy compared to `quad` in about 1/10th of the time. Processing all the photoatomic/atomic relaxation files from ENDF/B-VII.1 now takes less than 3 minutes on my laptop. I'll submit a PR to your branch for you to consider.
@paulromano That's awesome!
@paulromano I just noticed that `scipy.integrate.quadrature` has two options for specifying the tolerance, an absolute one (`tol`) and a relative one (`rtol`), and the integration stops when either tolerance is satisfied. For our case, the default absolute tolerance (1.49e-08) is met far too easily, and leaving it at the default causes incorrect results. I tested setting the absolute tolerance to 0.0, and then `quadrature` is no longer faster than `quad`. I'm reverting to `quad` and will try other options to see if I can find a faster integration function.
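The stopping behavior at issue here (iteration ends as soon as either tolerance is met) can be reproduced with a small hand-rolled Gauss-Legendre routine; this is a sketch of the general idea, not the scipy internals:

```python
import numpy as np


def gauss_quadrature(f, a, b, tol=1.49e-8, rtol=1.49e-8, maxiter=50):
    """Integrate f on [a, b] with Gauss-Legendre rules of increasing
    order, stopping when the change between successive orders satisfies
    EITHER the absolute tolerance `tol` OR the relative tolerance
    `rtol` -- the "first tolerance met wins" behavior described above.
    """
    previous = None
    result = 0.0
    for n in range(2, maxiter + 1):
        x, w = np.polynomial.legendre.leggauss(n)
        # Map nodes from the reference interval [-1, 1] to [a, b].
        y = 0.5 * (b - a) * x + 0.5 * (b + a)
        result = 0.5 * (b - a) * np.sum(w * f(y))
        if previous is not None:
            err = abs(result - previous)
            if err < tol or err < rtol * abs(result):
                return result
        previous = result
    return result
```

With integrand values around 1e-8, the default `tol` is satisfied after the first comparison, which is exactly the premature-convergence trap described above; passing `tol=0.0` forces the relative criterion to control convergence.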
@paulromano After doing some profiling and optimization, I was able to speed up the photon KERMA calculation, bringing the processing time for a whole library to under 10 minutes. I found that most of the integration time is actually spent in millions of scattering factor evaluations, each an interpolation for a single scalar input. However, the `__call__` function in `Tabulated1D` is written for array input. I added a new interpolation function for scalar input, and it shows a 10x+ speedup over the original for a long loop of scalar inputs (see the testing below). I didn't replace the original vectorized function with the scalar one, since testing shows the original is still advantageous when the input is an array.
Thanks for the continued work on this @liangjg. Sorry for misleading you with `quadrature` -- I did notice that it seemed to converge suspiciously quickly, which should have been a red flag. I'll go ahead and generate a library with the photon heating data.
@liangjg here's a link for new test data: https://anl.box.com/shared/static/u1g3n8iai0u1n5f6ev3pg2j3ff941bqa.xz
@paulromano it seems the `wmp` data is missing from the new test library.
Sorry about that! Regenerating now with WMP this time...
@liangjg Ok, just updated the data. The URL is the same, so if you clear the cache on Travis and restart the job, it should pass this time.
@paulromano thanks. Now it's ready for more review.
Thanks for the new feature @liangjg!
Following #1191, which implemented the neutron heating tally, this PR implements the photon heating tally so as to enable calculating coupled neutron-photon energy deposition in a reactor using heating cross sections.
Heating tally
Since we have a tally filter for particle type, it makes sense to use the existing score type 'heating' for both neutron and photon heating. However, unlike neutron heating, which can be simply implemented as a tally of the MT=301 reaction (this is not exactly true for the analog estimator), photon heating needs to be treated separately. So a new `SCORE_HEATING` case is added; it checks the current particle type and calculates the neutron or photon heating score accordingly. The heating cross sections are special in that they are total cross sections multiplied by the energy release. For analog estimator tallies, every reaction event should contribute to the heating rate bin, just as for the total rate, so I use the macroscopic heating cross section and divide the weight by the total cross section in the scoring. The neutron heating case is also separated from the default MT case because its analog treatment is different.
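The scoring logic amounts to something like the following (a schematic sketch in Python rather than the actual `tally_scoring.cpp` code; symbol names are illustrative):

```python
def heating_score_analog(weight, macro_heating_xs, macro_total_xs):
    """Collision (analog) estimator: every collision scores, so the
    expected heating per collision is the particle weight divided by
    the total macroscopic cross section (1/cm), times the macroscopic
    heating "cross section" (Sigma_t weighted by energy release,
    eV/cm), as described above."""
    return weight * macro_heating_xs / macro_total_xs


def heating_score_tracklength(weight, distance, macro_heating_xs):
    """Tracklength estimator: score is weight times track length (cm)
    times the macroscopic heating cross section (eV/cm)."""
    return weight * distance * macro_heating_xs
```

Because the heating cross section already folds all reactions together, the analog score is added at every collision rather than per-MT, which is why the case is split off from the default MT handling.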
One open question: for ordinary reaction tallies, we currently always try to calculate the MT cross section from neutron cross section data, even when a photon particle-type filter is applied. For example, OpenMC will happily report a tally like "Particle: photon, (n,gamma) reaction rate". I'm not sure whether we should add a particle-type check before each tally score so that such nonsensical results come out as zero, or just leave it and let users interpret the results themselves. But if we want to add other photon reaction tallies in the future, this will become an issue.
Photon library
The photon heating cross section is included in the photon library and its data structures. Some changes were also made to the photon-related Python API to better handle photon data:
Photon heating cross section
The ACE library contains the photon heating cross section, which can be produced by NJOY's GAMINR module. However, our current photon library is generated directly from ENDF, which does not include heating cross sections. So to get photon heating numbers, we need to either generate the photon library from ACE (via NJOY, as we do for neutron data) or implement our own way of producing the heating numbers from ENDF.

Currently, to test the photon heating tally capability, I used a mixed approach: generating the photon library from ENDF and borrowing the heating data from ACE. The preliminary results (see below) agree well with MCNP.
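Schematically, a photon heating (KERMA) cross section is assembled per reaction as the cross section times the energy deposited locally. The component energies below (fluorescence handling, average scattered Compton photon energy, pair-production threshold) are simplified assumptions for illustration, not the exact GAMINR recipe:

```python
# 2 * electron rest mass energy (eV); this much escapes pair production
# as annihilation photons, which are transported rather than deposited.
TWO_M_E = 1.021998e6


def photon_heating_xs(E, xs_pe, xs_incoh, xs_pair,
                      avg_fluorescence, avg_scattered):
    """Toy assembly of a photon heating cross section (barn*eV):
    each reaction contributes sigma_i times the energy NOT carried
    away by secondary photons. Fluorescence photons and the scattered
    Compton photon are transported, so their (average) energies are
    subtracted; pair production deposits E - 2*m_e*c^2 locally."""
    heating = xs_pe * (E - avg_fluorescence)        # photoelectric
    heating += xs_incoh * (E - avg_scattered)       # incoherent (Compton)
    if E > TWO_M_E:
        heating += xs_pair * (E - TWO_M_E)          # pair production
    return heating
```

Accounting for the fluorescence term in the photoelectric contribution is exactly the detail noted earlier that is missing from NJOY's manual.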
Preliminary results
As there are still issues to be solved, I'm submitting this PR as a draft. But I think @openmc-dev/committers, especially @paulromano and @smharper, may want to take a look and offer suggestions/comments before I move on. Thanks.
This is supposed to close #1196.