openmc-dev / openmc

OpenMC Monte Carlo Code
https://docs.openmc.org

Photon heating tally #1203

Closed liangjg closed 5 years ago

liangjg commented 5 years ago

Following #1191, which implemented the neutron heating tally, this PR attempts to implement the photon heating tally so as to enable calculating the coupled neutron-photon energy deposition in a reactor using heating cross sections.

Heating tally

Since we have a tally filter for particle type, it would be good to use the existing score type 'heating' for both neutron and photon heating. However, unlike neutron heating, which can be implemented simply as a tally of the MT 301 reaction (not exactly true for the analog estimator), photon heating needs separate treatment. So a new SCORE_HEATING case is added: it checks the current particle type and computes the neutron or photon heating score accordingly.

    case SCORE_HEATING:
      if (p->type_ == Particle::Type::neutron) {
      ...
      } else if (p->type_ == Particle::Type::photon) {
      ...
      }

The heating cross sections are special: they are total cross sections multiplied by the energy release. For the analog estimator, every reaction event should contribute to a heating-rate bin (just as for the total rate), so I use the macroscopic heating cross section and divide the weight by the total cross section when scoring. The neutron heating step is also separated from the default MT case because the analog treatment differs.

        if (tally.estimator_ == ESTIMATOR_ANALOG) {
          // Calculate material heating cross section
          double macro_heating = 0.;
          const Material& material {*model::materials[p->material_]};
          ...
          // All events score to a heating rate bin
          if (settings::survival_biasing) {
            // We need to account for the fact that some weight was already
            // absorbed
            score = p->wgt_last_ + p->wgt_absorb_;
          } else {
            score = p->wgt_last_;
          }
          score *= macro_heating * flux / simulation::material_xs.total;
        }
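The analog scoring logic above can be condensed into a self-contained sketch (hypothetical free function, not OpenMC's API; OpenMC's real implementation lives in tally_scoring.cpp):

```python
def analog_heating_score(wgt_last, wgt_absorb, macro_heating, total_xs,
                         flux=1.0, survival_biasing=False):
    """Analog-estimator heating score, mirroring the C++ above.

    Every collision contributes, so the pre-collision weight is scaled by
    the ratio of the macroscopic heating cross section (total cross section
    weighted by energy release) to the total cross section. With survival
    biasing, the weight absorbed at this collision is added back first.
    """
    score = wgt_last + wgt_absorb if survival_biasing else wgt_last
    return score * macro_heating * flux / total_xs
```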

A question here: for normal reaction tallies, we currently always try to compute the MT cross section from neutron data, even when a photon particle-type filter is applied. For example, OpenMC can report tally results like "Particle: photon, (n,gamma) reaction rate". I'm not sure whether we should check the particle type before each tally score so that such nonsensical results are zero, or just leave it and let users interpret the results themselves. But if we want to add other photon reaction tallies in the future, this will become an issue.

Photon library

The photon heating cross section is included in the photon library and in the corresponding data structure. Some changes were made to the photon-related Python API to better handle photon data:

Photon heating cross section

The ACE library contains the photon heating cross section, which can be produced by NJOY's GAMINR module. However, our current photon library is generated directly from ENDF, which does not include heating cross sections. So to get photon heating numbers, we need to either generate the photon library from ACE (using NJOY, as we do for neutron data) or implement our own way to produce the heating numbers from ENDF.

Currently, to test the photon heating tally capability, I used a mixed approach: generating the photon library from ENDF and borrowing the heating data from ACE. The preliminary results (see below) agree well with MCNP.

Preliminary results

| Region | Particle | OpenMC (heating tally) | MCNP (F6) |
| --- | --- | --- | --- |
| fuel | neutron | 8.25E+07 | 8.29E+07 |
| fuel | photon | 4.74E+06 | 4.86E+06 |
| coolant | neutron | 1.34E+06 | 1.34E+06 |
| coolant | photon | 2.86E+05 | 2.88E+05 |

As there are still issues to be solved, I'm submitting this PR as a draft. But I think @openmc-dev/committers, especially @paulromano and @smharper, may want to take a look and give suggestions/comments before I move on. Thanks.

This is supposed to close #1196 .

paulromano commented 5 years ago

@liangjg The description of GAMINR makes it sound like it's only for generating multigroup data. Can you really run it to produce a continuous energy KERMA? Reading through the technical details, determining KERMA for photons looks like it would be easy were it not for Compton scattering.

paulromano commented 5 years ago

A few more stream-of-consciousness thoughts:

makeclean commented 5 years ago

Could one not also say the same of neutrons? Calculate the recoil energy from conservation of energy and momentum during elastic/inelastic scattering and deposit the nuclear recoil energy locally; (n,abs) deposits the Q value locally; similarly extending to photons as you say, the difference between the photon energy before and after TTB gives the electron energy deposited directly, etc. This could be accrued on the particle, as total, neutron, and photon heat members on the class, reset every time a surface is crossed? Or is that totally mad?

paulromano commented 5 years ago

Hah, I was just thinking about you @makeclean and that you might have an opinion on all this. Yes, I think you're absolutely right that in principle, one could do all this madness for neutrons too. I think the reason that people don't is because NJOY/HEATR exists and it does all the conservation of p/E calculations for you. There's also the argument to be made that you get better statistics by precalculating a kerma value as opposed to doing everything in an analog fashion at run-time. Also, what happens to energy from absorption reactions when you are using survival biasing? I'm sure you could do something, but it gets complicated. I do like the simplicity of kermas, even if their use is not microscopically correct (there are plenty of other things we do that are not microscopically correct either).

liangjg commented 5 years ago

@paulromano

  1. I'm trying to generate photon KERMA from ENDF by applying the GAMINR formulas to the pointwise data. I found a way to calculate the integral for the Compton scattering reaction, but currently I can only reproduce the KERMA over part of the energy range. Maybe I missed some special treatments in NJOY.
  2. The KERMA-free approach is really great. It works well with Monte Carlo as an analog tally. I'd like to try it out soon.
  3. As you suggested, I separated the photon refactoring changes in a new PR, #1205 .
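The Compton integral discussed in this thread boils down to weighting the Klein-Nishina differential cross section by the fraction of photon energy transferred to the electron. A self-contained sketch of the free-electron piece (assumed constants; this ignores the incoherent scattering factor and Doppler broadening, which the NJOY treatment includes):

```python
import math

# Assumed physical constants (CODATA values)
R_E = 2.8179403262e-15   # classical electron radius [m]
ME_C2 = 510998.95        # electron rest energy [eV]

def klein_nishina(k, mu):
    """Klein-Nishina differential cross section dsigma/dmu [m^2]
    (azimuth already integrated) for a free electron at rest;
    k is the incident photon energy [eV], mu = cos(theta)."""
    kp = k / (1.0 + (k / ME_C2) * (1.0 - mu))   # scattered photon energy
    r = kp / k
    return math.pi * R_E**2 * r**2 * (r + 1.0 / r - (1.0 - mu**2))

def compton_moments(k, n=20001):
    """Trapezoidal integration over mu in [-1, 1]. Returns the total
    cross section and the mean fraction of the photon energy transferred
    to the electron -- the quantity a Compton KERMA contribution needs."""
    h = 2.0 / (n - 1)
    sigma = transfer = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        d = klein_nishina(k, mu)
        kp = k / (1.0 + (k / ME_C2) * (1.0 - mu))
        sigma += w * d
        transfer += w * d * (1.0 - kp / k)
    return sigma * h, transfer / sigma
```

A quick sanity check: at low energy the total cross section approaches the Thomson value (0.6652 barn), and at 1 MeV roughly 44% of the photon energy goes to the electron on average, consistent with published energy-transfer coefficients.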
smharper commented 5 years ago

@liangjg, I took a look at the changes to tally_scoring.cpp and they look like a good addition. I can't comment much on the KERMA vs. KERMA-free debate because I'm not familiar enough with the physics. I vaguely recall that KERMA-free approaches for neutrons become really difficult when you consider Doppler broadening and target motion, but I imagine that's not an issue with photons.

liangjg commented 5 years ago

An update on this:

KERMA factor generation

[Plots comparing the generated photon KERMA against ACE data for H, O, Al, and U]

As for the discrepancy peaks in heavy nuclides, I found they are caused not by the KERMA calculation but by discrepancies in the pointwise photoelectric cross section.

[Plot: pointwise photoelectric cross section comparison for U]

One concern with this implementation is that the integral calculations for Compton heating are somewhat time-consuming. Currently, generating a photon library from ENDF for 100 elements takes about 1.5 hours, but this time can easily be shortened using multiprocessing.

Heating tally

The neutron results are included here just for reference. It can be seen:

@paulromano I haven't updated the tests yet, but I think I need your help to generate a new test library and upload it to Box?

liangjg commented 5 years ago

I included the photon heating tally in the regression tests and updated the reference results using the new photon KERMA. However, since the nuclear data library used in the tests does not contain photon heating cross sections, Travis CI is not expected to pass. @paulromano feel free to take a look at this PR when you have time.

paulromano commented 5 years ago

Thanks @liangjg. I'll try to take another look today.

liangjg commented 5 years ago

@paulromano The main reason I tried to implement the KERMA approach for photons is that neutron heating was tallied with KERMA factors and was only available with the track-length estimator, so a user could not tally total heating for both photons and neutrons in one tally, since they required different estimators. Now in this PR, all estimators are supported for both neutron and photon heating. If we exclude the KERMA approach for photons, we can still tally total nuclear heating, but only with an analog estimator. As for performance and statistics, the comparison of the two approaches depends on how we define the metrics; the following are results from an assembly test case.

Performance

To tally neutron and photon heating in all fuel pins
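One metric that normalizes across estimators is the standard figure of merit, FOM = 1/(R^2 T), where R is the relative error and T the run time; it is asymptotically constant for a given tally and estimator, so it compares an analog photon heating tally against a track-length KERMA tally fairly. A minimal sketch:

```python
def figure_of_merit(rel_err, time_s):
    """Figure of merit FOM = 1 / (R^2 * T). Since R ~ 1/sqrt(T), the FOM
    is asymptotically independent of how long the calculation ran, so it
    ranks estimator efficiency directly: higher is better."""
    return 1.0 / (rel_err ** 2 * time_s)
```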

paulromano commented 5 years ago

@liangjg I found that scipy.integrate.quad is quite slow compared to some other options. When I change the integration to use a fixed-tolerance Gaussian quadrature (scipy.integrate.quadrature), I get very good accuracy compared to quad in about 1/10th of the time. Processing all the photoatomic/atomic relaxation files from ENDF/B-VII.1 takes less than 3 minutes now on my laptop. I'll submit a PR to your branch for you to consider.

liangjg commented 5 years ago

@paulromano That's awesome!

liangjg commented 5 years ago

@paulromano I just noticed that scipy.integrate.quadrature has two tolerance options, an absolute one (tol) and a relative one (rtol), and the integration stops when either tolerance is satisfied. For our case the default absolute tolerance (1.49e-08) is trivially met, so omitting it gives incorrect results. When I set the absolute tolerance to 0.0, the quadrature is no longer faster than quad. I'm reverting to quad and will try other options to see if I can find a faster integration function.
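The "stop when either tolerance is met" rule can be reproduced with a toy adaptive integrator (a pure-Python sketch of the behavior, not scipy's code). For a barn-scale integrand (~1e-24), the default absolute tolerance of 1.49e-8 is satisfied after the very first refinement, so iteration stops long before the relative error is small:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

def integrate(f, a, b, tol=1.49e-8, rtol=1.49e-8):
    """Double the interval count until successive estimates agree within
    EITHER the absolute or the relative tolerance -- mimicking the
    stopping rule of scipy.integrate.quadrature described above."""
    n = 2
    prev = simpson(f, a, b, n)
    while n < 2**20:
        n *= 2
        cur = simpson(f, a, b, n)
        if abs(cur - prev) < max(tol, rtol * abs(cur)):
            return cur, n
        prev = cur
    return cur, n
```

With f(x) = 1e-24 sin(x) on [0, pi] (true integral 2e-24), the default tolerances stop after the first refinement with about 0.2% error, while tol=0.0 forces convergence on rtol alone.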

liangjg commented 5 years ago

@paulromano After some profiling and optimization, I was able to speed up the photon KERMA calculation so that processing a whole library takes less than 10 minutes. I found that most of the integration time is actually spent in millions of scattering-factor evaluations, each an interpolation with a single scalar input; however, the __call__ function in Tabulated1D is written for array input. I added a new interpolation function for scalar input, and it shows a 10x+ speedup over the original for a big loop of scalar inputs (see the timing below). I didn't replace the original function with it, since testing shows the original is still advantageous when the input is an array.
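The scalar fast path amounts to one bisection plus one linear interpolation per call. A minimal sketch (hypothetical class name; the real openmc.data.Tabulated1D supports several ENDF interpolation laws and array input, handled here as linear-linear only):

```python
import bisect

class ScalarTabulated1D:
    """Linear-linear tabulated function with a scalar-input fast path."""

    def __init__(self, x, y):
        self.x = list(x)
        self.y = list(y)

    def __call__(self, xv):
        """One bisection plus one linear interpolation, avoiding the
        array machinery that dominates when millions of scalar lookups
        are made. Values outside the table are clamped to the ends."""
        if xv <= self.x[0]:
            return self.y[0]
        if xv >= self.x[-1]:
            return self.y[-1]
        i = bisect.bisect_right(self.x, xv) - 1
        x0, x1 = self.x[i], self.x[i + 1]
        y0, y1 = self.y[i], self.y[i + 1]
        return y0 + (y1 - y0) * (xv - x0) / (x1 - x0)
```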

[Screenshot (2019-04-19): timing comparison of scalar vs. array interpolation]
paulromano commented 5 years ago

Thanks for the continued work on this @liangjg. Sorry for misleading you with quadrature -- I did notice that it seemed to converge pretty quickly which should have been a red flag. I'll go ahead and generate a library with the photon heating data.

paulromano commented 5 years ago

@liangjg here's a link for new test data: https://anl.box.com/shared/static/u1g3n8iai0u1n5f6ev3pg2j3ff941bqa.xz

liangjg commented 5 years ago

@paulromano it seems the wmp data is missing in the new test library.

paulromano commented 5 years ago

Sorry about that! Regenerating now with WMP this time...

paulromano commented 5 years ago

@liangjg Ok, just updated the data. The URL is the same, so if you clear the cache on Travis and restart the job, it should pass this time.

liangjg commented 5 years ago

@paulromano thanks. Now it's ready for more review.

paulromano commented 5 years ago

Thanks for the new feature @liangjg!