mit-crpg / opendeplete

A depletion framework for OpenMC

Negative nuclide number densities #22

Open wbinventor opened 7 years ago

wbinventor commented 7 years ago

With my recent use of OpenDeplete, I've noticed that I often run into situations with negative number densities as reported by warning messages like the following:

```
WARNING! Negative Number Densities!
Material id =  61  nuclide  N13  number =  -0.0351477527858
```

This seems to happen most frequently with N13, Sb117, and Ag110m.

I'm not sure what might cause this. Perhaps I'm not running enough histories to sufficiently reduce the uncertainties on all of the tallies? Or is this something to be genuinely concerned about?

cjosey commented 7 years ago

This is one case where I have some direct understanding of the cause. The matrix exponential method we use, CRAM, is very slightly oscillatory about zero for extremely large eigenvalues.

If you replace a single CRAM48 step over dt with n steps of CRAM48 over dt/n each, you end up with a better method (http://dx.doi.org/10.13182/NSE15-67). When I did this, the magnitude by which number densities went negative decreased, up to a limit set by numerical stability. As such, this error is (likely) purely numerical and unavoidable with the methods we currently use.
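For concreteness, the substepping I mean is just repeated application of the same CRAM step over shorter intervals. A minimal sketch, assuming a hypothetical `cram48(A, n, h)` callable that returns the CRAM-48 approximation of `expm(A*h) @ n` (the name and signature are illustrative, not the actual OpenDeplete API):

```python
import numpy as np

def substepped_cram48(cram48, A, n0, dt, substeps):
    """Apply CRAM48 over dt as `substeps` applications over dt/substeps.

    cram48(A, n, h) is assumed to return the CRAM-48 approximation of
    expm(A*h) @ n; this wrapper only changes how the time step is split.
    """
    n = np.asarray(n0, dtype=float)
    for _ in range(substeps):
        n = cram48(A, n, dt / substeps)
    return n
```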

The upside of this is that even with the method we're currently using, that magnitude is very small. The value output is in units of atoms. So, there are -0.03 atoms of N13 in the entire cell. I would consider this sufficiently negligible to ignore. The worst negative value I've seen is -153 atoms.

I think I'll have to determine a threshold that separates values that are "ok and probably due to numerics" from values that are "completely wrong" (as in the case of unstable high-order methods), and only output the error in the latter case.
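Something like the following is what I have in mind for the warning itself. This is only a sketch, with an arbitrary placeholder tolerance rather than a value chosen from any analysis of the solver:

```python
# Placeholder tolerance in atoms; not a value taken from the code base.
NEGATIVE_ATOMS_TOLERANCE = 1.0e3

def warn_if_suspicious(material_id, nuclide, atoms, tol=NEGATIVE_ATOMS_TOLERANCE):
    """Report a negative number density only when its magnitude is large
    enough to look like a real failure rather than CRAM oscillation."""
    if atoms < -tol:
        print("WARNING! Negative Number Densities!")
        print("Material id = ", material_id, " nuclide ", nuclide,
              " number = ", atoms)
```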

wbinventor commented 7 years ago

Thanks for the clarification @cjosey; I'm glad to hear that this is not an issue to be overly concerned about. I'll leave this issue open for now in case you ever want to determine a threshold to throttle these warning messages.

cjosey commented 7 years ago

Rather humorously, I think I might have a more general solution to this. I was exploring whether I could implement CRAM with operators instead of matrices, which would further reduce RAM usage by a factor of 10.

As a consequence, I will eventually no longer be able to use spsolve with CRAM, so I was testing LGMRES preconditioned with the inverse of the matrix diagonal. Performance is comparable when operating on matrices (about 70% slower, but I think I can optimize it further). More interestingly, at no point does the LGMRES version output a negative value.
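Roughly, the comparison I was running looks like the sketch below: a single shifted solve done both with spsolve and with LGMRES using an inverse-diagonal (Jacobi) preconditioner. The actual CRAM poles/residues and the operator (matrix-free) form are omitted, and the names are illustrative rather than the code's real API:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_shifted(A, b, theta):
    """Solve (A - theta*I) x = b, once with a direct sparse solve and once
    with LGMRES preconditioned by the inverse of the diagonal.

    In CRAM, one such shifted solve is done per pole theta_i and the results
    are accumulated; that partial-fraction sum is omitted here.
    """
    shifted = (A - theta * sp.identity(A.shape[0], format="csc")).tocsc()

    # Direct solve, as in the current spsolve-based implementation.
    x_direct = spla.spsolve(shifted, b)

    # Inverse-diagonal preconditioner wrapped as a LinearOperator.
    inv_diag = 1.0 / shifted.diagonal()
    M = spla.LinearOperator(shifted.shape, matvec=lambda v: inv_diag * v,
                            dtype=shifted.dtype)

    x_lgmres, info = spla.lgmres(shifted, b, M=M)
    return x_direct, x_lgmres, info
```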

There is a problem, though. The maximum relative error between the two methods is larger than 100. The particular case is:

```
n_LGMRES  =  0.00023972358355354759
n_spsolve = -0.03388601482405415
```

So I'm not yet sure whether this should really be considered "wrong".

bforget commented 7 years ago

Can you compare against multiple CRAM steps to see whether it converges to the same value? That said, this just seems like a new way of calculating 0, so it's probably nothing to worry about.