mit-crpg / opendeplete

A depletion framework for OpenMC
MIT License

Index out of bounds issue #23

Closed by wbinventor 7 years ago

wbinventor commented 7 years ago

The following error occurs when I run OpenDeplete for the "assembly" problem in the mit-crpg/ecp-benchmarks repo (for my "depleted-nuclide" branch):

...
 Total time for finalization       =  2.6817E+00 seconds
 Total time elapsed                =  1.0195E+02 seconds
 Calculation Rate (inactive)       =  21777.8 neutrons/second
 Calculation Rate (active)         =  6542.51 neutrons/second

 ============================>     RESULTS     <============================

 k-effective (Collision)     =  0.98788 +/-  0.00221
 k-effective (Track-length)  =  0.98876 +/-  0.00204
 k-effective (Absorption)    =  0.98697 +/-  0.00152
 Combined k-effective        =  0.98724 +/-  0.00178
 Leakage Fraction            =  0.00000 +/-  0.00000

Time to openmc:  102.95200157165527
Time to unpack:  385.85221791267395
Time to matrix:  10.688746452331543
Traceback (most recent call last):
  File "build-depleted.py", line 140, in <module>
    opendeplete.integrate(op, opendeplete.ce_cm_c1)
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/integrator.py", line 139, in integrate
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/integrator.py", line 292, in compute_results
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/results.py", line 133, in __setitem__
IndexError: index 264 is out of bounds for axis 0 with size 264

The "build-depleted.py" script generates 264 distribmats for the 264 fuel pins in the assembly, and is designed to run for two depletion time steps. This error occurs on the third OpenMC simulation.
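For context, the traceback's "index 264 is out of bounds for axis 0 with size 264" is NumPy's classic off-by-one signature: an array sized to the number of materials being indexed by a counter that has already reached that count. The sketch below is hypothetical (it is not the actual `results.py` code, and `store` is an invented helper) but reproduces the same failure mode:

```python
import numpy as np

# Hypothetical reconstruction of the off-by-one pattern behind
# "IndexError: index 264 is out of bounds for axis 0 with size 264".
n_mats = 264                   # one entry per distribmat (fuel pin)
rates = np.zeros((n_mats, 4))  # axis 0 sized exactly to n_mats

def store(index, values):
    # Fails when index == n_mats, i.e. a per-material counter was not
    # reset (or was advanced once too often) between depletion steps.
    rates[index] = values

store(n_mats - 1, np.ones(4))  # last valid index: fine
try:
    store(n_mats, np.ones(4))  # index 264 with size 264 -> IndexError
except IndexError as err:
    print(err)
```

This is consistent with the error surfacing only on the third OpenMC simulation: the first write past the end of the array happens once the step/material bookkeeping wraps past its allocated size.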

wbinventor commented 7 years ago

So I've confirmed that this issue occurs for each model in mit-crpg/ecp-benchmarks except for the single fuel pin cell. For example, the "2x2-periodic" and "2x2-reflector" models error out with the following:

Traceback (most recent call last):
  File "build-depleted.py", line 141, in <module>
    opendeplete.integrate(op, opendeplete.ce_cm_c1)
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/integrator.py", line 139, in integrate
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/integrator.py", line 292, in compute_results
  File "/home/boydwill/miniconda3/lib/python3.5/site-packages/opendeplete-0.1-py3.5.egg/opendeplete/results.py", line 133, in __setitem__
IndexError: index 1056 is out of bounds for axis 0 with size 1056

There are a total of 1056 fuel pins in each of these problems. Since this happens for all of the benchmarks, perhaps I didn't properly set up the distribmats in ecp-benchmarks and just didn't notice, since it takes a non-negligible amount of time to run each one.

cjosey commented 7 years ago

I'll see what I can do about this this evening.

While I'm at it, I think I'll do what I can to address the second part of:

Time to openmc:  102.95200157165527
Time to unpack:  385.85221791267395

I recently compared the OpenMC data structures with my own, and it appears that I could rewrite the unpacking in a way that should be far faster if I skip over the Python API.
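To illustrate why skipping the per-bin API access could help with the 385-second unpack time: pulling tally results out one Python-level call at a time is dominated by interpreter overhead, while a single bulk array operation over the same data is nearly free. The sketch below is illustrative only (the dimensions and flat buffer are made up, not OpenMC's actual tally layout), but it shows the contrast:

```python
import time
import numpy as np

# Illustrative only: compare per-bin unpacking (one Python-level access per
# (material, nuclide, rate) triple) against one bulk reshape of the same
# flat buffer. Dimensions are arbitrary stand-ins, kept small to run fast.
n_mats, n_nuclides, n_rates = 100, 50, 6
raw = np.random.rand(n_mats * n_nuclides * n_rates)

# Slow path: element-by-element unpack in pure Python loops.
t0 = time.perf_counter()
slow = np.empty((n_mats, n_nuclides, n_rates))
for i in range(n_mats):
    for j in range(n_nuclides):
        for k in range(n_rates):
            slow[i, j, k] = raw[(i * n_nuclides + j) * n_rates + k]
t_loop = time.perf_counter() - t0

# Fast path: one contiguous reshape, no per-bin Python calls.
t0 = time.perf_counter()
fast = raw.reshape(n_mats, n_nuclides, n_rates)
t_bulk = time.perf_counter() - t0

assert np.array_equal(slow, fast)
```

The same principle applies whether the bulk read comes from a reshape, a single HDF5 dataset read, or any other path that avoids one Python call per bin.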

paulromano commented 7 years ago

@cjosey That thought crossed my mind before; good idea.

wbinventor commented 7 years ago

No rush, I'm just taking note of issues as they arise. Of course, speeding up the unpacking would be nice. The ratio between OpenDeplete and OpenMC runtimes decreases with problem complexity (number of distribmats), though. There appears to be a bottleneck in OpenMC when reading in the cross section data with lots of distribmats. At least for the "2x2-periodic" and "2x2-reflector" models, OpenMC spends more than twice as much time in problem initialization (10 mins) as in simulation (4 mins) on 48 cores. So I'm not too concerned about OpenDeplete's runtime, though of course every optimization is welcome :-)