openmc-dev / openmc

OpenMC Monte Carlo Code
https://docs.openmc.org

Add support for MCPL files #2116

Closed ebknudsen closed 1 year ago

ebknudsen commented 1 year ago

Please see below for an updated version of this...

This PR adds the capability to OpenMC to read MCPL (Monte Carlo Particle Lists) input files. This was a feature request in the MCPL-project (https://github.com/mctools/mcpl/issues/56).

A new source class, MCPLFileSource, is created; it can be selected from settings.xml and rather closely follows the logic of FileSource. I've added a CMake option to turn this feature on (the default is off), and a very basic regression test.

Current limitations:

ebknudsen commented 1 year ago

The first build failed because settings.cpp also needs a check for whether MCPL support is requested. I fixed this in commit cd163ef.

ebknudsen commented 1 year ago

Failed due to a missing (empty) __init__.py file in the regression test directory (tests/regression_tests/source_mcpl_file). The automatic regression test will still fail since the default build is without MCPL support.

ebknudsen commented 1 year ago

This next problem was caused by the regression test script not being able to find the MCPL header file "mcpl.h". Obviously none of the test runners have it installed, so the test is expected to fail, but it should do so gracefully. eff690f is an attempt to make it do so.
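
For illustration, a minimal sketch of how such a graceful skip could look in the test module, assuming pytest and using the presence of MCPL's mcpltool command-line utility as a stand-in check for an MCPL installation (the actual guard in the PR may differ):

import shutil
import pytest

# Skip (rather than fail) the whole module when MCPL does not appear to be
# installed on the test runner; mcpltool ships with MCPL, so its absence is
# used here as a proxy for a build without MCPL support.
pytestmark = pytest.mark.skipif(
    shutil.which('mcpltool') is None,
    reason='MCPL is not installed; skipping MCPL regression test'
)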

tkittel commented 1 year ago

Just my 2 cents concerning the MPI issue: I would think it is best if each process simply uses the MCPL skip-ahead functionality to pick out a subset of events. The number of events in the input file should be known, as should presumably the number of processes, so I guess it would be straightforward to assign each event to a process. E.g. if the number of processes is 3 and there are 11 particles in the input:

   particle 1 -> process 1
   particle 2 -> process 2
   particle 3 -> process 3
   particle 4 -> process 1
   particle 5 -> process 2
   particle 6 -> process 3
   particle 7 -> process 1
   particle 8 -> process 2
   particle 9 -> process 3
   particle 10 -> process 1
   particle 11 -> process 2

That is how I do it for Geant4. Also, at any given time the various processes would hopefully be reading events from approximately the same position in the file, which I would hope helps with bus congestion issues.
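
For concreteness, here is a minimal Python sketch of that round-robin assignment using the mcpl module (it filters the particle stream by index rather than calling the C-level skip-ahead, and the rank/process count are assumed to come from the MPI layer):

import mcpl

def particles_for_rank(path, rank, nprocs):
    # Yield only the particles assigned to this process: particle i (0-based)
    # goes to process i mod nprocs, matching the round-robin scheme above.
    f = mcpl.MCPLFile(path)
    for i, p in enumerate(f.particles):
        if i % nprocs == rank:
            yield p

# e.g. with 3 processes, rank 0 would receive particles 0, 3, 6, 9, ...
for p in particles_for_rank('source.10.mcpl', rank=0, nprocs=3):
    pass  # hand the particle off to the transport loop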

ebknudsen commented 1 year ago

That's a reasonable suggestion @tkittel. Thanks for the 2c.

paulromano commented 1 year ago

@ebknudsen @tkittel Glad to see there have been some CMake-related improvements in MCPL. Once this PR is updated to match those changes (using find_package and such), can you ping me for another review? Thanks!

ebknudsen commented 1 year ago

Roger that. Wilco

ebknudsen commented 1 year ago

> Thanks for the pull request @ebknudsen! Here is a first round of comments.
>
> On the overall approach, I know based on the discussion from mctools/mcpl#56 that there seems to be a strong preference to have the MCPL support implemented in C++ directly. However, it does seem like having an equivalent functionality in Python would be pretty simple. It would look something like:
>
> import mcpl
> import openmc
>
> mcpl_file = mcpl.MCPLFile(...)
> particles = []
> for p in mcpl_file.particles:
>     omc_particle = openmc.SourceParticle(
>         r=p.position,
>         u=p.direction,
>         E=p.ekin,
>         ...
>     )
>     particles.append(omc_particle)
>
> openmc.write_source_file(particles, 'source.h5')

Never got around to commenting on this particular item. It would perhaps be that simple except that (at present) the python mcpl module does not support writing.

tkittel commented 1 year ago

FYI I have now opened a PR at https://github.com/conda-forge/staged-recipes/pull/20740 which should make MCPL available on conda as well in the near future, hopefully making this openmc-mcpl feature even more accessible :-)

ebknudsen commented 1 year ago

@paulromano, I have now fixed the cmake issue. In addition, I've added code to also directly output MCPL files as an alternative to the regular .h5 ones. Since I did that, I also replaced the regression test (source_mcpl_file) with one that is almost identical to source_file. You'd be most welcome to take a look at what was done. cheers Erik


ebknudsen commented 1 year ago

This PR adds the capability to OpenMC to read and write MCPL (Monte Carlo Particle Lists) event files. This was a feature request in the MCPL-project (https://github.com/mctools/mcpl/issues/56).

Along the lines suggested below, a constructor is added to the FileSource class to open and read a set of events from an MCPL file. After that, the source class just does as it normally does.

On output, I've added two new functions "write_mcpl_source_point" and "write_mcpl_source_bank" - again closely mimicking "write_source_point" etc.

In the XML layer, MCPL input is activated by writing e.g.:

  <source>
    <mcpl> source.10.mcpl </mcpl>
  </source>

MCPL output is activated by:

  <source_point mcpl="true" separate="true" />

in settings.xml

From Python it would be:

src = openmc.Source(filename='input.mcpl')

where MCPL input is activated by the file suffix. Output is controlled through the dictionary assigned to Settings.surf_source_write, like:

settings = openmc.Settings()
settings.surf_source_write = {'mcpl': True, 'surface_ids': [1, 2, 4]}

To compile, one adds -DOPENMC_USE_MCPL=on to the cmake call.
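
Putting the two sides together, a minimal Python-side sketch of the intended usage (batch and particle counts are placeholders; the 'mcpl' key of surf_source_write is the one added by this PR):

import openmc

settings = openmc.Settings()
# MCPL input: chosen automatically from the .mcpl file suffix
settings.source = openmc.Source(filename='input.mcpl')
# MCPL output: write the surface source in MCPL format for surfaces 1, 2 and 4
settings.surf_source_write = {'mcpl': True, 'surface_ids': [1, 2, 4]}
settings.batches = 10
settings.particles = 1000
settings.export_to_xml()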

Current limitations:

yrrepy commented 1 year ago

Hi all, for the record, below is my Python script to convert from MCPL to an OpenMC 0.13.2 source file. Perry

import openmc
import mcpl
from enum import Enum

mcplfile = mcpl.MCPLFile("mcpl.gz")

class ParticleType(Enum):
    # Values must match OpenMC's internal ParticleType enum
    # (neutron=0, photon=1, electron=2, positron=3).
    neutron  = 0             #  neutron   (2112 in MCPL)
    photon   = 1             #  photon    (22   in MCPL)
    electron = 2             #  electron  (11   in MCPL)
    positron = 3             #  positron  (-11  in MCPL)

MCparticles = []
for p in mcplfile.particles:
    weight = p.weight        
    energy = p.ekin*1000000  # MCPL MeV to OpenMC eV
    x  = p.x
    y  = p.y
    z  = p.z
    ux = p.ux
    uy = p.uy
    uz = p.uz
    t  = p.time/1000         # MCPL msec to OpenMC sec                  
    pp = p.pdgcode

    if pp == 2112: 
        pp = ParticleType.neutron
    elif pp == 22:
        pp = ParticleType.photon
    elif pp == 11:
        pp = ParticleType.electron
    elif pp == -11:
        pp = ParticleType.positron

    MCp = openmc.SourceParticle(r=(x, y, z), u=(ux, uy, uz), E=energy, time=t, wgt=weight, surf_id=1, particle=pp) 
    MCparticles.append(MCp)

openmc.write_source_file(MCparticles, 'source.h5')

tkittel commented 1 year ago

@paulromano related to "//do checks on the mcpl_file to see if particles are many enough." I was wondering if it is possible for a particle source in OpenMC to not indicate the number of particles at initialisation, but rather indicate "I ran out of particles now" later, after some events? I am asking because such a capability exists in Geant4, and for the MCPL input hook there I am simply firing off particles (possibly with some user-selected on-the-fly filtering) and then at some point simply saying no-more-particles, and Geant4 is then able to deal with that gracefully, in a non-error-condition kind of way.

If such a capability does not exist, then I see no other way than to do a dummy run through the input file at the beginning in order to count how many appropriate particles exist (i.e. with pdg codes known to OpenMC + possibly, one day, some sort of user filtering). It might not be very awful to do so of course, given that running through an MCPL file is likely much, much faster than simulating a bunch of particles in OpenMC. At least if most of the particles will actually be selected for simulation :smile:
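
As an illustration of such a counting pre-pass, here is a minimal sketch with the Python mcpl module; the accepted pdg codes are the four particle types OpenMC transports, and the helper name is just for illustration:

import mcpl

def count_usable_particles(path, accepted_pdg=(2112, 22, 11, -11)):
    # Dummy run through the file: count particles whose pdg code OpenMC
    # can handle (neutron, photon, electron, positron).
    f = mcpl.MCPLFile(path)
    return sum(1 for p in f.particles if p.pdgcode in accepted_pdg)

print(count_usable_particles('source.10.mcpl'))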

ebknudsen commented 1 year ago

> @ebknudsen Thanks for all the updates and sorry it has taken me so long to get back to this! Here is another round of comments. In addition to the line comments below, a few other things are needed:
>
> * Right now, the test covers writing an MCPL file but it doesn't cover using one as a starting source. You'll need to either modify the existing test or add an additional one.
>
> * For the MCPL stuff to work in CI, we'll need to have MCPL installed. For now, you can follow the same pattern as in `tools/ci/gha-install.njoy.sh`. That is, add a new file `tools/ci/gha-install-mcpl.sh` that runs git clone, cmake, make, make install, and then call that script from `tools/ci/gha-install.sh`. I think we can just always have MCPL enabled in CI by default (will need to update `tools/ci/gha-install.py`).
>
> Let me know if you need any guidance and looking forward to taking another look!

@paulromano Thanks a lot for the comments and the very thorough review - I'll go through all the points, one at a time. Regarding the first point above: the test does do both, following the same logic as for FileSource. In the first step a file is written using OpenMC, and in the second step the newly created file is read. Granted, the test could go deeper and check for file corruption instead of just checking that the file was created.

ebknudsen commented 1 year ago

> @ebknudsen Thanks for all the updates and sorry it has taken me so long to get back to this! Here is another round of comments. In addition to the line comments below, a few other things are needed:
>
> * Right now, the test covers writing an MCPL file but it doesn't cover using one as a starting source. You'll need to either modify the existing test or add an additional one.
>
> * For the MCPL stuff to work in CI, we'll need to have MCPL installed. For now, you can follow the same pattern as in `tools/ci/gha-install.njoy.sh`. That is, add a new file `tools/ci/gha-install-mcpl.sh` that runs git clone, cmake, make, make install, and then call that script from `tools/ci/gha-install.sh`. I think we can just always have MCPL enabled in CI by default (will need to update `tools/ci/gha-install.py`).
>
> Let me know if you need any guidance and looking forward to taking another look!

OK - I've added a gha-install-mcpl.sh script; it seems to work as expected, and I've added the necessary lines to tools/ci/gha-install.sh and tools/ci/gha-install.py. It's not on by default, though.

shimwell commented 1 year ago

Happy to trigger the CI for this PR. I just noticed there is a minor conflict:

<<<<<<< mcpl_input
            if (found = contains(domain_ids_, id))
=======
            if ((found = contains(domain_ids_, id)))
>>>>>>> develop

ebknudsen commented 1 year ago

Just fixed this - thanks for noticing (and pointing it out to me)

shimwell commented 1 year ago

CI is now running :crossed_fingers:

paulromano commented 1 year ago

Thanks again for the contribution @ebknudsen!

ebknudsen commented 1 year ago

> Thanks again for the contribution @ebknudsen!

You are most welcome! Likewise - thanks @paulromano for all the help in finishing it up and making it happen. Just glad to contribute.

yrrepy commented 1 year ago

Hi guys, thanks for all of your efforts.

Update: disregard the below, I found the incantation: <source file="XYZ.mcpl"/>

Original Post: I have built the latest version of OpenMC 0.13.3 with MCPL-1.6.1.

I can write surface sources in MCPL format from OpenMC. But I can't decipher the settings.xml incantation for OpenMC to read in MCPL-format surface sources.

I see that mcpl_interface.h exists, so I think this MCPL-reading capability is present.

Any pointers?

yrrepy commented 1 year ago

I thought I might report that MCPL sources called directly are working great.

I have an MPI build that failed to run with an .h5 RSSA (the build didn't like my GNU HDF5 lib). But that same build was still able to run (in MPI) with the .mcpl RSSA.

With appropriately working builds running with many MPI processes, I am finding that running with .mcpl is usually 1 minute faster than with .h5, and there are some indications that this gap scales up a bit with longer runtimes (e.g. .mcpl being 2 minutes faster than .h5). Of course this is minor compared to the total run time.

But just wanted to report that it is working great, in MPI no less!