CEMeNT-PSAAP / MCDC

MC/DC: Monte Carlo Dynamic Code
https://mcdc.readthedocs.io/en/latest/
BSD 3-Clause "New" or "Revised" License

Decomposed mesh tallies when domain decomposition is active #212

Closed by alexandermote 3 weeks ago

alexandermote commented 1 month ago

Revised mesh tally decomposition after a meeting with @ilhamv:

ilhamv commented 1 month ago

Thanks, @alexandermote !

The MPI DD tests crashed, but all the others passed, which is a good sign. My recommendation: (1) identify the issue/bug in the non-Numba MPI test by running it manually with several ranks (the test uses 4 ranks); (2) then move on to the Numba MPI test. And don't forget to go back in black (e.g., run black *.py on MCDC/mcdc).

alexandermote commented 3 weeks ago

I made several changes with this update:

This should allow at least the non-Numba tests to pass. However, it currently works only when a single processor is assigned to each subdomain. To support multiple processors per subdomain, I will need to add some form of MPI.Reduce restricted to the processors within the same subdomain.

Also, if there is a better spot for the mesh tally reassembly to take place, I'm happy to move it. Because of the way the tally data is packed, I can't replace the decomposed tally with the reassembled one, since they have different sizes. The only solution I could think of was to reassemble the tally inside the generate_hdf5 function and use it there.
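
A minimal mpi4py sketch of the subdomain-scoped reduction and the reassembly step described above. This is illustrative only: the rank-to-subdomain mapping, the tally shapes, and the reassembly standing in for generate_hdf5 are assumptions, not MC/DC's actual data layout.

```python
# Sketch: sum tallies among processors that share a subdomain, then
# stitch the subdomain blocks into the full mesh tally on world rank 0.
# Assumptions (illustrative): 4 ranks, 2 subdomains split along x,
# each subdomain tally is a (2, 4) slab of a (4, 4) global mesh.
import numpy as np
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Map ranks to subdomains; here ranks {0,1} -> subdomain 0, {2,3} -> subdomain 1.
n_subdomains = 2
subdomain_id = rank * n_subdomains // world.Get_size()

# Split the world communicator so the reduction stays within one subdomain.
sub_comm = world.Split(color=subdomain_id, key=rank)

# Each rank holds its local contribution to the decomposed subdomain tally.
local_tally = np.full((2, 4), rank + 1.0)

# (1) Reduce among only the processors assigned to the same subdomain.
sub_tally = np.zeros_like(local_tally)
sub_comm.Reduce(local_tally, sub_tally, op=MPI.SUM, root=0)

# (2) Reassemble the global mesh tally on world rank 0 from the subdomain roots.
payload = (subdomain_id, sub_tally) if sub_comm.Get_rank() == 0 else None
gathered = world.gather(payload, root=0)
if rank == 0:
    global_tally = np.zeros((4, 4))
    for item in gathered:
        if item is not None:
            sid, block = item
            global_tally[sid * 2:(sid + 1) * 2, :] = block
    # The decomposed tally is left untouched (its shape differs from the
    # global mesh); the reassembled array is what an output routine such
    # as generate_hdf5 would write.
    print(global_tally)
```

Run with, e.g., mpiexec -n 4 to exercise the two-processors-per-subdomain case; the same Split/Reduce pattern generalizes to any rank-to-subdomain mapping.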