openmc-dev / openmc

OpenMC Monte Carlo Code
https://docs.openmc.org

Time Cutoff #2454

Closed cfichtlscherer closed 9 months ago

cfichtlscherer commented 1 year ago

Simulations that tally events in specific time intervals can be made more efficient using time cutoffs: particles older than the cutoff value are killed. Keep in mind that a particle's time variable is inherited by its secondaries.

gridley commented 1 year ago

Does it make sense to have time treated differently among the different particles? What is a scenario where we would want all neutrons past a certain time to die, but continue propagating protons, for example?

One thing on my wishlist is a time boundary. For example, we run particles in a fixed source calculation until some boundary in time is hit, then stop them at that point, potentially putting them into a bank of SourceSites and then handing control back to the user with the python API. It would be useful for changing the geometry or densities in a time-dependent fashion. Would functionality like this accomplish what you're looking for? I think @stchaker was going to start looking at implementing something like this pretty soon.

One last point: particles will actually survive slightly longer than the time cutoff here. You'd have to be checking for the max time while advancing along a track segment to kill them at the exact cutoff time.
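To illustrate the point about killing particles at the exact cutoff time, here is a minimal sketch (hypothetical names, not OpenMC code): instead of checking the time only after a track segment, the proposed segment length is clamped to the distance the particle can still fly before the cutoff.

```python
# Hypothetical sketch (not OpenMC code): clamp a proposed track segment so
# the particle stops exactly at the time cutoff instead of overshooting it.
def clamp_to_time_cutoff(distance, speed, time, time_cutoff):
    """Return (distance to advance, whether the cutoff was hit)."""
    time_left = time_cutoff - time    # time budget before the cutoff
    max_distance = speed * time_left  # distance the particle may still fly
    if distance >= max_distance:
        return max_distance, True     # stop exactly at the cutoff
    return distance, False

# A particle at t=0.8 with speed 2.0 and cutoff t=1.0 may fly at most ~0.4:
print(clamp_to_time_cutoff(5.0, 2.0, 0.8, 1.0))
```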

cfichtlscherer commented 1 year ago

Thanks for the feedback!

1) That's a good question, and I'm not sure. I wanted it to be consistent with the energy cutoffs and thought per-particle cutoffs would be the most flexible option. That could make things easier in the future, but we could also make it a single value for all particle types.

2) That sounds interesting! So you plan functionality to modify the model depending on the particle time, with the goal of simulating dynamic systems iteratively? Maybe it's easier to structure it more like the depletion code, where you specify times and densities beforehand, which are then changed?

3) Yes, that's right; I was a bit lazy. I think we could introduce the check before the particle is advanced?

gridley commented 1 year ago

As for why this shouldn't look like depletion: the interesting dynamic reactor problems have multiphysics feedback. You can't know density or temperature over time until you have a history of neutron- (and potentially photon-) driven heating. So it would be handy to do the feedback through the Python API.

Maybe a similar functionality would be useful in depletion, e.g. adjusting for critical boron concentration.

Anyway, IMO a single time cutoff makes the most sense, because the problem is nonphysical if you're artificially killing some specific type of particle. Maybe others will have a differing opinion though.

cfichtlscherer commented 1 year ago

Sure, I suggest we wait a bit longer to see if there are any other opinions. Otherwise, we'll go with the single time cutoff.

gridley commented 1 year ago

Interestingly, it seems that MCNP does have separate time cutoffs for the different particle types; p. 337 of the manual describes it.

https://permalink.lanl.gov/object/tr?what=info:lanl-repo/lareport/LA-UR-22-30006

So I guess it does make sense to do this, although I'm not sure what the application would be.

cfichtlscherer commented 1 year ago

Maybe we want to be able to vary the time variable during the simulation at some point? (Along the lines of https://github.com/openmc-dev/openmc/pull/2355, even if that example won't be part of the code.) Also, you can already give particles different time variables in the source.

cfichtlscherer commented 1 year ago

I ran clang-format over it again.

cfichtlscherer commented 1 year ago

@gridley Thanks a lot for your feedback! What do you think about it now?

gridley commented 1 year ago

I'm just going to wait for one more person to give it the OK before merging, Chris!

paulromano commented 1 year ago

I'll give this a look today :smile:

cfichtlscherer commented 1 year ago

Dear @paulromano, thanks for your feedback! Personally, I prefer option 2, because I think option 1 requires a lot of unnecessary computing time that we could save (@gridley, or do you think option 1 is better?). As I understand it, the idea would be: once a particle has exceeded the cutoff time (i.e., enters the if condition), we just push it back a bit along its direction of flight.

Maybe something like (not tested or tried, just for understanding):

  // Sketch only (not tested): if the particle has flown past the time
  // cutoff for its type, move it back to where it was at the cutoff time
  // and mark it for death.
  if (time() > settings::time_cutoff[static_cast<int>(type())]) {
    double time_diff = time() - settings::time_cutoff[static_cast<int>(type())];
    double push_back_distance = this->speed() * time_diff;

    for (int j = 0; j < n_coord(); ++j) {
      coord(j).r -= push_back_distance * coord(j).u;
    }
    wgt() = 0.0;
  }
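The push-back arithmetic can be sanity-checked numerically (a hypothetical standalone sketch in Python, not OpenMC code): moving the particle back by speed times the overshoot time lands it exactly where it was at the cutoff time.

```python
import numpy as np

# Hypothetical numerical check of the push-back idea above: a particle that
# overshoots the time cutoff is moved back along its flight direction by
# speed * (time - cutoff), which lands it where it was at the cutoff time.
speed = 3.0
cutoff = 2.0
time = 2.5                      # particle overshot the cutoff by 0.5
u = np.array([1.0, 0.0, 0.0])   # unit direction of flight
r = np.array([10.0, 0.0, 0.0])  # position at t = 2.5

push_back_distance = speed * (time - cutoff)
r_at_cutoff = r - push_back_distance * u

# Independently: at t = 2.0 the particle was at r - speed * 0.5 * u,
# i.e. (8.5, 0, 0), matching the push-back result.
print(r_at_cutoff)  # [8.5 0.  0. ]
```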
gridley commented 1 year ago

Maybe this lump of code should go into an inlined function for Particle:

void Particle::move_distance(double length)
{
  for (int j = 0; j < n_coord(); ++j) {
    coord(j).r += length * coord(j).u;
  }
}

Then we don't have to write out two loops. But aside from that, yeah, looks good to me!

gridley commented 1 year ago

Also, I think that code would prematurely kill the particle. The wgt() would have to be zeroed only after the tracklength tally is scored on this last little segment, if we are going to strictly adhere to what Paul suggested. So you'd have to set a bool like hit_time_boundary and then kill the particle after the tracklength tallying.

cfichtlscherer commented 1 year ago

Yes, that's right! I was still wondering whether it could be done differently, since this splits the functionality up a bit, but I think this is the best method.

cfichtlscherer commented 1 year ago

A short status update on the troubleshooting; maybe you have an idea, @gridley, but don't feel obliged at all!

Fast link to the failing test: https://github.com/openmc-dev/openmc/blob/develop/tests/regression_tests/surface_source/test.py

source_true and source_test were both run with 1,000 particles, and each particle carries

r, u, E, time, wgt, delayed_group, surf_id, and particle (https://docs.openmc.org/en/stable/io_formats/source.html). r and u carry 3 entries each, so 12 entries in total per particle.

So I would expect source_true and source_test to consist of 12,000 values each.

But after the datatype is changed:

https://github.com/openmc-dev/openmc/blob/1cb22075efdb90570f233a3d26151f7b763f8529/tests/regression_tests/surface_source/test.py#L80

Printing out the length of source_true returns 13,000 entries. I do not know where the additional 1,000 values are coming from.

Further, when I compare which of the particle entries creates the error, something like (for the time):

    def _compare_output(self):
        """Make sure the current surface_source.h5 agrees with the reference."""
        if self._model.settings.surf_source_write:
            with h5py.File("surface_source_true.h5", 'r') as f:
                source_true = f['source_bank'][()]
                true_t = np.asarray([i[4] for i in source_true])
                true_t.dtype = 'float64'
                # Convert dtype from mixed to float for the comparison assertion
                source_true.dtype = 'float64'
            with h5py.File("surface_source.h5", 'r') as f:
                source_test = f['source_bank'][()]
                test_t = np.asarray([i[4] for i in source_test])
                test_t.dtype = 'float64'
                # Convert dtype from mixed to float for the comparison assertion
                source_test.dtype = 'float64'
            np.testing.assert_allclose(np.sort(true_t),
                                       np.sort(test_t),
                                       atol=1e-07)

It passes the test for all 12 particle entries.

Maybe someone has an idea what's going on.

gridley commented 1 year ago

No clue, sorry dude. Amazing how tough this one has been to crack!

cfichtlscherer commented 1 year ago

No worries. Thanks a lot, anyway! Just to mention again: a lot of your work has gone into this pull request.

cfichtlscherer commented 1 year ago

To be honest, I think the problem lies in the surface source test itself. As described above, I believe that the routine

        if self._model.settings.surf_source_write:
            with h5py.File("surface_source_true.h5", 'r') as f:
                source_true = f['source_bank'][()]
                # Convert dtype from mixed to float for the comparison assertion
                source_true.dtype = 'float64'

does not work as it should, and I propose changing the test slightly so that the data is read out more explicitly. @gridley, @paulromano, what do you think about this change?

gridley commented 1 year ago

What do you think is wrong about how it's being read? It seems like your change would be just sorting the values before comparing, right?

If anything, we should fix the underlying file reader. That would be quite bad to have wrong!

cfichtlscherer commented 1 year ago

Thanks, @gridley!

I am not entirely sure what is going on (probably not a good sign in a pull request ;-) ).

I noticed that the array holding all entries of the 1,000 particles has too many entries: it should have 12,000 but has 13,000. The additional 1,000 entries are causing the test to fail for the time cutoff (I am not sure why, or where they could be triggered by the changes coming with the time cutoff).

The error (the creation of 1,000 additional entries) must occur somewhere in these lines:

        if self._model.settings.surf_source_write:
            with h5py.File("surface_source_true.h5", 'r') as f:
                source_true = f['source_bank'][()]
                # Convert dtype from mixed to float for the comparison assertion
                source_true.dtype = 'float64'

When you read out the individual records of source_true, you get 12 values in a 1-d object, so I think the extra values are created by the

source_true.dtype = 'float64'

statement. Maybe it is not working correctly because we are looking at an array that contains different <class 'numpy.void'> entries, where the first two carry 3 elements and the others only 1.
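One plausible mechanism, shown here with a hypothetical record layout (not necessarily the actual on-disk one): assigning `.dtype = 'float64'` to a structured array reinterprets the raw bytes, so the number of resulting values is total bytes divided by 8. Any padding or extra 8-byte field in the compound type therefore adds values that don't correspond to physical entries.

```python
import numpy as np

# Hypothetical record layout (NOT necessarily the actual surface-source
# layout): 9 float64 values plus three 8-byte integer fields = 96 bytes,
# i.e. 12 float64-sized slots per record.
rec = np.dtype([('r', '3f8'), ('u', '3f8'), ('E', 'f8'), ('time', 'f8'),
                ('wgt', 'f8'), ('delayed_group', 'i8'), ('surf_id', 'i8'),
                ('particle', 'i8')])
bank = np.zeros(1000, dtype=rec)

flat = bank.view()
flat.dtype = 'float64'          # reinterpret raw bytes, as the test does
print(rec.itemsize, flat.size)  # 96 bytes/record -> 12000 values

# If the compound type carried one extra 8-byte field (or padding), the
# same reinterpretation would yield 13 values per record instead of 12:
rec13 = np.dtype([('extra', 'f8')] + rec.descr)
bank13 = np.zeros(1000, dtype=rec13)
flat13 = bank13.view()
flat13.dtype = 'float64'
print(flat13.size)              # 13000
```

So an extra 1,000 values would appear if the compound type read by h5py is 8 bytes per record larger than the 12 "physical" entries suggest; the dtype reassignment itself just exposes that.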

gridley commented 1 year ago

Hm, unfortunately I'm not too familiar with the HDF5 interface code. Paul will probably have some useful insight here.

cfichtlscherer commented 1 year ago

Thanks and sorry it's kind of a mess - I tried to explain it as I understood it...

paulromano commented 11 months ago

@cfichtlscherer What is the status of this PR? I just read through the comments but I'm confused because it looks like tests are passing fine?

cfichtlscherer commented 11 months ago

Hey @paulromano. Yes, it seems like everything is working fine. However, that is only because I made a small modification in tests/regression_tests/surface_source/test.py. As described above, it now compares only 12 entries per particle instead of 13 (as described, I don't know where this mysterious 13th value is coming from). Maybe you can have a quick look to see whether you understand the issue here:

https://github.com/openmc-dev/openmc/pull/2454#issuecomment-1540991788

Otherwise, I am happy to merge it into the code.

An additional comment on that: If we get this merged, we can try to add a score for the killed particles: https://github.com/openmc-dev/openmc/issues/2531

gridley commented 10 months ago

Sorry to take so long to review this.

gridley commented 10 months ago

@paulromano what do you think about this now?

paulromano commented 10 months ago

Thanks for the ping @gridley, I'll take another look after the holiday. Appreciate your patience on this @cfichtlscherer!