Closed by jchelly 1 month ago
Looking at this again, UpdateParticles() already recomputes the halo mean position and velocity before the extra masking step, so as long as those values aren't overwritten we're ok.
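For reference, a minimal sketch of what "recomputing the halo mean position and velocity" over the first Nbound particles looks like. The struct layout and names here are illustrative stand-ins, not the actual UpdateParticles() code:

```cpp
#include <array>
#include <vector>

// Hypothetical particle/halo layout, for illustration only.
struct Particle {
  double Mass;
  std::array<double, 3> Pos, Vel;
};

struct Halo {
  std::array<double, 3> MeanPos{}, MeanVel{};
  std::vector<Particle> Particles;
  long Nbound = 0;

  // Mass-weighted mean position and velocity over the bound particles.
  void RecomputeMeans() {
    double mtot = 0;
    MeanPos = {0, 0, 0};
    MeanVel = {0, 0, 0};
    for (long i = 0; i < Nbound; ++i) {  // only the first Nbound contribute
      const Particle &p = Particles[i];
      mtot += p.Mass;
      for (int k = 0; k < 3; ++k) {
        MeanPos[k] += p.Mass * p.Pos[k];
        MeanVel[k] += p.Mass * p.Vel[k];
      }
    }
    if (mtot > 0)
      for (int k = 0; k < 3; ++k) {
        MeanPos[k] /= mtot;
        MeanVel[k] /= mtot;
      }
  }
};
```

The point is just that the loop bound is Nbound, which is why these means must be computed (or preserved) before any later masking shrinks the particle arrays.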
Restarting the colibre test with this branch gives the same result as doing all snapshots in one run, provided I use one thread and no sampling, apart from tiny differences in SpecificSelfPotentialEnergy. We should probably investigate at some point why that one dataset always differs.
There are still no duplicate particles in the test run with this change.
In the new_masking_step branch, Nbound is written to the SubSnap output and then modified before the SrcSnap is written. On restarting, an out-of-date value is read from the SubSnap files and the code crashes, because Nbound can be greater than the size of the reduced particle arrays read from the SrcSnap.
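A cheap way to catch this class of bug early would be a consistency check when the restart files are read. This is a hedged sketch, not the actual implementation; the struct and function names are hypothetical, chosen to mirror the style of the codebase:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Illustrative stand-ins for the real types.
struct Particle_t {
  long Id;
};

struct Subhalo_t {
  long Nbound = 0;                    // bound count as read back from SubSnap
  std::vector<Particle_t> Particles;  // reduced particle list read from SrcSnap
};

// On restart, a stale Nbound from SubSnap can exceed the SrcSnap array size;
// fail loudly instead of indexing out of bounds later.
void CheckRestartConsistency(const Subhalo_t &sub) {
  if (static_cast<std::size_t>(sub.Nbound) > sub.Particles.size())
    throw std::runtime_error(
        "Nbound exceeds SrcSnap particle array size: "
        "SubSnap and SrcSnap outputs are inconsistent");
}
```

With the masking moved after both outputs this check should never fire, but it would turn a crash deep in the unbinding into an immediate, diagnosable error.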
Here I've modified the code to execute the masking later. This solves the restart problem because the SrcSnap and SubSnap outputs are now consistent.
There's another problem: the extra masking step eliminates particles which we still want (with Nbound from the previous snapshot) for computing halo positions and velocities used after AssignHosts(). It might be necessary to move those calculations earlier in the code.