Closed: bodhinandach closed this issue 4 years ago
`cell->nglobal_particles()` is basically a count across MPI ranks, so that would be sufficient to compute the load. We might also need to consider the number of shared nodes. The difficult part is not deciding whether or not to do load balancing, but getting KaHIP to keep most of the partition the same, so that load balancing doesn't have to move a lot of particles. I noticed that between one partitioning and the next the partition changes a lot, so I'm trying to see how to keep most of it the same.
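The imbalance check described above could be sketched as follows. This is a minimal illustration, not the cb-geo API: `imbalance_ratio` is a hypothetical helper, assuming the per-rank particle counts have already been collected (e.g. from `cell->nglobal_particles()` or an MPI gather).

```cpp
#include <algorithm>
#include <vector>

// Sketch (hypothetical, not cb-geo code): given the particle count on
// each MPI rank, compute the load-imbalance ratio
//   max_rank_count / (total / nranks).
// A ratio near 1.0 means the partition is balanced; a ratio well above
// 1.0 suggests repartitioning is worthwhile.
double imbalance_ratio(const std::vector<long>& particles_per_rank) {
  long total = 0;
  long max_count = 0;
  for (long n : particles_per_rank) {
    total += n;
    max_count = std::max(max_count, n);
  }
  const double ideal =
      static_cast<double>(total) / particles_per_rank.size();
  return static_cast<double>(max_count) / ideal;
}
```

For example, four ranks holding {200, 100, 50, 50} particles give a ratio of 2.0, since the busiest rank has twice the ideal count of 100.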
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@kks32 I am reopening this. Are we going to take a look any time soon?
**Describe the feature**
At the moment, I see that we are specifying `nload_balance_steps_`. Wouldn't it be nice to make it automatic?

**Describe alternatives**
Do an `MPI_Gather` of `particles()` from each rank to rank 0 at the end of each step and decide there; e.g. we can set a limit on how many percent it differs from `nparticles()/mpi_size` to increase/reduce the automatic balancing frequency.

**Additional context**
This is particularly useful when simulating problems like landslides, where initially the particles move rapidly, but in the end they are almost static and stop propagating. If we use a constant `nload_balance_steps_`, we will perform quite useless `domain_decomposition` calls at the end.
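The proposed increase/reduce logic could look something like the sketch below. All names, thresholds, and bounds here are assumptions for illustration, not existing cb-geo MPM code: the idea is simply to shrink the balancing interval while ranks deviate from `nparticles()/mpi_size` by more than a tolerance, and to grow it (up to a cap) once they stay close, as in the late, nearly static stage of a landslide run.

```cpp
#include <algorithm>

// Hypothetical sketch of the proposed adaptive frequency. Given the
// current load-imbalance ratio (max rank count over the ideal count
// nparticles / mpi_size) and the current interval in steps, return the
// next interval between domain decompositions.
unsigned adapt_interval(double imbalance, unsigned interval,
                        unsigned min_interval = 100,
                        unsigned max_interval = 10000,
                        double tolerance = 0.10) {
  if (imbalance > 1.0 + tolerance)
    // Some rank exceeds the ideal count by more than the tolerance:
    // rebalance more often (but not more often than min_interval).
    return std::max(min_interval, interval / 2);
  // Nearly balanced: stretch the interval so the runs of an almost
  // static simulation waste fewer repartitioning calls.
  return std::min(max_interval, interval * 2);
}
```

A 10% tolerance and halving/doubling steps are arbitrary choices; a real implementation would likely expose these as input parameters next to `nload_balance_steps_`.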