Closed tschruff closed 2 years ago
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| spotpy/parallel/mpi.py | 0 | 13 | 0.0% |

<!-- Total: 5 | 18 | 27.78% -->

| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| spotpy/parallel/mpi.py | 1 | 0.0% |
| spotpy/algorithms/dream.py | 6 | 89.22% |

<!-- Total: 7 -->

| Totals | |
|---|---|
| Change from base Build 808: | -0.1% |
| Covered Lines: | 4133 |
| Relevant Lines: | 4601 |
Dear Tobias,
your pull request looks quite promising - I have never had a case where I needed anything other than `MPI_COMM_WORLD`. However, due to a large malware attack on our university, we cannot currently run local tests of your PR. It will take some time until we have the resources to integrate your PR into master.
Best regards,
Philipp
@philippkraft I wonder if you might have the opportunity to look at this again? I was working with @tschruff on using spotpy with a model that is itself parallelised, so our use case was having two levels of parallelism - one in spotpy and one in our model.
Use case
In some situations, users don't want to use `MPI_COMM_WORLD` in spotpy, but a custom communicator, e.g. to run spotpy only on selected processes. This can be achieved easily by providing a custom communicator object when `mpi.ForEach` is created.

Additionally, some advanced setups use parallelization such as MPI internally to crunch the numbers inside `heavy_setup.simulate()`, thereby establishing a second level of parallelization. This requires a custom communicator as well as a custom way to terminate spotpy workers, e.g. to terminate the setup's own workers as well.

Solutions
1. An MPI intra-communicator can be passed to `_algorithm` via `parallel_kwargs` (implementations already support argument forwarding, nothing to do here)
2. An `on_worker_terminate` callback can be passed to `_algorithm` via the `parallel_kwargs`
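To illustrate the intended usage, here is a minimal sketch of the two-level setup. Only `parallel_kwargs` and `on_worker_terminate` come from this PR; the rank-splitting helper, the rank threshold, and the commented sampler call are illustrative assumptions, not spotpy API.

```python
# Hedged sketch: `parallel_kwargs` and `on_worker_terminate` are the hooks
# proposed in this PR; everything else here is illustrative.
# On a real cluster the communicator would come from mpi4py, e.g.:
#   from mpi4py import MPI
#   world = MPI.COMM_WORLD
#   spotpy_comm = world.Split(color=spotpy_color(world.rank), key=world.rank)

def spotpy_color(rank: int, n_spotpy_ranks: int = 4) -> int:
    """Color 0: ranks that run spotpy; color 1: ranks reserved for the
    model's internal parallelization (hypothetical split policy)."""
    return 0 if rank < n_spotpy_ranks else 1

def on_worker_terminate() -> None:
    """Hypothetical callback: shut down the model's own MPI workers before
    the spotpy worker exits, so the second level of parallelism
    terminates cleanly."""
    print("shutting down model workers")

# The custom communicator and the callback would then be forwarded to the
# algorithm (commented out, since the exact sampler signature may differ):
# sampler = spotpy.algorithms.mc(
#     setup, parallel="mpi",
#     parallel_kwargs={"comm": spotpy_comm,
#                      "on_worker_terminate": on_worker_terminate},
# )
```

With this split, ranks 0-3 would drive the sampling while the remaining ranks stay free for `heavy_setup.simulate()`, and the callback gives the setup a chance to tear down its workers when spotpy terminates.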