Closed by Cole-Monnahan-NOAA 11 months ago
@Cole-Monnahan-NOAA Thanks for opening this issue. Calling R functions from a C++ thread is not thread safe and causes a C stack overflow in R; calling other C++ functions in a thread is fine. So far, Rmpi is the preferred solution for parallel execution of FIMS models because it is multi-process rather than multi-threaded, which means every child process has its own R instance and all of the elements required for a model run are copied over to the child process. Both cases here are multi-process and work as expected. The snowfall package is based on Rmpi, and both require the model elements to be broadcast to the child processes.
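For concreteness, a minimal sketch of that multi-process pattern with snowfall, assuming a hypothetical run_fims_model() wrapper and a list of model inputs (neither is part of FIMS; only the snowfall calls are real):

```r
library(snowfall)

sfInit(parallel = TRUE, cpus = 4)            # spawn worker R processes
sfLibrary(FIMS)                              # load FIMS in each child process
sfExport("model_inputs", "run_fims_model")   # broadcast objects to the workers

## each worker fits one model in its own R instance
fits <- sfLapply(model_inputs, run_fims_model)

sfStop()
```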
So there is no way to parallelize within a model, e.g. speed up the population dynamics/expected quantities? That's a hard constraint on the FIMS architecture?
@Cole-Monnahan-NOAA There is, but only if TMB's parallel code is applicable in the C++ code. In general, population dynamics algorithms aren't conducive to multi-threading because of their algorithm structure and race conditions. If there is a place for it, we're good, but calling R functions from a thread won't work.
Per issue #528, we now have a test script in the main branch that compares multiple ways of running FIMS in parallel. Should we close this, @Cole-Monnahan-NOAA, or is this more about the R interface?
I was hoping there would be a solution to parallelize internally like some TMB models do. It is true, as @msupernaw pointed out, that many calculations are not parallelizable. I don't have a sense of whether making the likelihood calculations parallel would be worthwhile. But for now the linked approaches are great for MCMC, simulation, retrospectives, etc., so that's a good thing. Closing.
Is your feature request related to a problem? Please describe.
This issue was split off from this one.
Parallel execution of models is a highly desired outcome for FIMS models. This is particularly true for cloud-based solutions. My understanding is there are two types of parallelization to consider: (1) among models and (2) within a model.
(1) is common for things like MCMC, simulation testing, and retrospectives, where each model/instance is simply run separately. (2) is used to speed up a single model run by parallelizing calculations within an assessment; TMB uses a parallel accumulator to do this (examples here and here). My impression is that this is not likely to create major speedups for stock assessments, because most of the calculations are not amenable to being split between processes, in contrast to a regression with many independent data points.
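As a rough illustration of type (1), here is a sketch using base R's parallel package to run retrospective peels as independent processes; run_retro_peel() and dat are hypothetical stand-ins rather than FIMS functions:

```r
library(parallel)

cl <- makeCluster(4)                            # one R process per worker
clusterEvalQ(cl, library(FIMS))                 # load FIMS on each worker
clusterExport(cl, c("dat", "run_retro_peel"))   # copy inputs to each worker

## each peel is an independent model run, so the workers never share state
retros <- parLapply(cl, 0:4, function(peel) run_retro_peel(dat, peel))

stopCluster(cl)
```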
I'm far from an expert on parallel computation so please feel free to jump in!
Describe the solution you would like.
Ideally we'd be able to do both types easily across platforms. I tested (1) using R's snowfall package and it was successful (on Windows), so that's a good sign.
For the linear regression example in TMB, the only modification on the user end is to run openmp(max=TRUE) to set the maximum available threads. That'd be ideal for FIMS users as well.
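For reference, the full user-side sequence for such a parallel TMB template looks roughly like this; "linreg_parallel" is a placeholder name, and data and parameters are assumed to already exist:

```r
library(TMB)

compile("linreg_parallel.cpp", openmp = TRUE)   # build the template with OpenMP
dyn.load(dynlib("linreg_parallel"))

openmp(max = TRUE)                              # set max available threads
obj <- MakeADFun(data, parameters, DLL = "linreg_parallel")
opt <- nlminb(obj$par, obj$fn, obj$gr)          # likelihood evaluated in parallel
```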
Describe alternatives you have considered
None.
Statistical validity, if applicable
N/A
Describe if this is needed for a management application
N/A
Additional context
N/A