Open edsandorf opened 3 years ago
To increase efficiency, the candidate set is created once on the master process and is not distributed to the workers, which reduces RAM use.
Disregard comment above.
To avoid distributing data to the cores manually, which runs the risk of one core finishing before the others and sitting idle, it would be better to implement parallelism across the Bayesian priors. That way, we don't have to make any changes between the MNL and RPL implementations when distributing draws, but can rely fully on the fact that iterating over the priors is "embarrassingly parallel".
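A minimal sketch of this idea in Python (the function name `evaluate_design`, the quadratic criterion, and the prior draws below are hypothetical stand-ins, not the package's actual code): each Bayesian prior draw is evaluated on a separate worker and the results are averaged, so no manual partitioning of the draws is needed.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_design(prior):
    # Hypothetical criterion: in the real package this would compute
    # e.g. the D-error of the candidate design at this prior draw.
    return sum(p ** 2 for p in prior)

def bayesian_efficiency(prior_draws):
    # Each prior draw is independent of the others, so mapping over
    # the draws is embarrassingly parallel.
    with ProcessPoolExecutor() as pool:
        errors = list(pool.map(evaluate_design, prior_draws))
    return sum(errors) / len(errors)

if __name__ == "__main__":
    draws = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.2]]
    print(bayesian_efficiency(draws))
```

Because the same map-over-priors pattern applies regardless of which model's efficiency criterion sits inside `evaluate_design`, the MNL and RPL code paths would not need separate parallelization logic.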
We would need to consider how to implement parallelism in the absence of Bayesian priors. Potentially we could distribute separate candidate sets to each core, but then how would the MF or RSC algorithms work, given that they iterate on the previous design?
At least for small designs and MNL designs, the overhead of communicating across cores is more costly than running each evaluation sequentially. This will be postponed until we start implementing MIXL designs.
Optimizing the designs should be done in parallel to reduce computation time; each optimization can be done independently of the others.
Single core optimization is the default.
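One way to sketch this in Python (the `optimize_design` function, the seeded random restarts, and the placeholder criterion are all hypothetical, not the package's actual algorithms): run several independent optimization attempts, either sequentially (the default) or across cores, and keep the best result.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def optimize_design(seed):
    # Hypothetical stand-in for one independent optimization run
    # (e.g. one random restart of a swapping algorithm).
    rng = random.Random(seed)
    design = [rng.random() for _ in range(4)]
    error = sum(design)  # placeholder efficiency criterion
    return error, design

def best_design(n_runs, parallel=False):
    # Single-core optimization is the default (parallel=False).
    seeds = range(n_runs)
    if parallel:
        # Runs are independent, so they can be farmed out to workers.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(optimize_design, seeds))
    else:
        results = [optimize_design(s) for s in seeds]
    return min(results)  # keep the run with the lowest error

if __name__ == "__main__":
    err, design = best_design(8, parallel=True)
    print(err)
```

Because each run is seeded independently, the parallel and sequential paths return identical results, which makes the parallel option a pure speed-up rather than a behavioral change.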