JuliaDynamics / ABMFrameworksComparison

Benchmarks and comparisons of leading ABM frameworks

Use distributed computing (parallelization) in one of the models? #55

Open Datseris opened 11 months ago

Datseris commented 11 months ago

The discussion here https://discourse.julialang.org/t/ann-vahana-jl-framework-for-large-scale-agent-based-models/102024 made me realize: Agents.jl allows distributed computing straightforwardly when, e.g., scanning parameters or running a model several times with different seeds to get statistical convergence.

Yet, none of the comparisons here utilize this. Is this fair to us? Probably not. Should we modify one of the existing examples so that instead of running a model once, it runs 1000 models, each with a different rng seed? And each framework may use whatever (API-declared) tools to accelerate this computation?
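For concreteness, a minimal sketch of what that could look like using only Julia's Distributed standard library; `run_model` is a hypothetical stand-in for whichever example model we pick, and the 1000 seeds are the replicates proposed above:

```julia
using Distributed
addprocs(4)  # one worker per available core

@everywhere begin
    using Random
    # Hypothetical stand-in for one of the existing example models:
    # build the model from a seed, step it, and return whatever data we collect.
    function run_model(seed)
        rng = Xoshiro(seed)
        # ... construct and step the model with `rng` ...
        return rand(rng)  # placeholder result
    end
end

seeds = 1:1000
results = pmap(run_model, seeds)  # one replicate per seed, spread over the workers
```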

@Tortar thoughts?

Tortar commented 11 months ago

I think that in a sense it should be enough to extrapolate that running models in parallel should give a speed-up equal to the number of cores available (as long as memory usage does not exceed the available memory).

Certainly, if things are not managed optimally by each framework, the speed-up could be less than the number of cores available, and it would be interesting to check that anyway.
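As a rough sketch of such a check (assuming a `run_model(seed)` function like the one above): time the replicates serially and in parallel, then compare the measured speed-up against the worker count:

```julia
using Distributed

seeds = 1:1000
t_serial   = @elapsed map(run_model, seeds)
t_parallel = @elapsed pmap(run_model, seeds)

speedup    = t_serial / t_parallel
efficiency = speedup / nworkers()  # 1.0 would be the ideal linear speed-up
```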

Unfortunately, we only have two-core machines :P https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources, so...

Datseris commented 11 months ago

I think that in a sense it should be enough to extrapolate that running models in parallel should give a speed-up equal to the number of cores available (as long as memory usage does not exceed the available memory).

Sure, but this doesn't take into account how easy it is to run in parallel. In Agents.jl it is relatively simple: https://juliadynamics.github.io/Agents.jl/stable/api/#Agents.ensemblerun!
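To illustrate, a minimal sketch of how I read that API from the linked docs, assuming the generator-based `ensemblerun!` method and its `parallel` keyword; `initialize_model` and `:some_property` are hypothetical placeholders, and the exact signature should be double-checked against the Agents.jl version we benchmark:

```julia
using Agents, Distributed
addprocs(4)                      # one worker per available core
@everywhere using Agents

# Hypothetical constructor standing in for one of the existing example models;
# the seed fully determines the model's RNG, so each replicate is independent.
@everywhere make_model(seed) = initialize_model(; seed)

n_steps = 100                    # steps per replicate (placeholder)

# 1000 replicates, each built from a different seed, distributed over the workers.
adf, mdf = ensemblerun!(make_model, n_steps;
    ensemble = 1000,
    seeds    = rand(UInt32, 1000),
    adata    = [:some_property], # hypothetical agent data to collect
    parallel = true,
)
```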

We are comparing not only performance in this repo, but also simplicity.