JuliaDynamics / Agents.jl

Agent-based modeling framework in Julia
https://juliadynamics.github.io/Agents.jl/stable/
MIT License

Extend to GPUs #70

Open Datseris opened 4 years ago

Datseris commented 4 years ago

To consider much larger ABMs, it would be great if we can extend Agents.jl to handle GPUs.

narayanivedam commented 4 years ago

I want to learn the behaviour of models with more than 10000 nodes. I have been trying to port my code to run on the GPU, but with little success. I would really appreciate it if we could get Agents.jl to handle GPUs.

kavir1698 commented 4 years ago

I want to learn the behaviour of models with more than 10000 nodes.

Sure, but 10000 nodes doesn't sound like too many. Besides, the number of nodes is not much of a limiting factor; the number of agents is.

narayanivedam commented 4 years ago

In my case, the number of agents equals the number of nodes, so I use them interchangeably.

kavir1698 commented 4 years ago

I would still give it a try on the CPU. I have run simulations with more than 100,000 agents in reasonable time.

narayanivedam commented 4 years ago

The code I was trying to run on the CPU, and my efforts at optimising it, are linked here. Currently, with 10,000 nodes it takes nearly 28 s for one run, which becomes inconvenient for Monte Carlo simulations.

Do you still think I should give CPU a try?

kavir1698 commented 4 years ago

28 s for 5000 steps, 10000 agents, and your algorithm is a reasonable time.

narayanivedam commented 4 years ago

5000 steps is the maximum. It usually terminates within 100 steps, once the condition is satisfied. And I want to run 1000 or more Monte Carlo runs, which takes a few hours.
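
Since the replicates are independent, a minimal sketch of how such Monte Carlo runs could be spread over CPU threads before reaching for a GPU. This is illustrative only, not code from this thread or from Agents.jl; `run_once(seed)` is a hypothetical stand-in for building and stepping one model instance:

```julia
using Random

# Hypothetical stand-in for one Monte Carlo replicate: build the model with the
# given seed, step it until the stopping condition, and return a summary value.
function run_once(seed)
    rng = MersenneTwister(seed)
    # ... build and step the actual model here; placeholder result for the sketch:
    return rand(rng)
end

# Independent replicates are embarrassingly parallel; starting Julia with
# `julia -t auto` lets this loop use all CPU cores.
function montecarlo(nruns)
    results = Vector{Float64}(undef, nruns)
    Threads.@threads for i in 1:nruns
        results[i] = run_once(i)   # one seed per replicate, for reproducibility
    end
    return results
end

results = montecarlo(1000)
```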

kavir1698 commented 4 years ago

I see. It wouldn't be that much work to implement it in Agents.jl, since you already have a working algorithm. Why not give it a try?

Libbum commented 4 years ago

In the JuliaCon 2020 Birds of a Feather session on Dynamical Systems, there was a discussion about investigating kernel abstractions for such a task. I'm unsure whether the point was to wrap an entire agent model into a kernel and do parameter estimation in a massively parallel setting, or to step tens of thousands of agents in parallel (i.e. wrap step functions in kernels). We can certainly investigate both.
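
A minimal sketch of the second option, wrapping an agent step function in a kernel with KernelAbstractions.jl. It assumes the agent state is kept in flat arrays, uses API names from recent KernelAbstractions versions (`get_backend`, `synchronize`), and is not Agents.jl code:

```julia
using KernelAbstractions

# One thread per agent; simple continuous-space movement as a stand-in for a
# real agent_step! body.
@kernel function step_kernel!(x, y, vx, vy, dt)
    i = @index(Global)
    x[i] += vx[i] * dt
    y[i] += vy[i] * dt
end

function step_agents!(x, y, vx, vy, dt)
    backend = get_backend(x)             # CPU or GPU, inferred from the arrays
    kernel! = step_kernel!(backend)
    kernel!(x, y, vx, vy, dt; ndrange = length(x))
    KernelAbstractions.synchronize(backend)
    return nothing
end

# CPU example; passing CuArray versions of the same vectors (with CUDA.jl loaded)
# runs the identical kernel on the GPU.
n = 10_000
x, y, vx, vy = rand(n), rand(n), randn(n), randn(n)
step_agents!(x, y, vx, vy, 0.01)
```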

AayushSabharwal commented 2 years ago

I've been thinking about this, since it's certainly something that would be immensely useful. The current Agents.jl codebase is not easy to port to the GPU, and even a minimal working example would require a fair bit of work. I think it might be worth starting from scratch for this, supporting only basic features at first.

Once we have something to work off of, more functionality can be added.
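
One hypothetical shape for such a restricted starting point (the names below are made up, not an Agents.jl API): keep every agent field in its own flat array, so the whole population can be moved to the GPU and updated with broadcasts:

```julia
using CUDA   # assumption: NVIDIA GPU; other backends would need other array types

# Instead of a Vector of mutable agent structs, store each field in its own
# array ("struct of arrays"), which GPUs handle well.
struct GPUAgentData{V}
    pos_x::V
    pos_y::V
    energy::V
end

# Build on the CPU, then move every field to the GPU with one call.
to_gpu(a::GPUAgentData) = GPUAgentData(CuArray(a.pos_x), CuArray(a.pos_y), CuArray(a.energy))

agents = GPUAgentData(rand(10_000), rand(10_000), fill(1.0, 10_000))
gpu_agents = to_gpu(agents)

# Whole-population updates are then expressed as broadcasts, which CUDA.jl
# compiles to GPU kernels automatically.
gpu_agents.energy .-= 0.1f0
```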

Datseris commented 2 years ago

That sounds fair to me. Agent simulations will never be able to truly run on GPUs due to their nature, but perhaps a heavily restricted pool of possible dynamics is feasible. What you write above sounds like a good start; however, we should be careful. If we restrict it so much that it becomes a cellular automaton (e.g. forest fire), then we don't need agents at all: a matrix is enough (again, see forest fire or the temperature dynamics of Daisyworld).
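
To make that distinction concrete, a hypothetical matrix-only "forest fire"-style step: when the dynamics reduce to a cellular automaton like this, a state matrix plus broadcasting is enough and no agent structs are needed. The same broadcast-only function should also run on a CuArray (with `CUDA.rand` in place of `rand`):

```julia
# Cell states: 0 = empty, 1 = tree, 2 = burning.
function fire_step(state, p_grow)
    # A cell has a burning neighbour if any of the four shifted copies is burning.
    burning_neighbour =
        (circshift(state, (1, 0))  .== 2) .| (circshift(state, (-1, 0)) .== 2) .|
        (circshift(state, (0, 1))  .== 2) .| (circshift(state, (0, -1)) .== 2)
    grow   = (state .== 0) .& (rand(Float32, size(state)) .< p_grow)
    ignite = (state .== 1) .& burning_neighbour
    # Burning cells burn out, trees next to fire ignite, some empty cells regrow.
    return ifelse.(state .== 2, 0, ifelse.(ignite, 2, ifelse.(grow, 1, state)))
end

state = rand(0:1, 200, 200)   # random trees and empty cells (CPU demo)
state[100, 100] = 2           # start a fire in the middle
state = fire_step(state, 0.01f0)
```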

AayushSabharwal commented 2 years ago

I agree. The restrictions are only to start off with; we can gradually loosen them as more features stabilise. I'd like to start working on this in my free time. Should it be done in a new repository, or in some other way?

Datseris commented 2 years ago

Just a gpu folder with a submodule under src and an ongoing PR is enough, I think, but also feel free to open a new repo if that is easier for you.

Be sure to have a look at DynamicGrids.jl. It does GPU cellular automata simulations, which is why I'm stressing that we need to be able to do something with movable agents for this to be worth it.

jgamper commented 2 years ago

Not sure if you have seen this, but some of the ideas in this package/paper might be useful:

arxiv: WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU

code: https://github.com/salesforce/warp-drive

🚀