Closed (by pcaillou, 8 years ago)
Hi,
We are discussing this possibility and it's not obvious how to introduce it without
requiring, from the user, some "know-how" about concurrency. Contrary to what you write,
concurrency problems may arise when running reflexes in parallel, depending on what
the reflexes manipulate. Just changing the location of the agent will involve concurrent
accesses to the environment, for instance.
Anyway, as I said, we are thinking about introducing some concurrency at some point,
but it will require careful moves.
Do not hesitate to share your needs/ideas!
Cheers
Alexis
Original issue reported on code.google.com by alexis.drogoul
on 2013-12-07 06:14:15
As the execution order of the reflexes scheduled at the same time step is not (strictly)
defined by the specification, introducing parallel execution would not cause any problem.
The ask statement could allow a context to be locked, i.e. no other read/write access
is performed while it is held. That would prevent the exposure of inconsistent states. (Oh, it
would be great to model the dining philosophers with a multi-threaded GAMA!)
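The "locked context" idea could be sketched in Java (a minimal illustrative example, not GAMA's actual API; `contextLock`, `moveIn`, and `occupants` are hypothetical names). A lock around the read-modify-write of a shared environment value is exactly what prevents two concurrently running reflexes from exposing an inconsistent state:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockedContextDemo {
    // Hypothetical shared environment value; the lock plays the role of a
    // "locked context": no other reflex can read or write it mid-update.
    static final ReentrantLock contextLock = new ReentrantLock();
    static int occupants = 0;

    // Atomically increments the shared counter and returns the new value.
    static int moveIn() {
        contextLock.lock();
        try {
            return ++occupants; // read-modify-write is atomic w.r.t. other reflexes
        } finally {
            contextLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] agents = new Thread[4];
        for (int i = 0; i < agents.length; i++) {
            agents[i] = new Thread(() -> {
                for (int k = 0; k < 1000; k++) moveIn();
            });
            agents[i].start();
        }
        for (Thread t : agents) t.join();
        System.out.println(occupants); // 4000: no lost updates
    }
}
```

Without the lock, the same run would typically lose updates, since `occupants++` is not atomic.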
In general I would think that ABM naturally favours parallel execution, especially
if the simulation progresses in small steps (i.e. inconsistent states are still a good
approximation of "reality").
As simulations can become computationally intensive, any means of speeding them up
is highly welcome (that includes me doing smarter modelling :-) ).
Original issue reported on code.google.com by achim.gaedke@signal41.com
on 2013-12-19 01:59:24
Hi,
I would like to know more details about your needs, because I currently have on my computer
a version that can launch multi-threaded simulations in headless mode, i.e. multiple instances
of one simulation. It is as if we ran the simulation 100 times, but multi-threaded.
So I would like to know whether your speedup needs relate to this mode, or to
multi-threading the agents within a single simulation. In your case, if we have 10,000
agents, would we need 10,000 threads, or something like that?
Cheers.
Original issue reported on code.google.com by hqnghi88
on 2013-12-19 09:22:31
Hi!
Thanks for coming back and asking. That is much appreciated.
It would be great to use all cores of a quad-core machine to speed up a simulation
run (assuming that memory access is not the bottleneck). My simulations comprise
10 to 100 thousand agents, so I'd be happy to have four worker threads.
Cheers, Achim
Original issue reported on code.google.com by achim.gaedke@signal41.com
on 2013-12-19 22:15:44
As a side note and reminder: the current version of GAMA supports multi-simulation experiments (but within a single thread). The solution for transforming this architecture into a multi-threaded run is not trivial, and probably involves using ThreadLocal values for all the outputs (for example, the displays maintain a state which happens to be shared among the simulations). But it is feasible: without outputs, it runs almost perfectly.
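The ThreadLocal approach mentioned above can be sketched as follows (an illustrative example, not GAMA code; `displayState` and `record` are hypothetical names). Each simulation thread gets its own copy of the output state, so concurrent simulations no longer interfere through a shared display buffer:

```java
public class ThreadLocalOutputDemo {
    // Hypothetical per-simulation display state: each thread that touches it
    // gets its own private StringBuilder rather than one shared instance.
    static final ThreadLocal<StringBuilder> displayState =
            ThreadLocal.withInitial(StringBuilder::new);

    // Appends to the current thread's private buffer and returns its contents.
    static String record(String msg) {
        displayState.get().append(msg);
        return displayState.get().toString();
    }

    public static void main(String[] args) throws InterruptedException {
        final String[] results = new String[2];
        // Two "simulations", each on its own thread with its own buffer.
        Thread simA = new Thread(() -> results[0] = record("sim-A output"));
        Thread simB = new Thread(() -> results[1] = record("sim-B output"));
        simA.start(); simB.start();
        simA.join(); simB.join();
        System.out.println(results[0]); // sim-A output
        System.out.println(results[1]); // sim-B output
    }
}
```

With a plain shared StringBuilder instead, the two simulations would interleave their writes into one buffer, which is exactly the shared-display problem described above.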
The basic proof of concept has been committed and works quite well with the displays, too: multi-simulations now run in a multi-threaded way, but they are synchronized on the steps (i.e. the steps of the simulations run in parallel, but the experiment waits, every step, for the termination of all these parallel steps). I still have to add a facet to experiment to enable or disable this behavior (and maybe fix the max. number of threads, too).
Batch experiments now use multi-threading partially (i.e. when running parallel repetitions of the simulations). Not sure if opening it up more would be helpful. In any case, the speed gain is impressive when doing a large number of repetitions. I am closing this issue.
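The "steps run in parallel, experiment waits for all of them" scheme can be sketched in Java (illustrative only; `stepAll` is a hypothetical name, not GAMA's implementation). `ExecutorService.invokeAll` blocks until every submitted task has completed, so it acts as the per-step synchronization barrier:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StepBarrierDemo {
    // Runs one step of every simulation in parallel, then waits for all of
    // them before returning; invokeAll only returns once every task is done,
    // so the experiment cannot advance until the whole step has finished.
    static List<Integer> stepAll(ExecutorService pool, List<Callable<Integer>> steps)
            throws InterruptedException, ExecutionException {
        List<Integer> results = new ArrayList<>();
        for (Future<Integer> f : pool.invokeAll(steps)) {
            results.add(f.get()); // futures come back in submission order
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Four hypothetical simulations, each computing its next state.
        List<Callable<Integer>> steps = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int id = i;
            steps.add(() -> id * 10);
        }
        System.out.println(stepAll(pool, steps)); // [0, 10, 20, 30]
        pool.shutdown();
    }
}
```

Calling `stepAll` once per cycle gives exactly the behavior described: parallel steps, synchronized experiment.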
Recent developments support running agents in parallel, either within their species/grid or as targets of an ask statement. It is still experimental, but can be tested on the latest builds.
There are two ways to test it: add the facet parallel: true either to grid/species definitions or to ask statements. The value of this facet can also be set to an integer, which then represents the minimum number of agents below which the run remains sequential. So, for instance, ask my_agents parallel: 1 { ... do something ... } will make all the agents run in parallel, and species aa parallel: 50 {...} will step the agents in parallel batches of 50. Having this level of control is important because, for simple agents, the cost of creating parallel tasks and scheduling them among the threads can be higher than their execution time, ruining the interest of running them in parallel. A default value for this 'sequential threshold' can also be set in the preferences.
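The rationale behind the sequential threshold can be sketched in Java (illustrative only; `stepAgents` and its `threshold` parameter are hypothetical names, not GAMA's implementation). Small populations are stepped sequentially, since below some size the cost of creating and scheduling parallel tasks outweighs the work itself:

```java
import java.util.Arrays;
import java.util.function.IntUnaryOperator;
import java.util.stream.IntStream;

public class ParallelThresholdDemo {
    // In the spirit of the parallel facet: below `threshold` agents, step
    // them sequentially; at or above it, step them in parallel. Either way
    // the result array keeps the agents' original order.
    static int[] stepAgents(int[] agents, IntUnaryOperator step, int threshold) {
        IntStream s = Arrays.stream(agents);
        if (agents.length >= threshold) {
            s = s.parallel(); // enough agents to amortize task-scheduling cost
        }
        return s.map(step).toArray();
    }

    public static void main(String[] args) {
        int[] agents = IntStream.range(0, 8).toArray();
        // Threshold 4 with 8 agents: this step runs in parallel.
        int[] next = stepAgents(agents, x -> x + 1, 4);
        System.out.println(Arrays.toString(next)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```

Tuning such a threshold is a measurement question: the break-even population size depends on how much work each agent's step does.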
Note that this is an experimental feature. Depending on the models, unexpected conflicts or errors can happen (especially if the agents share common structures or manipulate each other). Also, although the parallel runs try to preserve the original sequence as much as possible, there is no guarantee that the agents will be scheduled in the order defined by the modeler.
That said, preliminary tests show vast improvements for models run on multi-core architectures (up to 3x faster in some cases).
Enjoy !
Original issue reported on code.google.com by Achim.Gaedke
on 2013-11-25 06:19:44