Open namdre opened 5 years ago
Adding an RNG for every lane roughly doubles memory consumption (tested with /usr/bin/time --verbose on a 100x100 grid: memory consumption grows from 1.08 GB to 2.24 GB).
Alternatively: add an option for setting the number of RNGs (>= number of threads) and use lane numerical id % numRNGs to assign each lane an RNG and thread. This allows for repeatable runs as long as the number of RNGs is kept constant.
I would prefer the alternative mentioned above. We can start with a default of 32 (or 64?) RNGs and issue a warning that results may differ if the user configures more threads than that. A drawback of this approach, however, is that every small change to the network that modifies the number of lanes can change the results of the simulation. Should we rather assign the RNG based on a hash of the edge id, or is that not worth it?
Since we can assign a reference to the RNG once at simulation start, the additional effort of using a hash rather than modulo is negligible and well worth it.
@behrisch I've created branch 'Parallel' to continue work
new numbers:
```
buildHistory/v0_25_0                   2.8438663205   32297.3333333
buildHistory/v0_26_0                   3.01626476096  33280.0
buildHistory/v0_27_0                   3.3468313885   33738.6666667
buildHistory/v0_28_0                   3.66370113079  33730.6666667
buildHistory/v0_29_0                   3.76459871051  35629.3333333
buildHistory/v0_30_0                   4.16038572417  36850.6666667
buildHistory/v0_31_0                   4.08362121644  36733.3333333
buildHistory/v0_31_0-r26751-9f56414    4.61237300273  36812.0
buildHistory/v0_31_0-r26752-739f0ec    4.85348234974  39013.3333333
buildHistory/v0_32_0                   4.98972758449  38329.3333333
buildHistory/v0_32_0-1253-g63c03b44b5  4.83993265831  38657.3333333
buildHistory/v0_32_0-2706-g2834c9eaed  5.07848320999  39084.0
buildHistory/v1_0_1                    5.4197367181   39494.6666667
```
Is this microseconds per vehicle update? Maybe move this to #4513.
Improved speed via parallelisation would be immensely helpful; is there any update on this issue, seeing as it didn't make it into the 1.2.0 milestone? Thanks :)
No progress here at the moment, sorry. It is on our timeline for this year.
What about working out a brand-new framework for parallel microscopic simulation? Is SUMO irreplaceable? Parallelisation is very important for the future.
Hello everyone! I can only confirm that being able to leverage multi-core machines is key to scaling simulations beyond small scenarios. Work seems to already be in a fairly advanced state, although I have only had a quick look at it. I can only +1 this issue in the hope that it gets solved quickly.
[x] planMove
[ ] executeMove
[ ] laneChange