This PR takes a suggestion from @wrightky and separates the creation of the routing weights from the particle random walk itself. From the perspective of someone running functions in `routines.py` or calling `run_iteration()` from `particle_track.py`, usage of the code does not change at all (which is good).

Internally, this PR splits `get_weight()` into `make_weight()` and `get_weight()` in `lagrangian_walker.py`.
`make_weight()` is called when the `Particles` class is initialized. All of the routing weights are computed at that time and stored in an L x W x 9 array (where L and W are the length and width of the domain). A progress bar was added for this initialization, as it may take some time on larger domains.
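The precomputation described above can be sketched roughly as follows. This is a minimal illustration, not dorado's actual `make_weight()`: the function name, the depth-based weighting, and the `theta` parameter here are assumptions used only to show how a (L, W, 9) lookup array might be filled once at initialization.

```python
import numpy as np

def make_weight_sketch(depth, theta=1.0):
    """Hypothetical sketch: precompute routing weights for every cell.

    Weights for the 8 neighbors plus the cell itself are stored in a
    (L, W, 9) array, so the per-step walk only has to index into it.
    """
    L, W = depth.shape
    weights = np.zeros((L, W, 9))
    # Pad the field so edge cells can look up all 9 neighbors safely
    padded = np.pad(depth, 1, mode='edge')
    k = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            # Illustrative weighting: neighbor water depth raised to theta
            weights[:, :, k] = padded[1 + di:1 + di + L,
                                      1 + dj:1 + dj + W] ** theta
            k += 1
    # Normalize so each cell's 9 weights sum to 1 (guard empty cells)
    total = weights.sum(axis=2, keepdims=True)
    weights /= np.where(total > 0, total, 1)
    return weights
```

Because this runs once per flow field rather than once per particle step, its cost is amortized over every particle and iteration that follows.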
`get_weight()` is now a small function that simply calls either the random picker or the steepest-descent function, depending on the parameters of the run.
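A dispatcher of that shape might look like the sketch below. The function name, signature, and `steepest_descent` flag are hypothetical stand-ins for illustration; only the two-branch structure (deterministic argmax vs. weighted random draw from the precomputed array) reflects what the paragraph describes.

```python
import numpy as np

def get_weight_sketch(weights, i, j, steepest_descent=False, rng=None):
    """Hypothetical sketch: pick the next-cell index (0-8) for the
    particle at cell (i, j) from precomputed per-cell weights."""
    cell_weights = weights[i, j, :]
    if steepest_descent:
        # Deterministic choice: always take the highest-weight neighbor
        return int(np.argmax(cell_weights))
    if rng is None:
        rng = np.random.default_rng()
    # Stochastic choice: weighted random draw over the 9 candidates
    return int(rng.choice(9, p=cell_weights))
```

Since the weights are already normalized and stored, each call is just an array lookup plus one draw, which is where the per-step savings come from.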
Example scripts run at about the same speed as before, since most of their processing time is spent on plot creation and saving. For people using the `run_iteration()` functions to simulate many particles, however, I think this change will greatly speed things up.
Note: Unit tests of the example cases and checks on individual particle stepping all pass. While the weighting calculation has changed, the "random selection" process for the particle walk has not, so previous tests that set the RNG seed to create deterministic (reproducible) results still work. This also implies that any previous simulations conducted with `dorado` will be identical to new ones; the new simulations will just run faster.
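The reproducibility property relied on here can be demonstrated in miniature. This is a generic illustration of seeded randomness, not dorado's test code (dorado may seed its RNG differently):

```python
import numpy as np

def walk_once(weights, seed):
    """Illustrative: take one weighted random step with a fixed seed."""
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(weights), p=weights))

w = np.array([0.1, 0.2, 0.3, 0.4])
# Same seed, same draw: as long as the selection logic is unchanged,
# seeded results stay identical even if the weights are computed earlier.
assert walk_once(w, seed=42) == walk_once(w, seed=42)
```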
This PR also closes #16 by allowing the documentation to be built on Python 3.9. This implies that `dorado` will work on Python 3.9, but since we don't explicitly run unit tests on Python 3.9, I don't think we should say we 'officially' support it ... yet.
Speed comparisons
This compares the speed of particle routing for the code in this PR vs. the current code. All conditions use steady flow fields, and particles are stepped by iterating over `run_iteration()` repeatedly (no routines are used). Each condition in the table below was run 3 times, and an approximate average time is reported. These tests were run on the example domains, using Python 3.8.3.
| Domain Used | Number of Particles | Number of Iterations | Current Speed [s] | This PR [s] |
|---|---|---|---|---|
| anuga example domain | 100 | 50 | 0.8 | 1.3 |
| anuga example domain | 1000 | 50 | 8.0 | 2.6 |
| anuga example domain | 5000 | 50 | 41.0 | 8.1 |
| deltarcm example domain | 100 | 50 | 0.7 | 3.5 |
| deltarcm example domain | 1000 | 50 | 7.5 | 4.8 |
| deltarcm example domain | 5000 | 50 | 38.0 | 10.2 |
The new method of computing weights a priori costs some time upfront compared to the current method of computing weights on the fly, which makes simulations with few particles slower under the new method. Once the number of particles becomes large enough, however, the cost of computing weights on the fly overtakes the a priori method: on the example domains, the method in this PR is 3-5x faster than the current implementation when 5,000 particles are simulated.
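The trade-off above is a simple amortization: a one-time initialization cost against a reduced per-step cost. The sketch below makes that explicit with purely illustrative numbers (the costs are not measured from dorado):

```python
def total_runtime(t_init, t_step, n_particles, n_iters):
    """Toy cost model: one-time setup plus per-particle, per-iteration
    stepping cost. All inputs are hypothetical, not measured values."""
    return t_init + t_step * n_particles * n_iters

# With a fixed upfront cost, the a priori method loses for small particle
# counts but wins once the per-step savings outweigh the initialization.
few = total_runtime(t_init=1.0, t_step=0.00005, n_particles=100, n_iters=50)
many = total_runtime(t_init=1.0, t_step=0.00005, n_particles=5000, n_iters=50)
```

This is why the table shows the PR losing at 100 particles but winning decisively at 5,000: the initialization term dominates small runs, while the per-step term dominates large ones.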
Outstanding items to complete before PR is 'ready'
- [x] Additional testing to ensure weights are being generated as expected
- [x] More tests for "edge" cases in both senses of the word (are boundaries acting as expected?)
- [x] Added a test to ensure the "overflow" scenario in 'exact' particle generation works (i.e., when the number of particles exceeds the number of seed locations, the assignment loops through the seed locations until all particles are generated)
- [x] Added a test for scenarios where `walk_data` is defined but the `Particles` object has no explicit information about `Np_tracer` (the number of particles); the test ensures this is properly counted when `run_iteration()` is used to iterate the particles in `walk_data`
- [x] Some actual timing tests against the current code to better gauge the speed-up
- [x] Checked the parallel code to ensure weights only get calculated once there (so it stays faster than the serial code) - yes, one of the inputs to the `parallel_routing()` function is a pre-initialized `Particles` class
- [x] Made any necessary changes to the documentation to cover this procedural change (weight computation a priori vs. for each step)
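The "overflow" behavior tested above (cycling through seed locations when particles outnumber seeds) can be sketched in a few lines. The function name and tuple-based cell coordinates are hypothetical; only the modulo-cycling behavior is taken from the checklist item:

```python
def assign_seed_locations(seed_locs, n_particles):
    """Illustrative sketch: assign each particle a seed location,
    looping back through the seed list when particles outnumber seeds."""
    return [seed_locs[k % len(seed_locs)] for k in range(n_particles)]

# 5 particles, 2 seed cells: assignment cycles through the seeds
assign_seed_locations([(3, 4), (7, 2)], 5)
# -> [(3, 4), (7, 2), (3, 4), (7, 2), (3, 4)]
```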