andrechalom closed this issue 8 years ago
New improvement: bdm now accepts a "count" argument, so the loop runs inside the C code:
```
> microbenchmark(for(i in 1:50000) bdm(), bdm(50000))
Unit: milliseconds
                     expr      min       lq     mean   median       uq      max neval cld
 for (i in 1:50000) bdm() 678.1162 688.9477 692.2319 690.8555 692.5461 793.9037   100   b
               bdm(50000) 556.0742 557.7395 558.3793 558.2241 558.7047 561.8939   100   a
```
"Caching" the death rate slope (b-d0)/K on the constructor instead of repeating this every time gave me more 5% speedup. Now I don't see how can we improve it more, as the N vector needs to be recalculated at every step (because changing any abundance changes every position in N), and the multiplication step in abundance * interaction is responsible for over 90% of the running time.
Update: another 2% speedup from collapsing two lines of code; d is now calculated directly, without needing N.
Waiting on checks from the PI before closing this and removing Rbdm() from the source.
Removed from the main branch; the latest version with the original bdm and run.bdm is 24a50ad.
Transferred the bdm code to C++. Temporarily renamed the bdm function to Rbdm in order to run benchmarks; the C implementation is 10 times faster. Now we need to run some optimizations on the C code.