SciML / JumpProcesses.jl

Build and simulate jump equations, such as Gillespie simulations and jump diffusions, with constant and state-dependent rates; mix them with differential equations and scientific machine learning (SciML).
https://docs.sciml.ai/JumpProcesses/stable/

[Non-actionable] Performance gain of reusing randomness #316

Open · Vilin97 opened this issue 1 year ago

Vilin97 commented 1 year ago

I wanted to understand the effect of reusing randomness in methods like NRM (https://github.com/SciML/JumpProcesses.jl/blob/master/src/aggregators/nrm.jl#L103). On the gene expression model, reusing randomness gives NRM a 10% performance improvement on Julia v1.7+ (see screenshot). Reuse is statistically exact: by memorylessness, the residual firing time is exponential with the old rate, and multiplying it by `oldrate / cur_rate` turns it into an exponential sample with the new rate, so no fresh random number is needed. The script below compares the two ways of sampling the firing time. The result is that drawing a new sample with Xoroshiro128Star is about twice as slow as reusing randomness (median 6.7 ns vs 3.6 ns), and drawing with the default RNG (which JumpProcesses uses on Julia 1.7+) is roughly three times as slow (9.5 ns vs 3.6 ns).

[screenshot: benchmark results for the gene expression model, NRM with vs. without reused randomness]
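As a quick numerical check of the claim above (my addition, not part of the original post): the residual `τ - t` of an `Exp(oldrate)` firing time, conditioned on `τ > t`, is again `Exp(oldrate)` by memorylessness, and scaling it by `oldrate / cur_rate` makes it `Exp(cur_rate)`. The rates, time, and seed below are arbitrary.

```julia
using Random, Statistics

rng = Xoshiro(1)                      # any RNG works; seed is arbitrary
oldrate, cur_rate, t = 2.0, 0.5, 0.3
taus = [randexp(rng) / oldrate for _ in 1:10^6]          # Exp(oldrate) firing times
residuals = [τ - t for τ in taus if τ > t]               # memoryless residuals
reused = oldrate / cur_rate .* residuals                 # rescaled, no new draws
fresh  = [randexp(rng) / cur_rate for _ in 1:length(reused)]
mean(reused), mean(fresh)             # both ≈ 1 / cur_rate = 2.0
```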

```julia
using Random, BenchmarkTools, DataStructures, RandomNumbers

"Reuse the stored firing time: rescale the residual by oldrate / cur_rate."
function reuse_randomness(pq, rx, t, oldrate, cur_rate)
    oldrate / cur_rate * (pq[rx] - t)
end

"Generate a fresh exponential waiting time."
function new_randomness(rng, cur_rate)
    randexp(rng) / cur_rate
end

n = 10^5
firing_times = rand(n)
pq = MutableBinaryMinHeap(firing_times)

t = 0.0
oldrate = 1.0
cur_rate = 1.0
rng1 = Xorshifts.Xoroshiro128Star(rand(UInt64))
rng2 = Random.default_rng()

rx = 100
b1 = @benchmark reuse_randomness($pq, $rx, $t, $oldrate, $cur_rate) # median 3.6 ns
b2 = @benchmark new_randomness($rng1, $cur_rate)                    # median 6.7 ns
b3 = @benchmark new_randomness($rng2, $cur_rate)                    # median 9.5 ns
```
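For context, here is a minimal sketch of where each strategy would sit in an NRM-style update after a dependent reaction's propensity changes from `oldrate` to `cur_rate`. This is illustrative only (names like `update_dependent!` are mine, not JumpProcesses internals); it uses the `update!`/`getindex` API of DataStructures' mutable heaps.

```julia
using Random, DataStructures

# Illustrative sketch, not the actual nrm.jl code: update the stored
# firing time of dependent reaction `rx` after its rate changes.
function update_dependent!(pq, rx, t, oldrate, cur_rate, rng; reuse = true)
    if cur_rate <= 0
        update!(pq, rx, Inf)          # reaction currently impossible
    elseif reuse && oldrate > 0 && isfinite(pq[rx])
        # rescale the residual waiting time; no new random number needed
        update!(pq, rx, t + oldrate / cur_rate * (pq[rx] - t))
    else
        # fall back to drawing a fresh exponential waiting time
        update!(pq, rx, t + randexp(rng) / cur_rate)
    end
end
```

The reuse branch is both faster (per the benchmark above) and statistically exact, since the rescaled residual is again exponential with the new rate.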