Closed Tortar closed 7 months ago
Actually, I don't really like that the same id will be given to different agents when using `sample!` with a vector.
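A minimal sketch (plain Julia, not Agents.jl internals; the id pool and draw count are made up) of why sampling with replacement inevitably hands out duplicate ids:

```julia
using Random

ids = collect(1:5)           # pretend these are the model's agent ids
rng = Xoshiro(42)
picked = rand(rng, ids, 10)  # draw 10 ids with replacement

# By pigeonhole, 10 draws from 5 ids must contain repeats,
# so two distinct "new" agents would end up sharing an id.
@assert length(unique(picked)) < length(picked)
```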
As you can see, the speed-up is substantial:

```julia
# this PR:
@btime sample!(model, 10^6) setup=(model = fake_model(Xoshiro(42), 10^7)) evals=1 samples=10 seconds=10
# 276.573 ms (1000010 allocations: 114.44 MiB)

# before:
@btime sample!(model, 10^6) setup=(model = fake_model(Xoshiro(42), 10^7)) evals=1 samples=10 seconds=10
# 1.947 s (48424 allocations: 168.59 MiB)
```
(I changed the scope of this PR to be just an optimization of the current function.)
Attention: 4 lines in your changes are missing coverage. Please review.

Comparison is base (3526b5c) 92.27% compared to head (30895f9) 92.18%. Report is 1 commit behind head on main.

| Files | Patch % | Lines |
|---|---|---|
| src/core/model_free_extensions.jl | 0.00% | 4 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
Actually, I benchmarked with a model that has no space; the new method falls behind when there is a space. I need to change the code to account for this.
Should be okay now. This is an example benchmark:
```julia
using Agents, BenchmarkTools, Random

# no space model
model_step!(model) = nothing
function fake_model(rng, nagents)
    model = StandardABM(NoSpaceAgent; model_step!, rng)
    for i in 1:nagents
        add_agent!(model)
    end
    model
end
@btime sample!(model, 10^5) setup=(model = fake_model(Xoshiro(42), 10^6)) evals=1

# space model
function fake_model(rng, nagents)
    model = StandardABM(GridAgent{2}, GridSpace((100, 100)); model_step!, rng)
    for i in 1:nagents
        add_agent!(model)
    end
    model
end
@btime sample!(model, 10^5) setup=(model = fake_model(Xoshiro(42), 10^6)) evals=1
```
where we have:

```
before PR, no space: 87.246 ms (4769 allocations: 16.87 MiB)
after PR,  no space: 17.157 ms (100010 allocations: 11.45 MiB)
before PR, space:    162.883 ms (4789 allocations: 16.95 MiB)
after PR,  space:    22.714 ms (100010 allocations: 12.98 MiB)
```
In general the performance improvement can vary, but it should always be an improvement, except for edge cases such as 10 agents in a space with 1 million positions.
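The mentioned edge case can be reproduced with a sketch like the following, which reuses the benchmark setup above; the `sparse_model` helper name and the 1000×1000 grid (10^6 positions for 10 agents) are my assumptions, not code from this PR:

```julia
using Agents, Random

model_step!(model) = nothing

# Hypothetical edge case: very few agents in a very large space.
function sparse_model(rng)
    model = StandardABM(GridAgent{2}, GridSpace((1000, 1000)); model_step!, rng)
    for _ in 1:10
        add_agent!(model)  # agent at a random position
    end
    model
end

model = sparse_model(Xoshiro(42))
sample!(model, 10^3)            # resample the 10 agents up to 1000
@assert nagents(model) == 10^3
```

Wrapping the `sample!` call in `@btime` (as in the benchmark above) would show how this regime behaves.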
This is possible, and even much faster than when using a `Dict` (wrong)