SciML / SymbolicIndexingInterface.jl

A general interface for symbolic indexing of SciML objects used in conjunction with Domain-Specific Languages
https://docs.sciml.ai/SymbolicIndexingInterface/stable/
MIT License

Performance of `getu` vs `SymbolicIndexingInterface.observed` #39

Closed · SebastianM-C closed this issue 7 months ago

SebastianM-C commented 7 months ago

I looked at a couple of cases comparing `getu` and `SymbolicIndexingInterface.observed`. For unknowns, `getu` seems to have ideal performance (i.e. the same as direct indexing into the state vector). When computing the value of an expression, `SII.observed` looks faster, but for some reason, if I recreate the `SII.observed` function after using `getu`, I get the same performance as `getu`. Does `getu` replace the cached generated function, or am I missing something?

I also tried this on a larger model; there, the time for `SII.observed` was still around 100 ns for a simple expression, but `getu` was slower. Is the time for `getu` expected to increase with the size of the model?

using ModelingToolkit, SymbolicIndexingInterface, OrdinaryDiffEq
using BenchmarkTools

iv = only(@variables(t))
sts = @variables s1(t) = 2.0 s1s2(t) = 2.0 s2(t) = 2.0
ps = @parameters k1 = 1.0 c1 = 2.0
eqs = [(Differential(t))(s1) ~ -0.25 * c1 * k1 * s1 * s2
    (Differential(t))(s1s2) ~ 0.25 * c1 * k1 * s1 * s2
    (Differential(t))(s2) ~ -0.25 * c1 * k1 * s1 * s2]

@named sys = ODESystem(eqs)

prob = ODEProblem(sys, [], (0.0, 1.0))
sol = solve(prob, Tsit5())

### unknowns

obs = SymbolicIndexingInterface.observed(sol, s1)
@btime $obs($sol.u[2], parameter_values($sol), $sol.t[2])
# 101.366 ns (5 allocations: 752 bytes)

@btime $obs.($sol.u, (parameter_values($sol),), $sol.t)
# 1.420 μs (33 allocations: 5.53 KiB)

f = getu(sol, s1)
@btime $f($sol, 2)
# 3.100 ns (0 allocations: 0 bytes)

@btime $f($sol)
# 27.182 ns (1 allocation: 96 bytes)

@btime $sol.u[2][1]
# 3.100 ns (0 allocations: 0 bytes)

@btime view($sol.u, 2, :)
# 23.069 ns (2 allocations: 96 bytes)

### observed

obs2 = SymbolicIndexingInterface.observed(sol, s1 + s2)
@btime $obs2($sol.u[2], parameter_values($sol), $sol.t[2])
# 104.449 ns (5 allocations: 752 bytes)

f2 = getu(sol, s1 + s2)
@btime $f2($sol, 2)
# 357.009 ns (6 allocations: 832 bytes)

# but after evaluating `getu`, SII.observed is slowed down if we recreate obs2

obs2 = SymbolicIndexingInterface.observed(sol, s1 + s2)
@btime $obs2($sol.u[2], parameter_values($sol), $sol.t[2])
# 355.856 ns (6 allocations: 832 bytes)

sol_long = solve(prob, Tsit5(), saveat=0:0.001:1)

obs3 = SymbolicIndexingInterface.observed(sol_long, s1 + 2s2)
@btime $obs3($sol_long.u[2], parameter_values($sol_long), $sol_long.t[2])
# 105.826 ns (5 allocations: 752 bytes)

f3 = getu(sol_long, s1 + 2s2)
@btime $f3($sol_long, 2)
# 342.534 ns (6 allocations: 832 bytes)

### larger model

using ModelingToolkit, OrdinaryDiffEq
using ModelingToolkitStandardLibrary.Electrical
using ModelingToolkitStandardLibrary.Blocks: Sine

function create_model()
    # V = 10.0
    @variables t
    @named resistor1 = Resistor(R=5.0)
    @named resistor2 = Resistor(R=2.0)
    @named capacitor1 = Capacitor(C=2.4)
    @named capacitor2 = Capacitor(C=60.0)
    @named source = Voltage()
    @named input_signal = Sine(frequency=1.0)
    @named ground = Ground()
    @named ampermeter = CurrentSensor()

    eqs = [connect(input_signal.output, source.V)
        connect(source.p, capacitor1.n, capacitor2.n)
        connect(source.n, resistor1.p, resistor2.p, ground.g)
        connect(resistor1.n, capacitor1.p, ampermeter.n)
        connect(resistor2.n, capacitor2.p, ampermeter.p)]

    @named circuit_model = ODESystem(eqs, t,
        systems=[
            resistor1, resistor2, capacitor1, capacitor2,
            source, input_signal, ground, ampermeter,
        ])
end

model = create_model()
sys = complete(structural_simplify(model))

prob = ODEProblem(sys, [], (0.0, 3.0), [])
sol = solve(prob, Rodas4())

### unknowns

obs = SymbolicIndexingInterface.observed(sol, sys.capacitor1.i)
@btime $obs($sol.u[10], parameter_values($sol), $sol.t[10])
# 102.516 ns (5 allocations: 752 bytes)

@btime $obs.($sol.u, (parameter_values($sol),), $sol.t)
# 6.220 μs (253 allocations: 38.19 KiB)

f = getu(sol, sys.capacitor1.i)
@btime $f($sol, 2)
# 3.100 ns (0 allocations: 0 bytes)

@btime $f($sol)
# 68.029 ns (1 allocation: 448 bytes)

@btime $sol.u[2][1]
# 3.100 ns (0 allocations: 0 bytes)

@btime view($sol.u, 2, :)
# 23.494 ns (2 allocations: 96 bytes)

### observed

obs2 = SymbolicIndexingInterface.observed(sol, abs2(sys.capacitor1.i))
@btime $obs2($sol.u[2], parameter_values($sol), $sol.t[2])
# 104.817 ns (5 allocations: 752 bytes)

f2 = getu(sol, abs2(sys.capacitor1.i))
@btime $f2($sol, 2)
# 714.062 ns (6 allocations: 832 bytes)

# but after evaluating `getu`, SII.observed is slowed down if we recreate obs2

obs2 = SymbolicIndexingInterface.observed(sol, abs2(sys.capacitor1.i))
@btime $obs2($sol.u[2], parameter_values($sol), $sol.t[2])
# 715.328 ns (6 allocations: 832 bytes)

I was wondering if the slowdown in getu was related to the length of the solution, but I tested that and it isn't.

Versions:

  [961ee093] ModelingToolkit v8.75.0
  [2efcf032] SymbolicIndexingInterface v0.3.4
  [d1185830] SymbolicUtils v1.5.0
  [0c5d862f] Symbolics v5.16.1

and Julia 1.10

AayushSabharwal commented 7 months ago

I'm having trouble tracking down this bug, but getu and observed are identical: the former calls the latter behind the scenes if the symbol isn't a direct index into the state vector (a rough sketch of that dispatch follows the benchmark below). The case you've found, where observed is slowed down after calling getu, also works the other way around: if getu is called first and then observed, recreating f2 afterwards slows it down too:

# Using the same initial model
julia> f2 = getu(sol, s1 + s2)
julia> @btime $f2($sol, 2)
  88.906 ns (5 allocations: 752 bytes)
julia> obs2 = SymbolicIndexingInterface.observed(sol, s1 + s2)
julia> @btime $obs2($sol.u[2], parameter_values($sol), $sol.t[2])
  228.753 ns (6 allocations: 832 bytes)
julia> f2 = getu(sol, s1 + s2) # recreate f2
julia> @btime $f2($sol, 2)
  229.476 ns (6 allocations: 832 bytes)
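
To make that dispatch concrete, here is a rough sketch; it is not the actual SymbolicIndexingInterface implementation, just an illustration built from the real SII functions is_variable, variable_index and observed (getter_sketch is a made-up name):

# Rough sketch of how a getu-style getter can dispatch
# (illustration only, not the real SII code path).
using SymbolicIndexingInterface

function getter_sketch(sol, sym)
    if is_variable(sol, sym)
        # An unknown maps to a fixed position in the state vector,
        # so the getter is plain array indexing.
        idx = variable_index(sol, sym)
        return (valp, i) -> valp.u[i][idx]
    else
        # Anything else goes through the compiled observed function.
        obsfn = SymbolicIndexingInterface.observed(sol, sym)
        return (valp, i) -> obsfn(valp.u[i], parameter_values(valp), valp.t[i])
    end
end

# getter_sketch(sol, s1)      # behaves like the fast direct-indexing case
# getter_sketch(sol, s1 + s2) # behaves like the observed-function case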

I think this is because of how the observed function in ModelingToolkit.jl/src/systems/diffeqs/abstractodesystem.jl works. The first time it is called, it has to compile a function for the expression, which it then caches in a Dict{Any, Any}. Subsequent calls to observed with the same expression just look up that function in the cache, and the lookup is what seems to slow things down: on the first call Julia knows the concrete type of the generated function, but anything retrieved from the cache is inferred as Any.
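
As a standalone illustration of that last point (this is not the MTK code path itself, just the general pitfall): pulling a function out of a Dict{Any, Any} makes the call type-unstable, so it goes through dynamic dispatch and allocates, whereas calling it through a concretely typed binding does not:

# Why a Dict{Any, Any} cache hurts: the lookup is inferred as Any, so the
# call becomes a dynamic dispatch (illustration only, not MTK's actual code).
using BenchmarkTools

f = u -> u[1] + u[2]          # stand-in for a generated observed function
cache = Dict{Any, Any}()      # same container type as the MTK cache
cache[:expr] = f

u = [1.0, 2.0, 3.0]

direct(f, u) = f(u)                       # f has a concrete type here
cached(cache, key, u) = cache[key](u)     # cache[key] is inferred as Any

@btime direct($f, $u)             # fast, 0 allocations
@btime cached($cache, :expr, $u)  # slower, allocates due to dynamic dispatch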

The view in your example is wrong, and will give an incorrect result. It should be view(sol, 2, :)
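
For reference, a quick check of what the original expression actually selects (using the first model in this issue; sol[2, :] is ordinary solution indexing, shown only for contrast):

# sol.u is a Vector of state vectors, so indexing it with (2, :) picks out
# the whole state at the second save point, not a variable across time.
v1 = view(sol.u, 2, :)   # 1-element view whose only entry is sol.u[2]
v2 = sol[2, :]           # the second unknown at every save point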

SebastianM-C commented 7 months ago

> The case you've found where observed is slowed down after calling getu also works the other way around

Oh, that explains why I thought it was slower.

> The view in your example is wrong, and will give an incorrect result. It should be view(sol, 2, :)

Thanks for pointing that out.

ChrisRackauckas commented 7 months ago

https://github.com/SciML/SymbolicIndexingInterface.jl/pull/41