**Describe the bug 🐞**

I'm seeing a bug in the order of the expressions in `initializeprobmap` with MTKNN. Currently, MTKNN forces the defaults for the NN inputs to 0s due to an initialization warning:
```
┌ Warning: Internal error: Variable (nn₊input₊u(t))[2] was marked as being in 0 ~ (LuxCore.stateless_apply(Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 5, tanh), layer_2 = Dense(5 => 5, tanh), layer_3 = Dense(5 => 2)), nothing), nn₊input₊u(t), convert(nn₊T, nn₊p)))[2] - (nn₊output₊u(t))[2], but was actually zero
└ @ ModelingToolkit.StructuralTransformations ~/.julia/dev/ModelingToolkit/src/structural_transformation/utils.jl:237
```
If I don't provide the defaults, then I hit the `OverrideInit` dispatch for `_initialize_dae!`, and the generated code for the `initializeprobmap` `getu` uses `var"(nn₊input₊u(t))[1]"` and `var"(nn₊input₊u(t))[2]"` before they are declared.
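To make the failure mode concrete, here is a minimal, hypothetical Julia sketch (not the actual generated code) of a `var"..."` binding being read before the statement that assigns it:

```julia
# Hypothetical illustration only: mimics the ordering problem described above,
# where a generated binding is read before the statement that assigns it.
function broken_getu(u)
    out = (var"(nn₊input₊u(t))[1]", var"(nn₊input₊u(t))[2]")  # read here...
    var"(nn₊input₊u(t))[1]" = u[1]                            # ...assigned here
    var"(nn₊input₊u(t))[2]" = u[2]
    return out
end

broken_getu([0.1, 0.2])  # throws UndefVarError at the first read
```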
**Expected behavior**

`solve` working for both the initialization problem and the ODE problem.
**Minimal Reproducible Example 👇**
```julia
using Test
using ModelingToolkitNeuralNets
using ModelingToolkit
using ModelingToolkitStandardLibrary.Blocks
using OrdinaryDiffEqRosenbrock, OrdinaryDiffEqNonlinearSolve
using SymbolicIndexingInterface
using SciMLBase  # for SciMLBase.FullSpecialize below
using StableRNGs

# UDE skeleton: the coupling terms are supplied through neural-network
# input/output connectors instead of closed-form expressions.
function lotka_ude()
    @variables t x(t) = 3.1 y(t) = 1.5
    @parameters α = 1.3 [tunable = false] δ = 1.8 [tunable = false]
    Dt = ModelingToolkit.D_nounits
    @named nn_in = RealInputArray(nin = 2)
    @named nn_out = RealOutputArray(nout = 2)
    eqs = [
        Dt(x) ~ α * x + nn_in.u[1],
        Dt(y) ~ -δ * y + nn_in.u[2],
        nn_out.u[1] ~ x,
        nn_out.u[2] ~ y
    ]
    return ODESystem(
        eqs, ModelingToolkit.t_nounits, name = :lotka, systems = [nn_in, nn_out])
end

# Ground-truth Lotka-Volterra system, kept for reference.
function lotka_true()
    @variables t x(t) = 3.1 y(t) = 1.5
    @parameters α = 1.3 β = 0.9 γ = 0.8 δ = 1.8
    Dt = ModelingToolkit.D_nounits
    eqs = [
        Dt(x) ~ α * x - β * x * y,
        Dt(y) ~ -δ * y + δ * x * y
    ]
    return ODESystem(eqs, ModelingToolkit.t_nounits, name = :lotka_true)
end

model = lotka_ude()
chain = multi_layer_feed_forward(2, 2)
@named nn = NeuralNetworkBlock(2, 2; chain, rng = StableRNG(42))

# Wire the UDE connectors to the neural-network block.
eqs = [connect(model.nn_in, nn.output)
       connect(model.nn_out, nn.input)]

ude_sys = complete(ODESystem(
    eqs, ModelingToolkit.t_nounits, systems = [model, nn],
    name = :ude_sys,
    # defaults = [nn.input.u => [0.0, 0.0]]
))

sys = structural_simplify(ude_sys)
prob = ODEProblem{true, SciMLBase.FullSpecialize}(sys, [], (0, 1.0), [])
iprob = ModelingToolkit.InitializationProblem(sys, 0.0)
solve(iprob)
solve(prob, Rodas5P())
```
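As a point of comparison (my reading of the commented-out line above, not something confirmed beyond the description): uncommenting the `defaults` entry avoids the `OverrideInit` path, but initialization then emits the internal warning quoted at the top. A sketch of that variant:

```julia
# Variant with the explicit zero defaults for the NN inputs (the line that is
# commented out in the MRE). Per the description above, this avoids the
# OverrideInit/_initialize_dae! path but triggers the internal warning instead.
ude_sys_defaults = complete(ODESystem(
    eqs, ModelingToolkit.t_nounits, systems = [model, nn],
    name = :ude_sys_defaults,
    defaults = [nn.input.u => [0.0, 0.0]]))
```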
**Error & Stacktrace ⚠️**
**Environment (please complete the following information):**

- Output of `using Pkg; Pkg.status()`
- Output of `using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)`
- Output of `versioninfo()`
**Additional context**