-
Hey there!
Thanks for this awesome package! I really love the DiffEq ecosystem (which made me switch to Julia as my language of choice).
I tried to implement a custom layer as a neural ODE. Here…
-
Lagaris, Isaac E., Aristidis Likas, and Dimitrios I. Fotiadis. "Artificial neural networks for solving ordinary and partial differential equations." IEEE Transactions on Neural Networks 9, no. 5 (1998…
-
I have noticed that a considerable number of allocations appears when using the function `diffeq_rd` to evaluate the ODE with the tracked parameters. This example shows it very clearly:
```jul…
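Independent of DiffEq, Base Julia's `@allocated` macro is a quick way to put a number on such allocations. A minimal sketch (the functions `f_inplace!` and `f_outofplace` are hypothetical stand-ins for an ODE right-hand side, not part of the package):

```julia
# Hedged sketch: quantifying allocations with Base's @allocated.
# f_inplace! and f_outofplace are hypothetical stand-ins for an ODE RHS.
f_inplace!(du, u) = (du .= 2.0 .* u; nothing)  # writes into du, no new array
f_outofplace(u) = 2.0 .* u                     # allocates a fresh array each call

u  = rand(100)
du = similar(u)

f_inplace!(du, u); f_outofplace(u)   # warm up, to exclude compilation

a1 = @allocated f_inplace!(du, u)
a2 = @allocated f_outofplace(u)
println((a1, a2))   # a2 includes one fresh 100-element array per call
```

Running the tracked-parameter path through the same macro should make the extra allocations from the reverse-mode tape visible in the same way.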
-
```julia
using Flux, DiffEqFlux, OrdinaryDiffEq, StatsBase, RecursiveArrayTools
using Flux: onehotbatch
using MLDatasets: MNIST
using Base.Iterators: repeated, partition
batch_size = 10
train_x, tra…
-
Thanks for this beautiful package!
From the README and the blog post I understood i) how to use a "normal" ODE as a Flux layer (`diffeq_rd`), and ii) how to use a Flux layer to define an ODE (`neural_ode`…
-
```julia
using Flux, DiffEqFlux, DifferentialEquations, Plots
function func(du, u, p, t)
    # Arrhenius-type rate law; 8.3145 is the gas constant R in J/(mol*K)
    a, b, c, d = p
    du[1] = exp(a - b/8.3145/t) * u[1]^c * (1.0 - u[1])^d
end
u0 = [0.05]
tspan = (300.0, …
-
Apologies for the double submission; I'm not yet familiar with best practice, but pull request #747 is a suggested fix for this.
When calling `GroupNorm` from `mapchildren` (line 386), I believe "gn,G" shou…
-
From @jessebett
```julia
using Flux, DiffEqFlux, OrdinaryDiffEq, Distributions
const tspan = (0.0f0, 1.0f0)
const RANGE = (-3.0, 3.0)
const BS = 200
target(u) = u.^3
function gen_data(batchsiz…
-
Here is a 1D problem where the model tries to learn the function `f(u) = u.^3`. The data is a `(1, 200)`-dimensional array, where 200 is the batch size.
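Such a batch can be built in plain Julia; a sketch (the names `u_batch` and `y_batch` are illustrative, not taken from the original code):

```julia
# Hedged sketch: build a (1, 200) batch for learning f(u) = u.^3
target(u) = u .^ 3

batch_size = 200
# 200 evenly spaced inputs, reshaped into a (1, batch_size) array
u_batch = reshape(collect(range(-3.0, 3.0, length=batch_size)), 1, batch_size)
y_batch = target(u_batch)

size(u_batch)  # (1, 200)
size(y_batch)  # (1, 200)
```

The leading dimension of 1 is the feature dimension Flux layers expect; the trailing dimension is the batch.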
```julia
using Flux, DiffEqFlux, OrdinaryDiffE…
-
When calling the `train` method on the adaptive model, an error is thrown while collecting the losses from the prediction head (`logits_to_loss_per_head` in `adaptive_model`) when the logits and labels in the prediction h…