SciML / SciMLDocs

Global documentation for the Julia SciML Scientific Machine Learning Organization
https://docs.sciml.ai
MIT License

Deep Bayesian Model Discovery without using NeuralODE object #169

Open gurtajbir opened 1 year ago

gurtajbir commented 1 year ago

Hi everyone. I am trying to implement the Deep Bayesian Model Discovery approach on the Lotka-Volterra model discussed in the Automated Discovery of Missing Physics example. The problem I am facing is that I am not able to figure out a way to pass the parameters of the neural network embedded in the ODE of the Lotka-Volterra model to the Hamiltonian, as done here. The main issue is that the Hamiltonian is fed a vector of parameters, and they are updated naturally as the optimization is carried out. I am having trouble achieving the same with the missing physics example. Any pointers as to how this can be achieved would be very helpful. Thanks.

ChrisRackauckas commented 1 year ago

What have you tried so far? You'd do the exact same thing except change the optimization to the Bayesian fitting routine.

gurtajbir commented 1 year ago

Hi Chris. I was trying to extract the parameters of the Flux model as a vector using code like the following for a model `U`:

```julia
p_model = []
for i in 1:length(U.layers)
    weights = Float64.(Flux.params(U.layers[i])[1])
    for row in weights
        p_model = [p_model; row]
    end
    biases = Float64.(Flux.params(U.layers[i])[2])
    p_model = [p_model; biases]
end
```

This was now in the correct format to be fed into `metric = DiagEuclideanMetric(length(p))` and `integrator = Leapfrog(find_good_stepsize(h, p))`. But the trouble was with converting the updated parameter vector back into something that could be used by the neural network in the UDE dynamics.

gurtajbir commented 1 year ago

I had posted the same on Julia Discourse. I was advised to use `θ, re = Flux.destructure(chain)`, which now enables me to make predictions in the UDE dynamics by passing the new set of parameters to `re`. This implementation on my part is inefficient, but it gets the job done: it takes the new parameters and makes a prediction based on them.

```julia
function ude_dynamics!(du, u, p, t, p_true)
    U = re(p)  # Reconstruct with passed parameters p
    û = U(u)   # Network prediction
    du[1] = p_true[1] * u[1] + û[1]
    du[2] = -p_true[4] * u[2] + û[2]
end
```
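For reference, a minimal, self-contained sketch of the `Flux.destructure` round trip (the chain sizes below are hypothetical stand-ins, not the tutorial's actual network):

```julia
using Flux

# Hypothetical small chain standing in for the UDE's embedded network
chain = Chain(Dense(2 => 5, tanh), Dense(5 => 2))

# Flatten all parameters into one vector θ and get a reconstructor `re`
θ, re = Flux.destructure(chain)

# Rebuild the network from a (possibly updated) parameter vector and predict
U = re(θ)
û = U(Float32[1.0, 1.0])
```

Because `re` closes over the network architecture, only the flat vector `θ` needs to flow through the sampler or optimizer.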

gurtajbir commented 1 year ago

> What have you tried so far? You'd do the exact same thing except change the optimization to the Bayesian fitting routine.

What change are you referring to here? Do you mean a change to the loss

```julia
l(θ) = -sum(abs2, ode_data .- predict_neuralode(θ)) - sum(θ .* θ)
```

or somewhere here?

```julia
integrator = Leapfrog(find_good_stepsize(h, p))
prop = AdvancedHMC.NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.45, integrator))
samples, stats = sample(h, prop, p, 500, adaptor, 500; progress = true)
```

I do not have a strong background in Bayesian inference, so most of this stuff is new to me. I did go on to try exactly the same as in the tutorial and got the following result:

[Screenshot: resulting plot, 2023-08-02]
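To make the pieces in question concrete, here is a minimal sketch of the same AdvancedHMC pipeline run end-to-end on a toy 2D Gaussian log density. This follows the older AdvancedHMC API used in the thread's snippets (`NUTS{MultinomialTS, GeneralisedNoUTurn}`), and the dimensions and step-size target are illustrative only:

```julia
using AdvancedHMC, ForwardDiff

# Toy target: standard 2D Gaussian (up to an additive constant)
ℓπ(θ) = -sum(abs2, θ) / 2
∂ℓπ∂θ(θ) = (ℓπ(θ), ForwardDiff.gradient(ℓπ, θ))  # (value, gradient)

θ0 = randn(2)                         # initial parameter vector
metric = DiagEuclideanMetric(2)       # one entry per parameter
h = Hamiltonian(metric, ℓπ, ∂ℓπ∂θ)
integrator = Leapfrog(find_good_stepsize(h, θ0))
prop = AdvancedHMC.NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.45, integrator))

samples, stats = sample(h, prop, θ0, 500, adaptor, 500; progress = false)
```

For the UDE case, `ℓπ` would instead be the log posterior over the flattened network parameters (e.g. the `θ` from `Flux.destructure`), with the gradient supplied by an AD backend.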

ChrisRackauckas commented 1 year ago

I highly recommend just changing to a Lux network instead of a Flux one. Is the tutorial using Flux? If so, we should just update it.

Vaibhavdixit02 commented 1 year ago

@gurtajbir it looks like you are on the right track, how many samples is that plot from? Are you dropping the warmup samples?

gurtajbir commented 1 year ago

> @gurtajbir it looks like you are on the right track, how many samples is that plot from? Are you dropping the warmup samples?

Hi @Vaibhavdixit02. This plot was produced with `samples, stats = sample(h, prop, p, 500, adaptor, 2000; progress = true)`. These particular values were used according to the code provided here.

gurtajbir commented 1 year ago

> I highly recommend just changing to a Lux network instead of a Flux one. Is the tutorial using Flux? If so we should just update it

Hi @ChrisRackauckas. Just changing to Lux seemed to considerably improve how the plot looked. Also, with the same network size and activation function, the code with Lux ran almost three times as fast as the code with Flux.

[Screenshot: improved plot with Lux, 2023-08-10]
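For anyone following along, a minimal sketch of the Lux equivalent (the network sizes here are hypothetical). Lux layers are stateless, so the parameters live in an explicit structure that can be flattened with ComponentArrays and handed straight to a sampler or optimizer, with no `destructure`/`re` reconstruction inside the ODE right-hand side:

```julia
using Lux, Random, ComponentArrays

rng = Random.default_rng()

# Hypothetical network standing in for the UDE's embedded network
U = Chain(Dense(2 => 5, tanh), Dense(5 => 2))
ps, st = Lux.setup(rng, U)

# Flat-vector view of the parameters, convenient for HMC/optimizers
p = ComponentArray(ps)

# Lux passes parameters explicitly at every call
û, _ = U(Float32[1.0, 1.0], p, st)
```

Avoiding the per-step network reconstruction that `re(p)` performs is one plausible reason for the speedup observed above.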

ChrisRackauckas commented 1 year ago

Yeah that's expected. Could you make a PR to update that code?

gurtajbir commented 1 year ago

Would you like me to update the existing Deep Bayesian Model Discovery example in the PR, or create a code file that resembles what I am trying out on Lotka-Volterra?

ChrisRackauckas commented 1 year ago

Updating the deep bayesian discovery example in a PR that changes it to Lux (and ultimately improves its stability) would be perfect.

gurtajbir commented 1 year ago

Sounds good. I'll get started on it.

Doublonmousse commented 7 months ago

I've made a gist redoing https://docs.sciml.ai/Overview/stable/showcase/bayesian_neural_ode/#Step-5:-Plot-diagnostics with Lux and no NeuralODE object here: https://gist.github.com/Doublonmousse/64c5226edc8419dc6bf8e0594f2cb89f.

Is this good enough? I'll do a PR to update the example in this repo if it's okay.

ChrisRackauckas commented 7 months ago

Yeah that looks good, though I don't think you need the Metal import?

Doublonmousse commented 7 months ago

Yeah, I'm not using it (it's also imported twice...).

Maybe vector_to_parameters is superfluous or can be improved as well?

I'll open a PR for it then soon so that we can discuss further things there.

Doublonmousse commented 7 months ago

Wait, the GitHub version is already up to date with Lux: https://github.com/SciML/SciMLDocs/blob/df454f474b630b5b37523dd058292b0416c9d735/docs/src/showcase/bayesian_neural_ode.md?plain=1#L21-L22

The change dates back to https://github.com/SciML/SciMLDocs/pull/217,

but the live web version isn't up to date yet.

ChrisRackauckas commented 7 months ago

There is currently an issue with the doc building due to a bug in CUDA: https://github.com/SciML/SciMLDocs/pull/224