SciML / DiffEqFlux.jl

Pre-built implicit layer architectures with O(1) backprop, GPUs, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods
https://docs.sciml.ai/DiffEqFlux/stable
MIT License

Add an example with minibatching for sciml_train #133

Closed · ChrisRackauckas closed this 4 years ago

ChrisRackauckas commented 4 years ago

Currently none of the README examples showcase how to minibatch. It would be a good thing to teach users.

ali-ramadhan commented 4 years ago

I need to learn how to do this anyway, so maybe I can take a stab at it.

I guess I could take the existing Lotka-Volterra neural ODE example and mini-batch inside the loss function, if that's the right approach?

I also have a few notebooks where I'm training a simple diffusion neural PDE that might make a decent example, though I'm still playing around with ideas there: https://github.com/ali-ramadhan/neural-differential-equation-climate-parameterizations/blob/master/diffusion_equation/Diffusion%20neural%20PDE%20and%20DAE.ipynb
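
A minimal sketch of that mini-batch-inside-the-loss idea, assuming the Lotka-Volterra neural ODE setup from the README (the network, the data array, and names like `dudt`, `ode_data`, and `predict` are hypothetical stand-ins for that setup, not the repo's actual code): each loss call solves the neural ODE only over a randomly chosen window of the saved time points, starting from the data value at the window's left edge.

```julia
using DiffEqFlux, OrdinaryDiffEq, Flux

# Stand-ins for the README's Lotka-Volterra neural ODE setup.
tsteps   = range(0.0f0, 1.5f0, length = 30)
ode_data = rand(Float32, 2, 30)   # placeholder for the true trajectory

dudt = FastChain(FastDense(2, 16, tanh), FastDense(16, 2))
θ = initial_params(dudt)

# Solve the neural ODE only over a given window of time points.
function predict(θ, u0_batch, t_batch)
    prob = ODEProblem((u, p, t) -> dudt(u, p), u0_batch,
                      (t_batch[1], t_batch[end]), θ)
    Array(solve(prob, Tsit5(), saveat = t_batch))
end

batchsize = 8
function loss_minibatch(θ)
    # Draw a random contiguous window of the trajectory each call.
    s = rand(1:(length(tsteps) - batchsize + 1))
    idx = s:(s + batchsize - 1)
    pred = predict(θ, ode_data[:, idx[1]], tsteps[idx])
    sum(abs2, ode_data[:, idx] .- pred)
end

res = DiffEqFlux.sciml_train(loss_minibatch, θ, ADAM(0.05), maxiters = 300)
```

Starting each solve from the data at the window's first time point keeps every solve short and cheap, at the cost of teacher-forcing the initial condition per batch.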

cems2 commented 4 years ago

See #167: it does a single batch but doesn't rotate the batches. One could build on that template, using the new optional data argument to sciml_train to feed minibatches.
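
A hedged sketch of what that data-argument pattern might look like, reusing `predict`, `ode_data`, `tsteps`, and `θ` from the sketch above: sciml_train splats each tuple yielded by the data iterator into the loss, so a `DataLoader` over `(data, times)` plus `ncycle` for epochs gives rotating minibatches (the exact `DataLoader` call is an assumption about the Flux version in use).

```julia
using Flux
using IterTools: ncycle

# Contiguous (data, times) batches; DataLoader slices the last dimension.
train_loader = Flux.Data.DataLoader((ode_data, collect(tsteps)),
                                    batchsize = 8)

# sciml_train calls this as loss(θ, batch...) for each yielded tuple.
function loss_batch(θ, batch, time_batch)
    pred = predict(θ, batch[:, 1], time_batch)
    sum(abs2, batch .- pred)
end

# ncycle repeats the loader for 100 epochs; one optimizer step per batch.
res = DiffEqFlux.sciml_train(loss_batch, θ, ADAM(0.05),
                             ncycle(train_loader, 100))
```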

ChrisRackauckas commented 4 years ago

We do have an example in the docs now so I'll close this, but we can definitely keep improving it.