ChrisRackauckas closed this issue 4 years ago.
I need to learn how to do this anyway, so maybe I can take a stab at it.
I guess I could just take the existing Lotka-Volterra neural ODE example and mini-batch inside the loss function, if that's the right approach to take?
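To make it concrete, here's a rough sketch of what I mean by minibatching inside the loss: each call to the loss samples a random subset of the saved time points and only compares the prediction there. This is not the repo's example; the toy data, layer sizes, and the `FastChain`/`NeuralODE`/`sciml_train` calls are my assumptions about the current DiffEqFlux API and would need checking:

```julia
using DiffEqFlux, OrdinaryDiffEq, Flux, Random, Zygote

u0 = Float32[2.0; 0.0]
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2], length = 30)

# Stand-in "data"; in the README example this would come from the Lotka-Volterra
# (or spiral ODE) solution instead.
true_A = Float32[-0.1 2.0; -2.0 -0.1]
ode_data = Array(solve(ODEProblem((u, p, t) -> true_A * u, u0, tspan),
                       Tsit5(), saveat = tsteps))

dudt = FastChain(FastDense(2, 16, tanh), FastDense(16, 2))
node = NeuralODE(dudt, tspan, Tsit5(), saveat = tsteps)

# Minibatch *inside* the loss: every call picks a fresh random subset of the saved
# time points and only compares the prediction at those points.
function loss(p; batchsize = 8)
    idx = Zygote.ignore() do    # index sampling shouldn't be differentiated
        sort(randperm(length(tsteps))[1:batchsize])
    end
    pred = Array(node(u0, p))
    sum(abs2, ode_data[:, idx] .- pred[:, idx])
end

res = DiffEqFlux.sciml_train(loss, node.p, ADAM(0.05), maxiters = 300)
```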
I also have a few notebooks where I'm training a simple diffusion neural PDE that might make for a decent example, but I'm still playing around with new ideas there: https://github.com/ali-ramadhan/neural-differential-equation-climate-parameterizations/blob/master/diffusion_equation/Diffusion%20neural%20PDE%20and%20DAE.ipynb
See #167: it does a single batch but doesn't rotate the batches. One could build around that template using the new optional data argument to feed minibatches.
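For the data-argument route, here's a hedged sketch building on the snippet above. It assumes `sciml_train` accepts an iterator as a fourth positional argument and splats each element into the loss (mirroring `Flux.train!`), and it uses `Flux.Data.DataLoader` plus `IterTools.ncycle` to rotate shuffled minibatches over several epochs; the exact names and signatures may need checking against the current release:

```julia
using Flux, IterTools   # reuses `node`, `u0`, `ode_data`, `tsteps` from the sketch above

# The loss now receives one minibatch (targets plus their time indices) as extra arguments.
function batched_loss(p, batch_y, batch_idx)
    pred = Array(node(u0, p))
    sum(abs2, batch_y .- pred[:, batch_idx])
end

# DataLoader hands out shuffled minibatches; ncycle repeats the loader for several epochs,
# so training rotates through the data instead of always seeing the same subset.
train_loader = Flux.Data.DataLoader((ode_data, collect(1:length(tsteps))),
                                    batchsize = 8, shuffle = true)
res = DiffEqFlux.sciml_train(batched_loss, node.p, ADAM(0.05),
                             IterTools.ncycle(train_loader, 100))
```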
We do have an example in the docs now, so I'll close this, but we can definitely keep improving it.
Currently none of the README examples showcase how to minibatch. It would be a good thing to teach users.