ThummeTo / FMIFlux.jl

FMIFlux.jl is a free-to-use software library for the Julia programming language. It offers the ability to place FMUs (fmi-standard.org) anywhere inside your ML topologies while keeping the resulting model trainable with a standard (or custom) FluxML training process.
MIT License

Batched Training #12

Closed rejuvyesh closed 1 year ago

rejuvyesh commented 3 years ago

All examples currently train on a single trajectory. Ideally with NNs it would be great if we could do minibatch training. My guess is that it requires running as many FMU as minibatch size in parallel. It's unclear to me what is required to enable that.

ThummeTo commented 3 years ago

Currently, FMIFlux.jl only offers the possibility to set up custom training loops and, as you say, does not provide many implementation examples. We are currently testing different training setups like growing horizon or multiple shooting. As soon as they are camera-ready, we will push them to the repository.

For now, a very easy (but surely not the best) way to train on multiple, different trajectories is to sequentially (one after another) run multiple simulations and compare the results to a sequence of data trajectories in the loss function. A more advanced setup for batched training is coming soon as part of the examples folder (we use batches in the paper example for the IC-MSQUARE).
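The sequential approach described above might be sketched roughly as follows. This is a hedged sketch, not the actual FMIFlux.jl API: `neuralFMU`, `initial_states`, and `data_trajectories` are hypothetical placeholders for a callable NeuralFMU and your recorded data.

```julia
# Hedged sketch: run simulations one after another and accumulate the loss.
# `neuralFMU`, `initial_states`, and `data_trajectories` are hypothetical
# placeholders, not actual FMIFlux.jl identifiers.
function multi_trajectory_loss()
    loss = 0.0
    for (x0, data) in zip(initial_states, data_trajectories)
        sol = neuralFMU(x0)                    # simulate one trajectory from x0
        loss += sum(abs2, Array(sol) .- data)  # squared error vs. recorded data
    end
    return loss / length(initial_states)       # average over trajectories
end
```

Since the simulations run one after another, wall-clock time grows linearly with the number of trajectories; running the FMU instances in parallel (as suggested above) would be the next step.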

rejuvyesh commented 3 years ago

Not necessarily the best way to do this, but in case it's of use, I have a "working" version at batched_fmus.jl.

rejuvyesh commented 3 years ago

Overall, I'm not fully sure whether CachingTime (and using it) is that useful in the CS case. But then, I don't fully understand its utility in general.

ThummeTo commented 3 years ago

Dear rejuvyesh, thanks for the code. We currently have someone working on this topic; as soon as everything is set up, we can open a feature branch to merge your code. But this might still take a month or so.

rejuvyesh commented 2 years ago

Updated the gist to the most recent version of FMIFlux.

ThummeTo commented 1 year ago

See this tutorial (Chapter 3: Training) for the built-in batching system: https://github.com/ThummeTo/FMIFlux.jl/blob/examples/examples/src/mdpi_2022.ipynb (the tutorial is still WIP)

BTW: A multi-threading version of this is also WIP.

As an alternative, one could of course implement one's own batching system.
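A custom batching system could be sketched along these lines, using the standard implicit-parameter Flux/Zygote training pattern. Again, this is only a hedged sketch under assumptions: `neuralFMU`, `initial_states`, `targets`, and `trajectory_loss` are hypothetical placeholders, not FMIFlux.jl identifiers.

```julia
using Flux, Random

# Hedged sketch of a hand-rolled minibatch loop over trajectories.
# `neuralFMU`, `initial_states`, `targets`, `trajectory_loss` are placeholders.
ps = Flux.params(neuralFMU)        # trainable parameters of the NeuralFMU
opt = Flux.Adam(1e-3)
batchsize = 4

for epoch in 1:100
    # shuffle trajectory indices and split them into minibatches
    for idxs in Iterators.partition(shuffle(1:length(initial_states)), batchsize)
        gs = Flux.gradient(ps) do
            # mean loss over the simulations in this minibatch
            sum(trajectory_loss(neuralFMU(initial_states[i]), targets[i])
                for i in idxs) / length(idxs)
        end
        Flux.Optimise.update!(opt, ps, gs)
    end
end
```

The built-in batching system linked above would be the preferred route; a loop like this mainly helps when you need full control over how trajectories are grouped and weighted.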

Best regards!