Closed rejuvyesh closed 1 year ago
Currently, FMIFlux.jl only offers the possibility to set up custom training loops, and, as you say, there are not many implementation examples. We are currently testing different training setups, like growing horizon and multiple shooting. As soon as they are camera-ready, we will push them to the repository.
For now, a very easy (but surely not the best) way to train on multiple, different trajectories is to sequentially (one after another) run multiple simulations and compare the results to a sequence of data trajectories in the loss function. A more advanced setup for batched training is coming soon as part of the examples folder (we use batches in the paper example for IC-MSQUARE).
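To make the sequential idea concrete, here is a minimal, language-agnostic sketch (written in Python rather than Julia for brevity): run one simulation per reference trajectory and accumulate the loss. The `simulate` function is purely hypothetical and stands in for an FMU simulation call; it is not part of FMIFlux.jl.

```python
def simulate(params, t_grid):
    # Hypothetical stand-in for an FMU simulation driven by `params`:
    # here just a parameterized line evaluated on the time grid.
    a, b = params
    return [a * t + b for t in t_grid]

def loss_over_trajectories(params, trajectories):
    """Sum the MSE against every reference trajectory, one after another."""
    total = 0.0
    for t_grid, reference in trajectories:
        prediction = simulate(params, t_grid)
        total += sum((p - r) ** 2 for p, r in zip(prediction, reference)) / len(reference)
    return total

t_grid = [0.0, 0.5, 1.0]
trajectories = [
    (t_grid, [0.0, 1.0, 2.0]),   # data trajectory 1
    (t_grid, [1.0, 2.0, 3.0]),   # data trajectory 2
]
print(loss_over_trajectories((2.0, 0.0), trajectories))  # → 1.0
```

In a real setup, the summed loss would then be differentiated and passed to the optimizer inside the custom training loop.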
Not necessarily the best way to do this, but in case it's of use, I have a "working" version at batched_fmus.jl.
I'm not fully sure whether CachingTime, and using it, is that useful in the CS case; overall, I don't fully understand its utility.
Dear rejuvyesh, thanks for the code. We currently have someone working on this topic; as soon as everything is set up, we can open a feature branch to merge your code. But this might still take a month or so.
See this tutorial (Chapter 3: Training) for the built-in batching system: https://github.com/ThummeTo/FMIFlux.jl/blob/examples/examples/src/mdpi_2022.ipynb (the tutorial is still WIP).
BTW: a multi-threaded version of this is also WIP.
Alternatively, one could of course implement a custom batching system.
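The core of such a custom batching system is small: shuffle the trajectory indices each epoch and split them into mini-batches, each of which is then simulated and used for one gradient step. A minimal sketch (in Python for illustration; all names are my own, not FMIFlux.jl API):

```python
import random

def make_batches(n_items, batch_size, rng):
    """Shuffle item indices and split them into mini-batches."""
    idx = list(range(n_items))
    rng.shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, len(idx), batch_size)]

rng = random.Random(0)
batches = make_batches(10, 4, rng)
print([len(b) for b in batches])  # → [4, 4, 2]
```

Each returned batch would then select the data trajectories whose simulations contribute to that step's loss.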
Best regards!
All examples currently train on a single trajectory. Ideally, with NNs it would be great if we could do minibatch training. My guess is that it requires running as many FMU instances in parallel as the minibatch size. It's unclear to me what is required to enable that.
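The "one FMU instance per batch element" idea can be sketched generically: give every batch element its own independent instance so simulations don't share internal solver state, and run them concurrently. This is a hedged illustration in Python with a hypothetical `simulate_instance` in place of a real FMU call, not FMIFlux.jl code.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_instance(instance_id, x0):
    # Hypothetical stand-in for one independent FMU instance.
    # A real FMU holds mutable internal state, which is why each
    # batch element needs its own instantiation.
    return [x0 + 0.5 * k for k in range(3)]

def batch_simulate(initial_states):
    """Simulate all batch elements concurrently, one instance each."""
    with ThreadPoolExecutor(max_workers=len(initial_states)) as pool:
        return list(pool.map(lambda args: simulate_instance(*args),
                             enumerate(initial_states)))

results = batch_simulate([0.0, 1.0])
print(results)  # → [[0.0, 0.5, 1.0], [1.0, 1.5, 2.0]]
```

Whether threads actually pay off depends on the FMU's thread safety; a safe fallback is running the per-element simulations sequentially and only batching the loss, as in the workaround above.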