julienl-met closed this issue 3 years ago
You can serialize / deserialize the FMU state if the FMU supports it (`canGetAndSetFMUstate="true"`). Take a look at https://github.com/CATIA-Systems/FMPy/blob/master/tests/test_serialize_fmu_state.py for an example.
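A rough sketch of that round trip, assuming the method names of FMPy's `FMU2Slave` wrapper (and `canSerializeFMUstate="true"` for the byte-string step; check the linked test for the exact API):

```python
# Hedged sketch: snapshot the current FMU state, turn it into bytes,
# and restore it. The fmu object is assumed to expose FMPy-style
# FMU2Slave methods; this is not verbatim from the linked test.
def save_and_restore_state(fmu):
    state = fmu.getFMUstate()                 # opaque in-memory snapshot
    data = fmu.serializeFMUstate(state)       # snapshot -> bytes (persistable)
    restored = fmu.deSerializeFMUstate(data)  # bytes -> opaque snapshot
    fmu.setFMUstate(restored)                 # roll the FMU back to it
    fmu.freeFMUstate(state)                   # release both snapshots
    fmu.freeFMUstate(restored)
    return data
```

The returned bytes can be pickled or stored like any other payload, which is what makes the state (as opposed to the whole instance) easy to move around.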
Hello @t-sommer,
I know the issue has been closed for a while now, but to me part of it is still unanswered: is it possible to serialize the actual FMU model (not the state) with pickle/cloudpickle/dill/joblib? They seem to throw an error because they can't serialize ctypes objects properly (see https://github.com/uqfoundation/dill/issues/342).
Do you know of any workarounds?
Thanks!
The FMU instance itself cannot be serialized because it depends on the extracted FMU and the loaded shared library. However, you could create a wrapper that implements `__setstate__` and `__getstate__` to serialize and deserialize the dependencies.
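That wrapper could look roughly like this. It is a minimal sketch, assuming FMPy's `simulate_fmu` entry point; `PicklableFMU` and its methods are hypothetical names, not part of FMPy:

```python
import os
import tempfile

class PicklableFMU:
    """Hypothetical wrapper: pickles the raw .fmu archive bytes instead
    of the live, ctypes-backed FMU instance."""

    def __init__(self, fmu_path):
        with open(fmu_path, 'rb') as f:
            self._fmu_bytes = f.read()   # the archive itself is just bytes
        self._instance = None            # live FMU; never pickled

    def __getstate__(self):
        # keep only the picklable dependency (the archive bytes)
        return {'_fmu_bytes': self._fmu_bytes}

    def __setstate__(self, state):
        self._fmu_bytes = state['_fmu_bytes']
        self._instance = None            # re-created lazily after unpickling

    def simulate(self, **kwargs):
        # write the archive back to disk and let FMPy load it on demand
        from fmpy import simulate_fmu    # deferred import
        with tempfile.NamedTemporaryFile(suffix='.fmu', delete=False) as f:
            f.write(self._fmu_bytes)
            path = f.name
        try:
            return simulate_fmu(path, **kwargs)
        finally:
            os.remove(path)
```

An object like this should survive a pickle round trip because only the archive bytes travel; the shared library is reloaded on the other side.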
Can you elaborate on why it would be necessary to pickle the whole object vs. just the FMU state?
Our use case might be specific, so I will give some context while simplifying a bit.
We currently have two types of modeling stacks: Dymola FMUs and Python scikit-learn models. We use them differently of course, but as we are developing tools to manage the lifecycle of the Python models (sometimes referred to as MLOps), we were wondering how much of it we could apply to FMUs.
Some patterns we use are the Model Registry (a fancy name for a Python object storage system used to version and access models), Model Deployment (where we package models with a Docker image and create API endpoints), and Model Serving (to query the models and get predictions).
If you'd like to know more, I can go into more detail, and you can check out the tool we are currently implementing this with: bentoml.
Using serialized Python files for the models would greatly simplify our tasks, as we could use the Model Registry to version, store and fetch FMUs out of the box. Model Deployment is easily adapted with a custom Docker image that includes the FMU dependencies. However, we expect we would have to rewrite the serving part anyway, to adapt to the FMUs' simulation mechanics.
Do you know of any other ways or tools to quickly and simply "deploy" FMUs?
Thanks in advance!
P.S.: I stumbled on this project after writing my message, and it's the closest thing I've seen to what we would like to do: https://github.com/mbonvini/LambdaSim
Can't you just read the `*.fmu` into a byte array (that can be serialized) and restore it when you need it?
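That suggestion fits in a couple of lines, since a `.fmu` is just a zip archive on disk. A minimal sketch, where `pack_fmu`/`unpack_fmu` are illustrative names and the paths are placeholders:

```python
import pickle

def pack_fmu(path):
    # read the archive as raw bytes; bytes round-trip through any serializer
    with open(path, 'rb') as f:
        return pickle.dumps(f.read())

def unpack_fmu(payload, path):
    # restore the archive to disk so a simulator can load it again
    with open(path, 'wb') as f:
        f.write(pickle.loads(payload))
    return path
```

The pickled payload can then live in an object store or model registry like any other artifact.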
I'm currently working with a model whose simulation takes about 240 seconds. It would be great if this simulation were quicker, since I have to run it hundreds of times. The only difference between the simulations is the inputs that I inject.
The specificity here is that I'm not running all my simulations at the same time, because I get the inputs that I want to inject into my model at a frequency that I do not control: when a set of inputs is ready, I launch a simulation. If I had wanted to run all the simulations at the same time, I would have followed https://github.com/CATIA-Systems/FMPy/issues/30#issuecomment-372374408.
When I put loggers into the function `simulate_fmu`, I get the following information: `initialize` takes a lot of time (about 140 sec).

**What I'd like (if it is possible)**

To run `initialize` once, save the FMU state right after `initialize`, and then only run the simulation loop for each new set of inputs.

**What I've already tried**

Serializing the FMU after the `initialize` call with `joblib` and `pickle`, but it does not work (`Can't pickle <class 'ctypes.CDLL.__init__.<locals>._FuncPtr'>: it's not found as ctypes.CDLL.__init__.<locals>._FuncPtr`).

**Questions**
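For reference, the "initialize once, rewind many times" idea can be sketched with FMI 2.0 state handling, without pickling anything. This assumes `canGetAndSetFMUstate="true"` and FMPy's `FMU2Slave`-style method names; `make_resettable` is a hypothetical helper, not an FMPy function:

```python
def make_resettable(fmu, start_time=0.0):
    """Run the expensive initialization once and return a reset() that
    rolls the already-instantiated FMU back to the initialized state."""
    fmu.setupExperiment(startTime=start_time)
    fmu.enterInitializationMode()
    fmu.exitInitializationMode()     # the slow (~140 s) step, done once
    state = fmu.getFMUstate()        # snapshot right after initialization

    def reset():
        fmu.setFMUstate(state)       # skip initialization on later runs
    return reset
```

Each time a new set of inputs arrives, calling `reset()` and then stepping the FMU through the simulation loop avoids repeating the initialization.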