helics_cli and its companion database and web interface (hopefully) provide an effective means of managing co-simulations on a single compute node. As prototypes that we hope to improve on, this is sufficient. For multi-node co-simulations, though, the current approach would not be sufficient: each compute node would run its own helics_cli instance, which may or may not have visibility into the entire federation and its signals. Even if the observability complications can be resolved, would each helics_cli instance have controllability over the co-simulation, able to stop it for debugging? Similarly, would each helics_cli instance record all signals (as specified by the user), leading to multiple identical databases? By default, the web interface interacts only with the local database instance; how should this interaction be structured when multiple helics_cli instances are running?
This issue is intended to remind us of these complications as the prototype is deployed and to document our thinking on them.
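One way to think about the "multiple identical databases" question is to treat the per-node databases as shards that get merged before the web interface reads them. Below is a minimal sketch of that idea; it assumes (hypothetically) that each helics_cli instance writes its recorded signals to a per-node SQLite file containing a `signals` table, and the file names, table name, and schema shown here are all illustrative assumptions, not the actual helics_cli layout.

```python
import sqlite3
from pathlib import Path

# Hypothetical per-node database files copied back from each compute node.
NODE_DBS = [Path("node1/observer.db"), Path("node2/observer.db")]
MERGED_DB = Path("merged_observer.db")


def merge_signal_databases(node_dbs, merged_db):
    """Copy every row of an assumed `signals` table from each per-node
    database into a single database the web interface could point at."""
    dest = sqlite3.connect(merged_db)
    # Assumed schema: one row per recorded signal sample, tagged with the
    # node it came from so duplicates across nodes remain distinguishable.
    dest.execute(
        "CREATE TABLE IF NOT EXISTS signals ("
        "  node TEXT, federate TEXT, key TEXT, sim_time REAL, value TEXT)"
    )
    for db_path in node_dbs:
        src = sqlite3.connect(db_path)
        rows = src.execute(
            "SELECT federate, key, sim_time, value FROM signals"
        ).fetchall()
        dest.executemany(
            "INSERT INTO signals VALUES (?, ?, ?, ?, ?)",
            [(db_path.parent.name, *row) for row in rows],
        )
        src.close()
    dest.commit()
    dest.close()


if __name__ == "__main__":
    merge_signal_databases(NODE_DBS, MERGED_DB)
```

Even if something like this works for observability, it sidesteps the controllability question: merging databases after the fact does nothing to let any single helics_cli instance stop the running federation for debugging.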