Open · elfenpiff opened this issue 2 years ago
I think this is something which should be integrated into the introspection, like we currently have it with the used chunks... maybe in a more sophisticated way than we currently have, but definitely something for the introspection.
I agree that the introspection could track this, but this of course adds runtime overhead in RouDi, which ideally can be disabled. This should also be considered for the existing introspection topics. While completely eliminating all overhead is not possible without a compile-time switch, we could consider making the gathering and sending of introspection info depend on a runtime switch.
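A rough sketch of what I mean by the runtime switch (not actual iceoryx code, all names are made up): with the flag off the bookkeeping is skipped, but the check itself remains, which is why only a compile-time switch would remove the overhead entirely.

```cpp
#include <atomic>
#include <cstdint>

struct IntrospectionSwitch
{
    // could be set e.g. via a RouDi command line option at startup
    std::atomic<bool> gatherStats{false};
};

void onChunkAllocated(IntrospectionSwitch& sw, std::atomic<uint64_t>& usedChunks)
{
    if (!sw.gatherStats.load(std::memory_order_relaxed))
    {
        return; // runtime switch off: skip bookkeeping (still a branch, i.e. not zero overhead)
    }
    usedChunks.fetch_add(1U, std::memory_order_relaxed);
}
```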
As for the data we need to estimate a required configuration: tracking peak usage is possible, but may still be inaccurate. In general it will be hard to predict maximum memory usage, and impossible if the untyped API is used.

TL;DR: tracking peak usage is somewhat doable (we do this for the user data already, but minFree is a confusingly bad name). Deriving an optimal configuration from particular runs is hard, if not impossible in general. Without the untyped API a worst case estimate should be possible without introspection, i.e. at compile time. This will lead to very large memory consumption estimates though, which can be far from reality.
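For illustration, a sketch of how high-water-mark style peak tracking per mempool could look; this is not the existing introspection code and all names are assumptions:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

struct MempoolStats
{
    uint32_t capacity{0};  // configured number of chunks
    uint32_t inUse{0};     // currently used chunks
    uint32_t peakInUse{0}; // maximum observed usage (the actual information of interest)

    void allocate()
    {
        ++inUse;
        peakInUse = std::max(peakInUse, inUse);
    }

    void release() { --inUse; }

    // the same information expressed the way "minFree" reports it today
    uint32_t minFree() const { return capacity - peakInUse; }
};

int main()
{
    MempoolStats stats;
    stats.capacity = 1000U;
    for (int i = 0; i < 37; ++i) { stats.allocate(); }
    for (int i = 0; i < 30; ++i) { stats.release(); }
    std::cout << "peak usage: " << stats.peakInUse          // 37
              << ", minFree: " << stats.minFree() << '\n';  // 963
    return 0;
}
```

Reporting the peak directly instead of the remaining free chunks would avoid the minFree confusion.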
AFAIK the introspection currently only collects the statistics and sends them only if there are subscribers.
@MatthiasKillat the idea is to provide the peak usage to the developer; with a safety margin of +10% they may be good to go.
But when you look at the number of ports and have a static communication layout, those numbers should be pretty accurate.
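For example (made-up numbers, just illustrating the rule of thumb of peak plus ~10%):

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>

// observed peak usage plus a safety margin, rounded up
uint32_t recommendedCapacity(uint32_t observedPeak, double margin = 0.10)
{
    return static_cast<uint32_t>(std::ceil(observedPeak * (1.0 + margin)));
}

int main()
{
    std::cout << recommendedCapacity(37U) << '\n'; // 41 chunks for an observed peak of 37
    return 0;
}
```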
Brief feature description
When one would like to deploy iceoryx on a target with tight memory restrictions, one has to adjust the memory configuration as well as all the resources in `iceoryx_posh_types.hpp`, like condition variables, number of ports, listener properties, CaPro FIFO sizes, etc.

It would be awesome if one could start the actual system and run it for a while, while an additional iceoryx tool runs and states how many of the resources were used after the run. This would massively help in finding the optimal configuration for such a system.
The workflow in which such a tool would be used could be:
1. The developer deploys the actual system with a generous initial configuration and runs it for a while under a representative load.
2. The tool records the resource usage and states, after the run, how many of the resources were actually used.
3. The developer adjusts `iceoryx_posh_types.hpp` and the memory configuration accordingly and deploys the system. Or better, the tool exports a toml and hpp file where the optimal configuration is stored (e.g. along the lines sketched below).
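The exported toml could follow the format RouDi already reads for the mempool configuration; a made-up example of what such an exported file might contain (sizes and counts are purely illustrative):

```toml
[general]
version = 1

[[segment]]

# one entry per mempool, sized from the observed peak usage plus a safety margin
[[segment.mempool]]
size = 128
count = 41      # e.g. observed peak of 37 chunks + ~10%

[[segment.mempool]]
size = 1024
count = 5500
```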