opendr-eu / opendr

A modular, open and non-proprietary toolkit for core robotic functionalities by harnessing deep learning
Apache License 2.0

ROS nodes FPS performance measurements #419

Closed · tsampazk closed this 1 year ago

tsampazk commented 1 year ago

This PR adds time performance measurement of the tools' inference. For each node, the time it takes to run (only) the inference step is measured and published on a performance topic. Publishing this message is optional, i.e. the relevant topic needs to be set via argparse. The message can be subscribed to, or echoed, to show the current FPS.
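
For illustration, here is a minimal sketch (not the PR's actual code) of how a ROS1 node could time only the inference call and optionally publish the result as FPS. The topic name, the `learner` object, and the class/argument names are assumptions made for the example.

```python
# Sketch: optional FPS publishing around an inference call (hypothetical names).
from time import perf_counter

import rospy
from std_msgs.msg import Float32


class TimedInferenceNode:
    def __init__(self, performance_topic=None):
        # Publishing is optional: the publisher is only created when a
        # performance topic is provided (e.g. via an argparse argument).
        self.performance_publisher = (
            rospy.Publisher(performance_topic, Float32, queue_size=1)
            if performance_topic else None
        )

    def callback(self, image, learner):
        # Measure only the inference call itself.
        start = perf_counter()
        result = learner.infer(image)
        elapsed = perf_counter() - start

        if self.performance_publisher is not None:
            # Publish frames per second for the last processed frame.
            self.performance_publisher.publish(Float32(data=1.0 / elapsed))
        return result
```

The published value can then be inspected directly, e.g. with `rostopic echo` on the chosen performance topic.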

If a more accurate performance measurement is needed, one can use the new performance node, which subscribes to the performance topic, calculates a running average of the FPS measurements, and prints it to the console along with the time it took to process the last frame.
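
A minimal sketch of such a subscriber is shown below, assuming a placeholder topic name and window size; it is not the actual performance node from the PR.

```python
# Sketch: subscriber keeping a running average of published FPS values.
from collections import deque

import rospy
from std_msgs.msg import Float32


class PerformanceNode:
    def __init__(self, performance_topic="/opendr/performance", window=20):
        # Keep the last `window` FPS measurements for a running average.
        self.fps_window = deque(maxlen=window)
        rospy.Subscriber(performance_topic, Float32, self.callback)

    def callback(self, msg):
        if msg.data <= 0.0:
            return  # ignore invalid measurements
        self.fps_window.append(msg.data)
        avg_fps = sum(self.fps_window) / len(self.fps_window)
        # msg.data is the FPS of the last frame, so its inverse is the frame time.
        rospy.loginfo("Average FPS: %.2f, last frame time: %.4f s",
                      avg_fps, 1.0 / msg.data)


if __name__ == "__main__":
    rospy.init_node("performance_node", anonymous=True)
    PerformanceNode()
    rospy.spin()
```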

@thomaspeyrucain @ad-daniel Let me know how this looks and whether it's sufficient and/or convenient before I add it to the rest of the ROS1/ROS2 nodes.

ad-daniel commented 1 year ago

Implementation-wise I think this is a nice way of doing it. I have some doubts about whether measuring just the infer method is enough, though, because in that case I don't expect any real difference between the value obtained this way and the values already reported in the documentation when the tools were benchmarked with the Python API: it's measuring the same thing. So the question is, can we actually use these values to "show" that the performance requirements of the use case have been met? By taking only the infer method, it sounds to me like an evaluation of the tool, not of a task/use case. What else should be included and where the boundary should be, I don't know, but it should be settled before making the change for all nodes.

tsampazk commented 1 year ago

> Implementation-wise I think this is a nice way of doing it. I have some doubts about whether measuring just the infer method is enough, though, because in that case I don't expect any real difference between the value obtained this way and the values already reported in the documentation when the tools were benchmarked with the Python API: it's measuring the same thing. So the question is, can we actually use these values to "show" that the performance requirements of the use case have been met? By taking only the infer method, it sounds to me like an evaluation of the tool, not of a task/use case. What else should be included and where the boundary should be, I don't know, but it should be settled before making the change for all nodes.

Oh, I was under the impression that this was what we wanted (measuring only tool inference). I'll wait for more input before moving forward.