robotology / human-dynamics-estimation

Software repository for estimating human dynamics
BSD 3-Clause "New" or "Revised" License

Investigate rviz frames per second drop during pHRI visualization #134

Closed yeshasvitirupachuri closed 5 years ago

yeshasvitirupachuri commented 5 years ago

Today, while trying to set up the pHRI experiment for videos, I noticed that rviz is very slow due to a drop in frames per second (fps).

Test configuration

The iCub wearable device is launched on my machine, and HDE runs on the icub30 machine with the pHRI configuration file. After running just a single yarprobotstatepublisher for the human model, rviz launches and runs smoothly without any drop in fps. At this point we have the complete HDE visualization with the FTShoes wrenches, the human model TFs, and the human joint torque temperature messages. As soon as a second yarprobotstatepublisher is launched with the iCub model, the rviz fps drops from 40 to 1 and the visualization becomes very slow.

Possible problems
Possible solutions

Both the human and robot model URDFs contain many links that do not have any visual properties, e.g. gyroscope or accelerometer sensor frames. yarprobotstatepublisher provides transforms for these links too, but they are not necessary for rviz visualization during pHRI. So, we can turn off the visualization of the unnecessary link TFs in rviz and see if this improves the fps.
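To make the distinction concrete, a link with no `<visual>` element contributes nothing that rviz can draw. A minimal sketch (the URDF snippet and link names below are made up for illustration, not taken from the HDE models) that splits a model's links by whether they carry visual geometry:

```python
import xml.etree.ElementTree as ET

# Minimal URDF snippet: one link with visual geometry, plus two
# sensor/contact-style links attached with no geometry at all.
URDF = """
<robot name="demo">
  <link name="l_upper_leg">
    <visual><geometry><box size="0.1 0.1 0.4"/></geometry></visual>
  </link>
  <link name="l_leg_gyro_frame"/>
  <link name="l_upper_leg_back_contact"/>
</robot>
"""

def split_links_by_visual(urdf_string):
    """Return (links with <visual>, links without <visual>)."""
    root = ET.fromstring(urdf_string)
    with_visual, without_visual = [], []
    for link in root.findall("link"):
        if link.find("visual") is not None:
            with_visual.append(link.get("name"))
        else:
            without_visual.append(link.get("name"))
    return with_visual, without_visual

with_v, without_v = split_links_by_visual(URDF)
print(with_v)     # ['l_upper_leg']
print(without_v)  # ['l_leg_gyro_frame', 'l_upper_leg_back_contact']
```

Only the first list matters for rendering; the second list is what a publisher could skip without changing anything rviz draws.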

PS: When we shot the video with two human models, we did not notice such a significant drop in rviz fps. We should note down exactly how many TFs are present in the transform server in that case.

CC @lrapetti @claudia-lat @DanielePucci

yeshasvitirupachuri commented 5 years ago

We will investigate this issue further with @aerydna on Friday

DanielePucci commented 5 years ago

CC @traversaro and @diegoferigo

diegoferigo commented 5 years ago

The number of TFs might be an explanation.

I'm not an expert on this, but I remember that each TF has a timespan of validity. Are the TFs (with different timestamps) streamed until they are no longer valid? If so, those 325 transforms might be duplicated, overloading RViz. This is just a wild guess.
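The validity-window idea can be illustrated with a toy buffer (this is a deliberately simplified stand-in for the real tf2 buffer, which keeps a time history per frame rather than just the latest sample): because a lookup fails once the stored transform is older than the validity window, publishers must keep re-streaming every transform, so the load scales with the total frame count on every cycle.

```python
class TinyTfBuffer:
    """Toy TF buffer: keeps only the latest stamped transform per
    frame and treats entries older than `validity` seconds as expired."""

    def __init__(self, validity=1.0):
        self.validity = validity
        self.latest = {}  # frame name -> (timestamp, transform)

    def set_transform(self, frame, stamp, tf):
        self.latest[frame] = (stamp, tf)

    def lookup(self, frame, now):
        stamp, tf = self.latest.get(frame, (None, None))
        if stamp is None or now - stamp > self.validity:
            return None  # never received, or expired
        return tf

buf = TinyTfBuffer(validity=0.5)
buf.set_transform("l_hand_dh_frame", stamp=0.0, tf="T0")
print(buf.lookup("l_hand_dh_frame", now=0.1))  # T0 (still valid)
print(buf.lookup("l_hand_dh_frame", now=1.0))  # None (expired)
```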

yeshasvitirupachuri commented 5 years ago

After some initial investigation with iCub in simulation (iCubGazeboV2_5 model), one obvious observation is that yarprobotstatepublisher publishes transforms for all the frames, irrespective of whether they have visual components. This includes the frames of the sensors and other contact frames, e.g. l_hand_dh_frame or l_upper_leg_back_contact. These additional transforms do not add any visual component in rviz as they have no geometry.

Screenshot from 2019-07-31 18-29-00

So, as discussed with @diegoferigo, we should add a parameter option to yarprobotstatepublisher that stops it from streaming sensor frame transforms to the transform server. This can be done by loading the URDF model and ignoring the sensors while parsing it.

@lrapetti @traversaro @DanielePucci

traversaro commented 5 years ago

Ideally we should publish all static transforms on /tf_static, to avoid overloading the "normal" /tf topic, see http://wiki.ros.org/tf2/Migration#Addition_of_.2BAC8-tf_static_topic .

yeshasvitirupachuri commented 5 years ago

@traversaro if I understood correctly, the /tf_static topic has no time limit on its transforms. Do we need this feature in yarprobotstatepublisher? If so, we can open another issue. Concerning this issue, I believe loading a reduced model by turning the addSensorFramesAsAdditionalFrames flag off will already help with rviz visualization.

CC @aerydna

diegoferigo commented 5 years ago

What @traversaro meant is that the TF associated with a sensor does not change over time (under the assumption that it is computed wrt the parent link, and not the world frame). This is because, if I understood correctly, sensor frames are created as fake links connected to their parent through fixed joints.

All these relative TFs can be streamed to /tf_static, which avoids polluting /tf with fixed transforms.
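The point can be shown with plain rotation matrices (a toy 2D sketch, not HDE code): the link → sensor transform across a fixed joint is a constant, while the root → sensor transform still varies because it composes the moving root → link transform with that constant.

```python
import math

def rot(theta):
    """2x2 rotation matrix as nested tuples."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

# link -> sensor: fixed joint, so this offset never changes
T_link_sensor = rot(0.3)

# root -> link changes as the joint moves with angle q
for q in (0.0, 0.5, 1.0):
    T_root_link = rot(q)
    T_root_sensor = matmul(T_root_link, T_link_sensor)
    # T_root_sensor equals rot(q + 0.3): it varies with q,
    # while T_link_sensor stays constant and could live on /tf_static
```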

traversaro commented 5 years ago

Just to clarify, I am totally ok with exposing addSensorFramesAsAdditionalFrames in yarprobotstatepublisher. I just wanted to point out that an alternative (probably more complicated) solution to the performance problem could be to use /tf_static, as explained by @diegoferigo.

yeshasvitirupachuri commented 5 years ago

From what I observed yesterday, the sensor frame TFs are also time varying. So, I believe they are not computed wrt the parent and hence are not static.

traversaro commented 5 years ago

> From what I observed yesterday, the sensor frame TFs are also time varying. So, I believe they are not computed wrt the parent and hence are not static.

This is because we publish the root_link --> sensor_frame directly. If we published the link_frame --> sensor_frame transform instead, it would be static and could be published on /tf_static.

yeshasvitirupachuri commented 5 years ago

Correct me if I am wrong: rviz needs all the TFs with respect to a single frame like root_link or ground, right?
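As background on this question: tf does not require every transform to be expressed wrt one frame. Publishers only provide parent → child transforms forming a connected tree, and consumers like rviz resolve any frame against the chosen fixed frame by chaining transforms up the tree. A toy sketch with scalar 1-D "offsets" standing in for transforms (frame names are illustrative):

```python
# child -> (parent, offset along the chain); a toy stand-in for
# the parent->child transforms a tf tree is built from
parent = {
    "l_upper_leg": ("root_link", 0.2),
    "l_leg_gyro_frame": ("l_upper_leg", 0.05),
}

def offset_from_root(frame):
    """Chain parent links up the tree, accumulating offsets,
    the way a tf consumer resolves a frame against a fixed frame."""
    total = 0.0
    while frame != "root_link":
        frame, offset = parent[frame]
        total += offset
    return total

print(offset_from_root("l_leg_gyro_frame"))  # 0.2 + 0.05
```

So publishing root_link → sensor_frame directly is a choice, not a requirement; link_frame → sensor_frame would be enough for rviz to resolve the chain.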

yeshasvitirupachuri commented 5 years ago

Currently, yarprobotstatepublisher publishes the transforms (TFs) of all the frames present in the model passed as an argument. These frames also include the frames associated with sensors, e.g. skin sensor frames. Frames without any geometrical properties are not necessary for rviz visualization, and we can safely avoid sending their TFs from yarprobotstatepublisher. In the case of iCub, the total number of frames (including the sensor frames) is 262, while the number of links that have geometrical properties for rviz visualization is only 39.
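The resulting load difference is easy to estimate. Assuming a publishing rate of 40 Hz (an illustrative figure matching the healthy rviz fps reported above, not a measured value), streaming all 262 frames versus only the 39 links with geometry gives:

```python
total_frames = 262  # all iCub frames, including sensor frames
visual_links = 39   # links that actually carry visual geometry
rate_hz = 40        # assumed publishing rate, for illustration only

all_frames_per_sec = total_frames * rate_hz  # transforms/s, full model
links_only_per_sec = visual_links * rate_hz  # transforms/s, links only

print(all_frames_per_sec)  # 10480
print(links_only_per_sec)  # 1560
```

Under these assumptions, dropping the geometry-less frames cuts the transform traffic by roughly a factor of 6.7.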

I updated yarprobotstatepublisher with an additional optional boolean parameter --reduced-model. The default value is false, and yarprobotstatepublisher streams the TFs of all the frames present in the URDF model. If set to true, only the TFs of the links are streamed. This will help speed up rviz visualization.

I tested rviz visualization with the --reduced-model option set to true and the visualization looks fine; the sensor frames (without any geometrical properties) no longer receive any transforms (TFs).

Screenshot from 2019-08-01 14-14-34

@lrapetti @DanielePucci

yeshasvitirupachuri commented 5 years ago

I tested it with the real robot and offline wearable data from Xsens and the FTShoes. The visualization is super smooth now at 31 fps! With the --reduced-model option (https://github.com/robotology/idyntree/pull/548), the transform count when running both the human and robot models is 102.

@lrapetti and I will test it soon with online data from the Xsens suit.

@aerydna @lrapetti @DanielePucci

DanielePucci commented 5 years ago

Well done @Yeshasvitvs!

yeshasvitirupachuri commented 5 years ago

Rviz runs smoothly now with online data as well. Closing this issue.