ADVRHumanoids / XBotGUI

XBot Graphical User Interface for XBot powered robots

3D Visual Perception Integration #34

Open dkanou opened 6 years ago

dkanou commented 6 years ago

I am opening this topic to discuss the integration between the vision pipeline and the GUI, and the connection with the other modules.

For now we need to add:

- A geometry_msgs::PoseStamped listener that receives messages on a topic named grasp_pose

This message should be used to initialize the GUI's interactive marker for an object. Note that the message may be expressed in any frame (for now it is published in /multisense/left_camera_optical_frame if there is no tf transformation to the /world_odom frame).
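A minimal sketch of such a listener, assuming ROS with the tf library; the node and callback names are illustrative, and the fallback to the original frame when no transform is available follows the behaviour described above:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <tf/transform_listener.h>

// Sketch: listen on grasp_pose and transform the pose into /world_odom when a
// tf transformation is available; otherwise keep it in its original frame
// (e.g. /multisense/left_camera_optical_frame).
tf::TransformListener* g_tf_listener = nullptr;

void graspPoseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
    geometry_msgs::PoseStamped pose_world = *msg;
    try
    {
        g_tf_listener->waitForTransform("world_odom", msg->header.frame_id,
                                        msg->header.stamp, ros::Duration(1.0));
        g_tf_listener->transformPose("world_odom", *msg, pose_world);
    }
    catch (tf::TransformException& ex)
    {
        ROS_WARN("No tf to world_odom, keeping frame %s: %s",
                 msg->header.frame_id.c_str(), ex.what());
    }
    // pose_world would now initialize the GUI's interactive marker.
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "grasp_pose_listener");
    ros::NodeHandle nh;
    tf::TransformListener tf_listener;
    g_tf_listener = &tf_listener;
    ros::Subscriber sub = nh.subscribe("grasp_pose", 1, graspPoseCallback);
    ros::spin();
    return 0;
}
```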

In the future we may change this message or build on top of it.

alessandrosettimi commented 6 years ago

The topic for the clicked point will be /grasp_click.
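For illustration, a GUI-side publisher could look like the following; the geometry_msgs::PointStamped type is an assumption here (mirroring RViz's /clicked_point convention), since the thread does not state the actual message type:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

// Hypothetical GUI-side sketch: publish the user's clicked 3D point on
// /grasp_click. The PointStamped type is an assumption, not confirmed above.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "grasp_click_publisher");
    ros::NodeHandle nh;
    ros::Publisher click_pub =
        nh.advertise<geometry_msgs::PointStamped>("/grasp_click", 1);

    geometry_msgs::PointStamped click;
    click.header.frame_id = "multisense/left_camera_optical_frame";
    click.header.stamp = ros::Time::now();
    click.point.x = 0.5;  // illustrative coordinates; in the real GUI these
    click.point.y = 0.0;  // come from the user's click in the 3D view
    click.point.z = 1.0;

    ros::Duration(0.5).sleep();  // let subscribers connect before publishing
    click_pub.publish(click);
    ros::spinOnce();
    return 0;
}
```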

alessandrosettimi commented 6 years ago

The basic integration is done; I will leave this issue open to track further integration steps.

dkanou commented 6 years ago

A question about the GUI with respect to vision. For all the Tasks we want to use two clicks (/grasp_click), one for the wall and one for the handle, after which I publish the /grasp_pose ROS msg. We should have most of this already, but can we check?

One more thing, because I do not remember: I publish on the /grasp_pose topic, then you take it and remap it to whatever topic name each task expects in the GUI, right? E.g. for debris it is called /debri_pose. Or should we tell the Task people to use /grasp_pose as their main topic for receiving poses?

Thanks!

alessandrosettimi commented 6 years ago

> For all the Tasks we want to use two clicks (/grasp_click), one for the wall and one for the handle, after which I publish the /grasp_pose ROS msg. We should have most of this already, but can we check?

As far as I can see, we currently have a single-click interface. I can easily implement a two-click interface; should this be the standard? I.e., do you expect two clicks for every recognition?

> One more thing, because I do not remember: I publish on the /grasp_pose topic, then you take it and remap it to whatever topic name each task expects in the GUI, right? E.g. for debris it is called /debri_pose.

Yes.

> Or should we tell the Task people to use /grasp_pose as their main topic for receiving poses?

No. They will use the standard interface, which does not depend on vision. This is the basic pipeline:

VISION -> INTERACTIVE_MARKER -> CONTROL MODULE

As an example, if you click Visual Perception Estimation in the extinguisher tab with handle as the selected object, the handle object will be updated using the vision feedback, and the control modules listening to /hose_pose will have access to it.
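A sketch of the control-module end of this pipeline, under the assumption that it only subscribes to its task topic (here /hose_pose) and never talks to vision directly:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Hypothetical control-module sketch: it listens only to its task topic
// (/hose_pose) and is unaware of whether the pose came from the vision
// pipeline or from a manually placed interactive marker.
void hosePoseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
    ROS_INFO("Received hose pose in frame %s", msg->header.frame_id.c_str());
    // ...feed the pose to the task's planning/control here
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "hose_control_module");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/hose_pose", 1, hosePoseCallback);
    ros::spin();
    return 0;
}
```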

dkanou commented 6 years ago

Great. To make things easier, we should always expect 2 clicks. Thanks!

alessandrosettimi commented 6 years ago

Can we give the user some information? Like, first click on the wall, or a background surface, and second click on the object?

dkanou commented 6 years ago

Yes, the first click should always be on the wall and the second on the object handle. If the estimation fails, the user will need to click twice again, and so on.
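For illustration, the vision side could buffer the two clicks like this; the /grasp_click message type and the pose computation are assumptions, with the actual estimation left as a placeholder:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <geometry_msgs/PoseStamped.h>
#include <vector>

// Hypothetical sketch: collect two clicks from /grasp_click (first on the
// wall, second on the object handle) and publish the resulting /grasp_pose.
std::vector<geometry_msgs::PointStamped> clicks;
ros::Publisher grasp_pose_pub;

void clickCallback(const geometry_msgs::PointStamped::ConstPtr& msg)
{
    clicks.push_back(*msg);
    if (clicks.size() < 2) return;  // wait for the second click

    geometry_msgs::PoseStamped grasp_pose;
    grasp_pose.header = clicks[1].header;
    // Placeholder: a real implementation would estimate the pose from the
    // wall click (background surface) and the handle click together.
    grasp_pose.pose.position = clicks[1].point;
    grasp_pose.pose.orientation.w = 1.0;

    grasp_pose_pub.publish(grasp_pose);
    clicks.clear();  // a failed attempt simply requires two new clicks
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "two_click_grasp_estimator");
    ros::NodeHandle nh;
    grasp_pose_pub = nh.advertise<geometry_msgs::PoseStamped>("/grasp_pose", 1);
    ros::Subscriber sub = nh.subscribe("/grasp_click", 2, clickCallback);
    ros::spin();
    return 0;
}
```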

alessandrosettimi commented 6 years ago

Ok, I implemented the two-click strategy in 13b1dd16e2e9019e7d47f38124ceebf45684c5c5.

dkanou commented 6 years ago

Thanks Ale. Can you give me the execution sequence for this? Also, what do we select for the topics, etc.?

alessandrosettimi commented 6 years ago

So let's use the Hose task as an example.