v4r-tuwien / grasping_pipeline


New strategy for detecting successful/unsuccessful grasps #21

Open lexihaberl opened 2 days ago

lexihaberl commented 2 days ago

Problem: Currently, we determine the success of a grasp based on whether the gripper is fully closed. This works well for most objects, since the gripper doesn't fully close due to their thickness. However, this approach leads to false negatives with thin objects (e.g., sheets of paper), as the gripper may fully close even when the object has been successfully grasped.

We need an additional check to verify if the object has been removed after the initial gripper-closure check 'failed'.
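For reference, a minimal sketch of how such a two-stage check could look, assuming the gripper opening can be read from a joint state topic. The topic name, joint name, threshold and the fallback function are placeholders, not the pipeline's actual API:

```python
import rospy
from sensor_msgs.msg import JointState

GRIPPER_JOINT = 'hand_motor_joint'   # assumed joint name
CLOSED_THRESHOLD = -0.8              # assumed fully-closed position (rad), to be tuned

def verify_object_removed_from_scene():
    # Hypothetical fallback: re-detect the target or compare scene changes
    # (the options discussed in this issue). Stubbed out here.
    return False

def grasp_succeeded():
    """Closure check first; fall back to a second verification for thin objects."""
    msg = rospy.wait_for_message('/hsrb/joint_states', JointState, timeout=1.0)
    opening = msg.position[msg.name.index(GRIPPER_JOINT)]
    if opening > CLOSED_THRESHOLD:
        # Fingers stopped before closing completely -> something is between them.
        return True
    # Gripper fully closed: could be a miss, or a thin object (e.g. a sheet of paper),
    # so run the additional check instead of reporting failure right away.
    return verify_object_removed_from_scene()
```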

Potential solutions discussed in the meeting:

jibweb commented 1 day ago

What is the current strategy?

lexihaberl commented 1 day ago

Sorry, was interrupted by an unforeseen, sudden lunch break >.<

jibweb commented 1 day ago

Scene change detection is definitely full of pitfalls: if you even slightly touch the object (which is the most likely failure in my opinion; we don't usually fail by 30 cm), it can fall apart. So I would be leaning toward the re-detection.

Alternatively, the in-hand camera could provide extra info, or the robot could look at its gripper instead of the grasping area. In both cases, by moving the gripper slightly we would expect scene changes (either around the gripper and nowhere else when using the head camera, or far from the gripper when using the in-hand camera that moves with the gripper). That could be a nice approach with minimal assumptions.
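A rough sketch of how that gripper-centred change check could look, assuming two point clouds (before/after a small gripper motion) already expressed in the same frame, plus the gripper position in that frame; the function name and thresholds are illustrative only:

```python
import numpy as np
import open3d as o3d

def changes_concentrated_near_gripper(cloud_before, cloud_after, gripper_pos,
                                      change_thresh=0.01, radius=0.15):
    """Heuristic for the head-camera case: a held object should produce
    scene changes only in a small region around the gripper."""
    # Distance of every point in the 'after' cloud to its nearest neighbour
    # in the 'before' cloud; large distances mark changed points.
    dists = np.asarray(cloud_after.compute_point_cloud_distance(cloud_before))
    changed = np.asarray(cloud_after.points)[dists > change_thresh]
    if len(changed) == 0:
        return False  # nothing moved at all
    near_gripper = np.linalg.norm(changed - gripper_pos, axis=1) < radius
    # Most of the change should sit around the gripper and (almost) nowhere else.
    return near_gripper.mean() > 0.9
```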

dzimmer999 commented 1 day ago

I’ve been testing the performance of scene change detection, but I've run into a few hiccups along the way. The issue arises because the HSR checks whether the grasp was successful before returning to the "table" waypoint, meaning I need to compare the point clouds taken right after grasping. Unfortunately, ICP wasn’t able to reliably align these point clouds. Instead, I tried using the known transformation from the camera to the map frame for both clouds (before and after grasping). This approach resulted in the correct orientation but an incorrect translation:

[image: transformed point clouds showing correct orientation but a clear translation offset]

I suspect the problem might be related to the order of translation and rotation differing between ROS tf (as mentioned here) and how Open3D handles transformations (detailed here). If anyone has encountered this issue, I’d appreciate your insights. If I can’t resolve it, I’ll consider implementing JB's idea for re-detection. (I was also thinking of using the table plane along with the marker as a reference point to transform the point cloud in this manner.)
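For what it's worth, the ordering concern can be reproduced in isolation: a tf transform (rotation R, translation t) maps a point as p' = R·p + t, and applying the translation before the rotation yields exactly the symptom of a correct orientation with a wrong offset. A small numpy illustration with arbitrary values:

```python
import numpy as np

R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])      # example 90-degree rotation about z
t = np.array([1.0, 2.0, 0.0])      # example translation
p = np.array([1.0, 0.0, 0.0])

rotate_then_translate = R @ p + t   # what a tf transform encodes
translate_then_rotate = R @ (p + t) # wrong order -> same orientation, different offset
print(rotate_then_translate)        # [1. 3. 0.]
print(translate_then_rotate)        # [-2. 2. 0.]
```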

jibweb commented 1 day ago

I am confused how the robot translation could be so wrong; it's almost like it doesn't update its position at all during that grasping time :-/ It might be easier to only manipulate one of the two kinds of transformation. Open3D has a function to convert a quaternion into a rotation matrix, which can be combined with the translation into a 4x4 pose matrix, the input to Open3D's transform() function.
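A minimal sketch of that suggestion, assuming the camera-to-map transform is available from tf as a translation and a quaternion; note that ROS tf quaternions are ordered (x, y, z, w) while Open3D's helper expects w first:

```python
import numpy as np
import open3d as o3d

def tf_to_matrix(translation, quaternion_xyzw):
    """Build the 4x4 pose matrix expected by Open3D's transform()."""
    x, y, z, w = quaternion_xyzw
    T = np.eye(4)
    # Open3D expects the quaternion as (w, x, y, z).
    T[:3, :3] = o3d.geometry.get_rotation_matrix_from_quaternion([w, x, y, z])
    T[:3, 3] = translation
    return T

# e.g. cloud_in_map = cloud_in_camera.transform(tf_to_matrix(trans, quat))
```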