Open kavidey opened 2 years ago
I got data from one of the depth cameras on Tahoma and was able to test both the fully manual and the more automated grasping algorithms. They needed a bit of tuning (mostly resizing and downsampling the point cloud into the expected formats); the algorithms themselves worked well without many changes.
Blue is the fully manual algorithm (grey dots are where the user clicked)
Red is the automatic grasping algorithm (the user clicks on the center of the object and specifies a direction)
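For reference, the downsampling step mentioned above can be done with a simple voxel-grid filter: every point is binned into a fixed-size voxel and replaced by the average of its bin. This is a minimal NumPy sketch under that assumption; the function and parameter names here are illustrative, not the actual names in the grasping code (which may use a library filter instead).

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Downsample an (N, 3) point cloud to one averaged point per occupied voxel.

    Hypothetical helper for illustration; not taken from the project's code.
    """
    # Assign each point to an integer voxel index
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; `inverse` maps each point to its voxel's row
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True
    )
    # Sum the points in each voxel, then divide by the count to average
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: a dense cloud in a unit cube collapses to at most
# one point per 10 cm voxel (<= 1000 points here)
cloud = np.random.rand(10_000, 3)
down = voxel_downsample(cloud, 0.1)
print(down.shape)
```

Choosing the voxel size trades grasp-pose resolution against runtime, which is likely where most of the tuning effort went.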
@kavidey thank you. Can we test that on the robot?
Even without running the code on the real robot, testing and tuning on example grasps would be extremely useful.
@mayacakmak mentioned that we want to focus specifically on bagged and deformable objects.