saikishor opened this issue 3 years ago
Hi @saikishor, for PERCH 2.0, you only need to train an instance segmentation model. You can use any off-the-shelf model of your choice, though we have integrated with MaskRCNN in our work. The pose estimation part itself doesn't require training.
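Since any off-the-shelf instance segmentation model works here, one option is fine-tuning torchvision's Mask R-CNN. The sketch below is not the authors' training code, just a minimal illustration; the class names and the hidden-layer size are placeholders, and the label-mapping convention (0 reserved for background) follows torchvision's detection models.

```python
def class_index(class_names):
    """Map class names to contiguous integer labels, reserving 0 for
    background as torchvision's detection models expect."""
    return {name: i + 1 for i, name in enumerate(sorted(class_names))}


def build_model(num_classes):
    """Load a COCO-pretrained Mask R-CNN and swap its box and mask heads
    for a custom number of classes (including background)."""
    # Heavy imports are kept local so class_index stays dependency-free.
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model


# Example: two YCB-style object classes plus background.
# labels = class_index(["cracker_box", "mustard_bottle"])
# model = build_model(num_classes=len(labels) + 1)
```

From there the model can be trained with a standard torchvision detection training loop on your annotated dataset.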
That's great. Is there any documentation on the training part? If so, could you share a link to it?
I would also like to know how accurate the pose estimation is. For instance, does it work at distances greater than 1-1.5 m? Or at what distance do you recommend using it for precise localization?
I used this to train a segmentation model for the YCB Video dataset. The accuracy results for this dataset are published here. The accuracy with distance purely depends on the quality of point cloud at that distance. If the point cloud has sufficient points, PERCH 2.0 would be able to find a matching pose.
@aditya2592 sorry to bother you again, but could you please point me to the pose estimation part? I want to see if I can use it directly, since I already have segmentation done (or at least I could use colors directly in this case), so I am mainly interested in the pose estimation part.
Apologies for the late reply. Did you try to set up the code after following the steps here? After that, I would recommend:
sbpl_perception/src/scripts/tools/fat_dataset/fat_pose_image.py
@aditya2592 Thanks a lot for the detailed explanation.
I was wondering if you could expand on "edit the code in fat_pose_image.py", since the file seems to contain a lot of code, and I am not sure where to start looking.
Thanks a lot!
How do we write a config file, like the ones in sbpl_perception/config, for running a custom dataset?
Hi @ErinZhang1998, instead of editing fat_pose_image.py, I think a better idea would be to write your own ROS service/client architecture that communicates with the C++ code.
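To make the service/client suggestion concrete, here is a hedged sketch of what the Python side might look like: it packages a segmentation result and calls a pose-estimation service backed by the C++ node. The service name `/perch/estimate_pose` and the `EstimatePose` srv type are assumptions for illustration, not the actual sbpl_perception interface.

```python
def flatten_mask(mask):
    """Flatten a binary instance mask (list of rows of 0/1) into row-major
    bytes plus shape metadata, ready to drop into a ROS message field."""
    height = len(mask)
    width = len(mask[0]) if height else 0
    data = bytes(v for row in mask for v in row)
    return {"height": height, "width": width, "data": data}


def request_pose(mask, class_name):
    """Illustrative client call; requires a running ROS master and a C++
    node that actually defines the hypothetical EstimatePose service."""
    import rospy  # local import: ROS is only needed at call time
    from sbpl_perception.srv import EstimatePose  # hypothetical srv type

    rospy.wait_for_service("/perch/estimate_pose")  # assumed service name
    proxy = rospy.ServiceProxy("/perch/estimate_pose", EstimatePose)
    payload = flatten_mask(mask)
    # The response layout (e.g. a 6-DoF pose per instance) would be
    # defined by the srv file on the C++ side.
    return proxy(class_name=class_name, **payload)
```

The C++ node would advertise the matching service and run the existing PERCH 2.0 pose search, which keeps your segmentation pipeline decoupled from fat_pose_image.py entirely.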
Hello @aditya2592 ,
By any chance, is there a provision to train PERCH 2.0 with a custom dataset? If so, how? Please let me know.