fishbotics opened 4 weeks ago
Hi, this is a good question, as I am also interested in how our work can benefit robotic learning, so please feel free to ask me any questions about this.
Now, back to your question. Although we haven't tried this ourselves, based on my experience, "training this model using complete point clouds and evaluating it on incomplete point clouds" can definitely work.
Assume that you have an open-source dataset with labelled full scenes and want to run inference on frame-level point clouds from an RGB-D camera. You can try the following to make the performance stronger:
I think these steps can improve performance in the case you described, but I believe the existing config, without any modification, is also good enough for an initial try.
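As a side note, here is a minimal sketch (not from this repo) of how one might turn a single RGB-D depth frame into the frame-level point cloud mentioned above before feeding it to the model. The intrinsics, depth truncation, and fixed point count below are placeholder assumptions and should be matched to whatever preprocessing your training config uses.

```python
import numpy as np


def depth_to_point_cloud(depth_m: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float,
                         max_depth: float = 3.0,
                         num_points: int = 20000) -> np.ndarray:
    """Back-project an HxW depth image (in meters) into an (N, 3) point cloud
    in the camera frame, then randomly downsample to a fixed size."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Drop missing returns and points beyond the (assumed) truncation distance.
    valid = (depth_m > 0) & (depth_m < max_depth)

    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)

    # Sample a fixed number of points so batches have a consistent shape.
    idx = np.random.choice(len(points),
                           size=min(num_points, len(points)),
                           replace=False)
    return points[idx]
```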
Hi,
Thank you for sharing this paper and repo.
I'm wondering if you have run any experiments where you have trained this model using complete point clouds and evaluated it on incomplete point clouds. I would like to use your work as the backbone for robotic policy learning, but am trying to understand the feasibility of training on fully observed meshes (from a simulator) and evaluating with partially observed scenes (from a depth camera).
Thanks a lot for your help and guidance!