jjpatino-byte opened this issue 5 months ago
Hi! I am wondering how to get the input Octomap (M). Should it be created from the obtained point cloud with respect to the sensor frame, or should it be transformed to the global frame (e.g. the robot base frame)?

Thanks for your question. The point cloud obtained from the sensor should be transformed to the global frame (where Z+ is the normal vector of the horizontal plane, e.g., the tabletop or the ground), since the output from our pre-trained network is constrained to a predefined view space in the global frame.

Given that the point cloud should be transformed to the global frame, does it matter where the object center (which is the same as the hemisphere center) is located with respect to the global frame?
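For reference, the sensor-to-global transform mentioned above is a standard homogeneous transform applied to each point before building the Octomap. A minimal sketch with numpy follows; the matrix `T_global_sensor` here is a made-up example (in practice it would come from your robot's calibration or TF tree), and the function name is hypothetical, not part of this repository's API:

```python
import numpy as np

# Hypothetical 4x4 homogeneous transform from the sensor frame to the
# global frame (where Z+ is the normal of the horizontal support plane).
# This example rotates +90 degrees about X and translates the origin;
# replace it with your actual calibration.
T_global_sensor = np.array([
    [1.0, 0.0,  0.0, 0.1],
    [0.0, 0.0, -1.0, 0.0],
    [0.0, 1.0,  0.0, 0.5],
    [0.0, 0.0,  0.0, 1.0],
])

def to_global_frame(points_sensor, T):
    """Transform an (N, 3) point cloud from the sensor frame to the global frame."""
    n = points_sensor.shape[0]
    homog = np.hstack([points_sensor, np.ones((n, 1))])  # (N, 4) homogeneous coords
    return (T @ homog.T).T[:, :3]                        # back to (N, 3)

# One point 1 m in front of the sensor along its optical axis.
points = np.array([[0.0, 0.0, 1.0]])
points_global = to_global_frame(points, T_global_sensor)
print(points_global)
```

The transformed cloud (now expressed with Z+ as the tabletop/ground normal) is what would then be inserted into the Octomap.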