zju3dv / OnePose_Plus_Plus

Code for "OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models" NeurIPS 2022
Apache License 2.0

Question about object poses in the 1st stage of the pipeline #23

Open jucamohedano opened 1 year ago

jucamohedano commented 1 year ago

Hi!

Thank you for sharing your great work!

I have trouble understanding the first stage (Keypoint-Free Structure from Motion) of your pipeline. Its input is a sequence of images with known object poses, and it produces a point cloud of the object whose pose we wish to estimate. If I were to run this in a robotics scenario, say robot grasping, how would I obtain those "known object poses"? Otherwise, I don't think it would be possible to apply this first stage in a robotics scenario.

Another question I have is whether those known object poses refer to the camera poses used to capture the sequence of images of the object, or to something else.

I hope you can resolve my questions :)

Thank you!

hxy-123 commented 1 year ago

Hi! The first stage of our pipeline is used to obtain an object point cloud from a set of reference images with known poses. Note that "known poses" means the images are registered in a pre-defined object canonical coordinate frame, which usually has recovered real-world scale (e.g., in mm), as in previous object 6DoF pose estimation works. I think you can use our capture app (which will be available soon) to define the object's canonical coordinate frame and capture a reference video (images) for your inference needs in a robotics scenario.
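
To make the role of these known poses concrete, here is a minimal sketch (not the repository's actual code; it assumes 4x4 homogeneous object-to-camera transforms and a toy translation in assumed metre units): points expressed in each camera frame can be mapped back into the shared object canonical frame, which is what anchors the reconstructed point cloud to one metric coordinate system.

```python
import numpy as np

def object_to_camera(T_obj2cam, X_obj):
    """Map a 3D point from the object canonical frame into a camera frame.

    T_obj2cam: 4x4 homogeneous transform for one reference image
               (object canonical frame -> camera frame), i.e. the "known pose".
    X_obj:     (3,) point in the object canonical frame (real scale, e.g. mm).
    """
    X_h = np.append(X_obj, 1.0)               # homogeneous coordinates
    return (T_obj2cam @ X_h)[:3]

def camera_to_object(T_obj2cam, X_cam):
    """Inverse mapping: bring a point expressed in a camera frame back into
    the object canonical frame, so all reference views contribute to one
    consistent, real-scale point cloud."""
    X_h = np.append(X_cam, 1.0)
    return (np.linalg.inv(T_obj2cam) @ X_h)[:3]

# Toy example (assumed units): a camera 0.5 m in front of the object origin.
T = np.eye(4)
T[2, 3] = 0.5
X_obj = np.array([0.01, 0.02, 0.0])            # a point on the object surface
X_cam = object_to_camera(T, X_obj)
assert np.allclose(camera_to_object(T, X_cam), X_obj)
```

This also touches on the earlier question: whether a dataset stores the camera pose in the object frame or the object pose in the camera frame is just a convention, since the two transforms are inverses of each other.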

zhirui-gao commented 1 year ago

Hi! Thank you for your great work! I have a doubt: why do the images need poses as input in the first stage of the pipeline? Structure from motion (e.g., COLMAP) can recover the point cloud and camera poses simultaneously. I am looking forward to your early reply, thanks a lot!
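
For context on this question (this is not an answer from the authors): pose-free SfM such as COLMAP recovers the reconstruction only up to an arbitrary similarity transform, so the result has no fixed metric scale or canonical coordinate frame, which is exactly what the known poses provide per the reply above. Purely as an illustration of that gap, the sketch below aligns an up-to-scale reconstruction to a metric canonical frame with the Umeyama algorithm, assuming 3D-3D correspondences are available; the function name and toy data are hypothetical and unrelated to the OnePose++ codebase.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Estimate the similarity transform (scale s, rotation R, translation t)
    with dst ≈ s * R @ src_i + t, following Umeyama (1991).

    src, dst: (N, 3) corresponding 3D points, e.g. an up-to-scale SfM
    reconstruction vs. the same points in a metric canonical frame."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)           # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # avoid a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Toy check: recover a known scale/rotation/translation from noiseless points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))                # "SfM" points, arbitrary scale
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
pts_metric = 2.5 * pts @ R_true.T + np.array([0.1, -0.2, 0.3])
s, R, t = umeyama_alignment(pts, pts_metric)
assert np.isclose(s, 2.5) and np.allclose(R, R_true) and np.allclose(t, [0.1, -0.2, 0.3])
```

Supplying the reference poses up front sidesteps this alignment step entirely, since the reconstruction is built directly in the object's canonical, real-scale frame.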