mizeller opened this issue 1 year ago
Hi, could you provide more details? The triangulation phase needs images cropped by the bounding box (which is projected from 3D to 2D). Moreover, after the crop, the image intrinsics need to be adjusted accordingly, as in our demo code. Please check that these operations are performed properly. As for Box.txt, which is the output of our capture app: it is converted into the 3D coordinates of the 8 corners of the annotated 3D bounding box. Therefore, I think you can obtain these coordinates directly from Blender and skip the conversion from Box.txt in our demo code with a little bit of hacking :).
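For reference, the crop-and-adjust step described above could look roughly like the sketch below. This is not the actual demo code: the function names (`project_corners`, `crop_and_adjust_K`), the margin, and the matrix conventions (object-to-camera transform, pinhole intrinsics) are my own assumptions.

```python
import numpy as np

def project_corners(corners_obj, T_oc, K):
    """Project 8 object-frame box corners into the image.
    corners_obj: (8, 3) corners in object coordinates.
    T_oc: (4, 4) object-to-camera transform.
    K: (3, 3) pinhole intrinsics.
    Returns (8, 2) pixel coordinates.
    """
    corners_h = np.hstack([corners_obj, np.ones((8, 1))])  # homogeneous coords
    cam = (T_oc @ corners_h.T).T[:, :3]                    # camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                          # perspective divide

def crop_and_adjust_K(image, uv, K, margin=0.1):
    """Crop around the projected 2D box and shift the principal point."""
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    w, h = x1 - x0, y1 - y0
    x0 = int(max(0, x0 - margin * w))
    y0 = int(max(0, y0 - margin * h))
    x1 = int(min(image.shape[1], x1 + margin * w))
    y1 = int(min(image.shape[0], y1 + margin * h))
    crop = image[y0:y1, x0:x1]
    K_new = K.copy()
    K_new[0, 2] -= x0   # cx shifts by the crop offset
    K_new[1, 2] -= y0   # cy shifts by the crop offset
    return crop, K_new
```

Note that if the crop is later resized, fx, fy, cx, cy must all be scaled by the resize factor as well.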
Thank you for the answer. (@mizeller and I are working on the same project.) By directly extracting the transformation matrices ("T_wc" and "T_oc") from Blender and a little bit of hacking, we could crop the images without creating "Box.txt" and "ARposes.txt", and the OnePose++ pipeline worked successfully 😊
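For others trying the same thing, computing the 8 box corners directly instead of going through Box.txt could look something like the sketch below. The function names and the object-to-world convention are my own assumptions, not code from our pipeline; in Blender, `obj.matrix_world` gives an object-to-world transform and `obj.bound_box` gives the 8 local-frame corners, so something equivalent can be pulled straight from `bpy`.

```python
import numpy as np

def box_corners_object(extents):
    """8 corners of an axis-aligned box in the object frame, given (dx, dy, dz)."""
    signs = np.array([[sx, sy, sz]
                      for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                     dtype=float)
    return 0.5 * signs * np.asarray(extents, dtype=float)

def corners_in_world(extents, T_wo):
    """Transform object-frame corners to world coordinates.
    T_wo: (4, 4) object-to-world transform (e.g. Blender's obj.matrix_world).
    """
    c = box_corners_object(extents)
    c_h = np.hstack([c, np.ones((8, 1))])  # homogeneous coords
    return (T_wo @ c_h.T).T[:, :3]
```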
@Maemaemaeko would you kindly share your pipeline for making new objects?
Sure. This is our repository: https://github.com/mizeller/Monocluar-Pose-Estimation-Pipeline-for-Spot. We use regular rotation matrices instead of quaternions.
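For anyone converting between the two representations: ARposes.txt (which, as I understand it, stores orientation as a quaternion) can be turned into a rotation matrix with the standard expansion below. This is a generic sketch, not code from the linked repository, and it assumes (w, x, y, z) component ordering.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)  # normalize first
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

If ordering is (x, y, z, w) instead, the components just need to be permuted before calling this.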
Hello, have you solved the problem with Box.txt? Could you share the relevant solution?
Please refer to our small fix in parse_scanned_data.py. This might help you.
Hey, I'm trying to use OnePose++ to estimate the pose of a novel object for which I created some synthetic data with Blender. I have this data in the standard BOP format, and I converted it to the format used in your demo data. It is almost working, but currently the cropped images are completely wrong, and sometimes the triangulation finds 0 matches, because I reused the Box.txt from your demo data (which obviously makes no sense).
What I couldn't figure out is how to generate the Box.txt file correctly. Any help in that regard would be much appreciated!
Thanks in advance :-)