tteresi7 closed this issue 3 years ago.
Hello @tteresi7!
1) We followed Danielczuk et al. [1] by sampling the camera intrinsics in a range centered on the Photoneo PhoXi S camera. From the experiments in our paper, we can observe that our model generalizes well across various cameras (e.g., the ASUS-PRO Xtion in OCID and the Azure Kinect in the robot experiment).
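For concreteness, here is a minimal sketch of this kind of intrinsics randomization. The nominal focal lengths and principal point below are placeholder values, not the actual PhoXi S calibration, and the ±10% jitter range is an assumption rather than the exact range used in the paper:

```python
import numpy as np

# Nominal pinhole intrinsics of the reference camera.
# NOTE: placeholder values, not the real PhoXi S calibration.
NOMINAL_FX, NOMINAL_FY = 1100.0, 1100.0  # focal lengths [px] (assumed)
NOMINAL_CX, NOMINAL_CY = 320.0, 240.0    # principal point [px] (assumed)

def sample_intrinsics(rng, jitter=0.1):
    """Sample a 3x3 intrinsics matrix K centered on the nominal camera.

    Each parameter is drawn uniformly within +/- `jitter` (10% by default,
    an assumed range) of its nominal value.
    """
    fx = NOMINAL_FX * rng.uniform(1 - jitter, 1 + jitter)
    fy = NOMINAL_FY * rng.uniform(1 - jitter, 1 + jitter)
    cx = NOMINAL_CX * rng.uniform(1 - jitter, 1 + jitter)
    cy = NOMINAL_CY * rng.uniform(1 - jitter, 1 + jitter)
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

rng = np.random.default_rng(seed=0)
K = sample_intrinsics(rng)
```

In BlenderProc 2.x, a sampled matrix like this can then be applied to the virtual camera via `bproc.camera.set_intrinsics_from_K_matrix(K, image_width, image_height)`.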
2) For simulation, we used BlenderProc [2], which supports generating depth images aligned to the RGB frame. Most RGB-D camera ROS drivers also support RGB-D alignment (the aligned depth is typically published on a topic named depth_to_rgb); you can follow the instructions in the Azure_Kinect_ROS_Driver repository.
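As a rough illustration, a node can consume the driver's pre-aligned depth stream by synchronizing it with the RGB stream. The topic names below follow the Azure_Kinect_ROS_Driver defaults and may differ for other drivers:

```python
import rospy
import message_filters
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(rgb_msg, depth_msg):
    # depth_to_rgb is already registered to the RGB camera frame,
    # so the two images are pixel-aligned and can be used together directly.
    rgb = bridge.imgmsg_to_cv2(rgb_msg, desired_encoding="bgr8")
    depth = bridge.imgmsg_to_cv2(depth_msg, desired_encoding="passthrough")
    rospy.loginfo("aligned pair: rgb %s, depth %s", rgb.shape, depth.shape)

rospy.init_node("aligned_rgbd_listener")
# Default Azure Kinect driver topics; adjust for your camera/driver.
rgb_sub = message_filters.Subscriber("/rgb/image_raw", Image)
depth_sub = message_filters.Subscriber("/depth_to_rgb/image_raw", Image)
# Approximate time sync tolerates small timestamp offsets between streams.
sync = message_filters.ApproximateTimeSynchronizer(
    [rgb_sub, depth_sub], queue_size=10, slop=0.05)
sync.registerCallback(callback)
rospy.spin()
```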
[1] Danielczuk, Michael, et al. "Segmenting Unknown 3D Objects from Real Depth Images Using Mask R-CNN Trained on Synthetic Data." 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019.
[2] Denninger, Maximilian, et al. "BlenderProc." arXiv preprint arXiv:1911.01911 (2019).
Thanks so much
First, thanks for your contribution; the paper and method look great.
I was just curious what camera you used for this project, and what method you used to align the depth images with the RGB images?