Tools to create pixel-wise object masks, bounding box labels (2D and 3D), and 3D object models (PLY triangle meshes) for object sequences filmed with an RGB-D camera. This project prepares training and testing data for various deep learning projects, such as 6D object pose estimation (e.g., singleshotpose), as well as object detection and instance segmentation projects.
Hi, I have an issue with the point cloud I created after running register_scene.py.
Here is the model I got:
The RGB frames I used in ./JPEGImages look like this:
The overall texture and everything else seem reasonable. I suspect that some planes are calculated incorrectly when generating the full point cloud, but I can't tell exactly why. Can you please help me work it out? Could it be because I didn't use a color printout of the markers?
My intrinsics.json is as below:
{"fx": 486.93841979, "fy": 491.66013004, "height": 512, "width": 704, "ppy": 271.29438632, "ppx": 333.28631074, "ID": "620201000292", "depth_scale": 0.001}
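As a quick sanity check on these values, the standard pinhole back-projection can be computed by hand: a depth reading at the principal point (`ppx`, `ppy`) should land on the optical axis. The sketch below is a minimal, hypothetical helper (not part of register_scene.py) using only the fields from the intrinsics.json above:

```python
import json

# Intrinsics copied verbatim from intrinsics.json above.
intrinsics = {
    "fx": 486.93841979, "fy": 491.66013004,
    "height": 512, "width": 704,
    "ppy": 271.29438632, "ppx": 333.28631074,
    "depth_scale": 0.001,
}

def deproject(u, v, raw_depth, K):
    """Back-project pixel (u, v) with a raw depth value into camera space (metres).

    Standard pinhole model: z = raw_depth * depth_scale,
    x = (u - ppx) / fx * z, y = (v - ppy) / fy * z.
    """
    z = raw_depth * K["depth_scale"]
    x = (u - K["ppx"]) / K["fx"] * z
    y = (v - K["ppy"]) / K["fy"] * z
    return (x, y, z)

# A pixel exactly at the principal point, raw depth 1000 (i.e. 1 m):
print(deproject(intrinsics["ppx"], intrinsics["ppy"], 1000, intrinsics))
# → (0.0, 0.0, 1.0)
```

If back-projected points from a known flat surface do not lie on a plane, the intrinsics (or the depth_scale) are a likely culprit; if they do, the problem is more likely in marker detection during registration.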