isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

How to extract the coordinate information of each voxel from the TSDF grid integrated through the RGBD images? #6598

Open · OldLiu666 opened this issue 10 months ago

OldLiu666 commented 10 months ago


My Question

First, I used an Intel RealSense D435 camera to capture N sets of RGBD images and successfully ran TSDF integration with Open3D's reconstruction system. My setup is somewhat unusual: the camera is fixed in place while the object moves slowly and uniformly in front of it, and the camera captures RGBD images as the object moves. The reconstruction also produces a trajectory.log file. I modified the reconstruction system code to extract coordinates from the integrated result. My question is: do these extracted coordinates correspond to the first frame? If I want to compute the coordinates for each frame, I assume I need to apply the matrix of that frame from trajectory.log to the first-frame coordinates. Moreover, since in my case the camera is stationary and the object is moving, I assume I need to use the inverse of the matrix from trajectory.log. The code for my coordinate extraction is as follows:

# Imports needed by this snippet. get_rgbd_file_lists and read_rgbd_image
# come from the reconstruction system's helper scripts and are not shown here.
import math
from os.path import join

import numpy as np
import open3d as o3d
import pandas as pd


def scalable_integrate_rgb_frames(path_dataset, intrinsic, config):
    poses = []
    [color_files, depth_files] = get_rgbd_file_lists(path_dataset)
    n_files = len(color_files)
    n_fragments = int(
        math.ceil(float(n_files) / config['n_frames_per_fragment']))
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=config["tsdf_cubic_size"] / 1000.0,
        sdf_trunc=0.04,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    pose_graph_fragment = o3d.io.read_pose_graph(
        join(path_dataset, config["template_refined_posegraph_optimized"]))

    for fragment_id in range(len(pose_graph_fragment.nodes)):
        pose_graph_rgbd = o3d.io.read_pose_graph(
            join(path_dataset,
                 config["template_fragment_posegraph_optimized"] % fragment_id))

        for frame_id in range(len(pose_graph_rgbd.nodes)):
            frame_id_abs = (fragment_id * config['n_frames_per_fragment'] +
                            frame_id)
            print("Fragment %03d / %03d :: integrate rgbd frame %d (%d of %d)."
                  % (fragment_id, n_fragments - 1, frame_id_abs, frame_id + 1,
                     len(pose_graph_rgbd.nodes)))
            rgbd = read_rgbd_image(color_files[frame_id_abs],
                                   depth_files[frame_id_abs], False, config)
            # Camera-to-world pose of this frame: the fragment pose composed
            # with the frame pose inside the fragment.
            pose = np.dot(pose_graph_fragment.nodes[fragment_id].pose,
                          pose_graph_rgbd.nodes[frame_id].pose)
            # integrate() expects the world-to-camera extrinsic, hence inv(pose).
            volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
            poses.append(pose)

    mesh = volume.extract_triangle_mesh()
    mesh.compute_vertex_normals()
    # World-space coordinates of the extracted mesh vertices.
    vertices = np.asarray(mesh.vertices)
    vol_origin = np.array([0, 0, 0])
    voxel_size = config["tsdf_cubic_size"] / 1000.0
    # Map each vertex to the index of the voxel it falls into.
    voxel_coords = get_voxel_coords(vertices, vol_origin, voxel_size)
    combined_coords = np.hstack((vertices, voxel_coords))

    # Save world coordinates and voxel indices side by side as a CSV file.
    df_combined = pd.DataFrame(
        combined_coords,
        columns=['x', 'y', 'z', 'voxel_x', 'voxel_y', 'voxel_z'])
    combined_csv_file = join(path_dataset, "combined_vertices.csv")
    df_combined.to_csv(combined_csv_file, index=False)
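
get_voxel_coords is a small helper of my own and is not part of Open3D; it is not shown above. Assuming it simply bins each world-space point into an integer voxel index relative to vol_origin, with voxel_size as the edge length, a minimal sketch would be:

def get_voxel_coords(points, vol_origin, voxel_size):
    # Assumed definition: bin an (N, 3) array of world coordinates into
    # integer voxel indices of edge length voxel_size, relative to vol_origin.
    return np.floor((points - vol_origin) / voxel_size).astype(int)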
OldLiu666 commented 10 months ago

My setup is unusual in that the camera remains stationary while the object moves, and I have not found similar issues or code examples on GitHub. Any assistance would be greatly appreciated.
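
Concretely, the per-frame calculation I have in mind is sketched below. It assumes the extracted vertices are in world coordinates (which coincide with the first camera frame when the first pose in the trajectory is the identity) and that poses is the list of camera-to-world matrices collected during integration above; applying the inverse of a frame's pose then maps the vertices into that frame's coordinate system. vertices_in_frame is a name introduced only for this sketch.

def vertices_in_frame(vertices_world, pose_cam_to_world):
    # Map (N, 3) world-space vertices into the coordinate frame of one
    # trajectory entry. pose_cam_to_world is a 4x4 camera-to-world matrix
    # (one element of the poses list); its inverse is world-to-camera.
    extrinsic = np.linalg.inv(pose_cam_to_world)
    verts_h = np.hstack([vertices_world, np.ones((len(vertices_world), 1))])
    return (extrinsic @ verts_h.T).T[:, :3]

# Example: coordinates of the extracted vertices expressed in frame k.
# vertices_k = vertices_in_frame(vertices, poses[k])

Is this the right way to interpret the poses in trajectory.log for my stationary-camera, moving-object case?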