Open LinLin1031 opened 11 months ago
May I ask if you have solved the problem? I also don't know how to project the 3D points into the 2D image. Could you share the method with me?
I am also facing this problem. May I ask how to build the pose matrix from "pose.json" so that the point cloud can be transformed to global coordinates?
Following the equation in #16, I successfully constructed the rotation matrix to align the point cloud generated from the images in the data folder, using "final_camera_rotation" and "camera_rt_matrix". However, I still cannot find the translation vector. I tried "camera_location" and "camera_rt_matrix", but the result is off. Could anyone tell me how to find the translation vector? Thanks a lot!
I solved this problem! The depth is saved as a 16-bit image and should be divided by 512 before unprojecting. I also summarized some common issues when mapping 2D pixels to the 3D global coordinates.
Hope this helps! And many thanks to the team for the great work constructing this dataset!
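For reference, here is a minimal unprojection sketch following the comment above (the /512 scale is from the fix described there; the intrinsic matrix `K` and the function name are illustrative assumptions, not part of the dataset's official tooling):

```python
import numpy as np

def unproject_depth(depth_u16, K):
    """Unproject a 16-bit depth map into camera-frame 3D points.

    depth_u16: (H, W) uint16 array as read from the depth PNG.
    K: (3, 3) pinhole camera intrinsic matrix.
    """
    depth = depth_u16.astype(np.float64) / 512.0  # 16-bit depth -> meters
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T           # back-project pixels to rays
    pts = rays * depth.reshape(-1, 1)         # scale each ray by its depth
    valid = depth.reshape(-1) > 0             # drop missing-depth pixels
    return pts[valid]
```

The resulting points are in the camera frame; the pose from "pose.json" still has to be applied to move them into the global frame.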
Thanks!
I am facing issues with some scenes in area_5. Specifically, in area_5a, office_20, I find that some images are aligned correctly with the 3D point cloud while others are off by a 90-degree rotation. I am using the files from the data folder.
S3DIS Pointcloud
Unprojected Pointcloud
@Geniusly-Stupid could you please check if it is ok on your side?
Also, for the poses, I find that the following gives exactly the same result as the equation described in the comment above:
import numpy as np

camera_rt_matrix = data["camera_rt_matrix"]     # 3x4 world-to-camera [R|t]
camera_rt_matrix.append([0, 0, 0, 1])           # promote to 4x4 homogeneous
camera_rt_matrix = np.linalg.inv(np.array(camera_rt_matrix))  # camera-to-world
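Putting the pieces together, a sketch of applying the inverted pose to camera-frame points via homogeneous coordinates (the function name is mine; `camera_rt_matrix` is assumed to be the 3x4 world-to-camera [R|t] list from "pose.json"):

```python
import numpy as np

def camera_to_global(points_cam, camera_rt_matrix):
    """Map (N, 3) camera-frame points into the global frame.

    camera_rt_matrix: 3x4 world-to-camera [R|t] rows from pose.json.
    """
    rt = np.vstack([np.asarray(camera_rt_matrix, dtype=np.float64),
                    [0, 0, 0, 1]])            # promote to 4x4 homogeneous
    cam_to_world = np.linalg.inv(rt)          # invert world->camera pose
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homo @ cam_to_world.T)[:, :3]
```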
Other bug reports suggest the following correction (a 90-degree rotation about the vertical axis plus a translation offset):
camera_rt_matrix = np.array([
[0, 1, 0, -4.10],
[-1, 0, 0, 6.25],
[0, 0, 1, 0.0],
[0, 0, 0, 1]
]) @ camera_rt_matrix
Hi @ayushjain1144, I'm also facing the same problem. Have you found a solution yet?
Hi, not really. My understanding is that it's not an alignment issue; rather, the depth/color images in the raw folder do not exhaustively cover the entire room. One hack that helps is to look through the images of the other rooms in a given area, check whether any of them overlap with the current room, and include those as well. (For that, I unproject each image from the other rooms and check whether any of its points lie very close to the provided S3DIS point cloud; if so, I add that image to the current room.) I haven't tested this exhaustively, so I'm not sure how helpful it is.
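A rough sketch of the overlap check described above, assuming SciPy's `cKDTree` for the nearest-neighbor queries; the function name and both thresholds are illustrative, not values from the original comment:

```python
import numpy as np
from scipy.spatial import cKDTree

def image_overlaps_room(image_points, room_points,
                        dist_thresh=0.05, min_frac=0.1):
    """Heuristic: does an unprojected image overlap a room's point cloud?

    image_points: (N, 3) points unprojected from the candidate image.
    room_points:  (M, 3) S3DIS point cloud of the room.
    Returns True if at least `min_frac` of the image points lie within
    `dist_thresh` (in the point cloud's units) of some room point.
    """
    tree = cKDTree(room_points)
    dists, _ = tree.query(image_points, k=1)
    return bool(np.mean(dists < dist_thresh) >= min_frac)
```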
Hi @ayushjain1144 , did you use the V1.2_aligned version or the V1.2 version of the 3D point cloud?
The normal one (un-aligned)
Could you tell me how to find the coordinate relationship between a 2D image and the 3D point cloud using the camera pose?