gulvarol / surreal

Learning from Synthetic Humans, CVPR 2017
http://www.di.ens.fr/willow/research/surreal

How to get 3D coordinates from depth maps #27

Open cherryjm opened 5 years ago

cherryjm commented 5 years ago

Hi, thank you so much for this dataset! But I have a question: I want to transform the depth maps into 3D point clouds, i.e. compute the 3D coordinates from the depth maps. I tried to invert the process in https://github.com/gulvarol/surreal/tree/master/datageneration/misc/3Dto2D but the result seems wrong, especially for the x coordinates. The image below shows some of my results; joint3D is calculated from the depth maps and real3D is the ground truth. [image]

How can I get the right 3D coordinates from depth maps? Thanks a lot!
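For anyone reading along, the inversion being asked about is standard pinhole back-projection. A minimal sketch, assuming metric depth along the camera z-axis and the fx = fy = 600, ux = 160, uy = 120 intrinsics quoted later in this thread for SURREAL's 320x240 renders (`depth_to_cloud` is a hypothetical helper name, not from the repo):

```python
import numpy as np

def depth_to_cloud(depth, fx, fy, ux, uy):
    # Back-project an (H, W) depth map into an (H*W, 3) point cloud in the
    # camera frame, using the standard pinhole model:
    #   X = (u - ux) * Z / fx,  Y = (v - uy) * Z / fy
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    x = (u - ux) * depth / fx
    y = (v - uy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# toy check: a flat wall 2 m in front of a 320x240 camera
cloud = depth_to_cloud(np.full((240, 320), 2.0),
                       fx=600.0, fy=600.0, ux=160.0, uy=120.0)
```

Note that this yields points in the camera frame; getting world coordinates additionally requires undoing the extrinsics, which is where the x-coordinate discrepancies reported above can creep in.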

cherryjm commented 5 years ago

[image] This is the 3D-to-2D function you provide. Below is the inverse function I wrote.

[image] I wrote this function to transform a depth map into a point cloud, but the result seems wrong. Could you please tell me what's wrong with it? Thanks a lot! @gulvarol

gulvarol commented 5 years ago

Sorry, I currently don't have time to check this. Unless you need this conversion for a particular reason, you can also generate the point cloud from the SMPL parameters.

cherryjm commented 5 years ago

Hi @gulvarol! Sorry to bother you again, but I want to know whether the depth maps are ground truth.

def project_vertices(points, intrinsic, extrinsic):
    homo_coords = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1).transpose()  # (4, N) homogeneous
    proj_coords = np.dot(intrinsic, np.dot(extrinsic, homo_coords))  # (3, N)
    proj_coords = proj_coords / proj_coords[2]  # perspective divide by depth
    proj_coords = proj_coords[:2].transpose()   # (N, 2) pixel coordinates
    return proj_coords

Before the perspective divide, proj_coords[2] holds the depth value of the corresponding pixel. I compared it with the corresponding element of the depth map and found they are not equal. Did I make a mistake? Or is there a problem with the depth maps? How do you generate the depth maps? (Are they ground truth?)

legoc1986 commented 4 years ago

@cherry-ing Hi, sorry to ask, but did you solve the problem? I am interested in the same question. Thanks!

hanabi7 commented 4 years ago

I have the same problem when trying to reconstruct the point clouds from the depth images. The given depth values don't seem to reconstruct the corresponding point clouds. Can you tell me how you generate the depth values of the depth images? It would be of great help.

cherryjm commented 4 years ago

Hi @legoc1986 @hanabi7, I have solved this problem. The error came from a wrong projection process. I now transform the joint coordinates into the camera coordinate system using the extrinsic parameters, and project the depth image to a point cloud in the camera coordinate system as well; the error between them is negligible. Here is the transform function:

def pixel2world(x, fx, fy, ux, uy):
    x[:, :, 0] = (x[:, :, 0] - ux) * x[:, :, 2] / fx
    x[:, :, 1] = (x[:, :, 1] - uy) * x[:, :, 2] / fy
    return x

joint_cam = np.dot(R, joint_wrd) + T
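To apply that pixel2world function to a whole depth map, one can build an (H, W, 3) stack of (u, v, depth). A minimal usage sketch (the dummy depth map and the fx = fy = 600, ux = 160, uy = 120 intrinsics are placeholders; note that the function modifies its input in place):

```python
import numpy as np

def pixel2world(x, fx, fy, ux, uy):
    # unproject (u, v, z) -> (X, Y, z) in the camera frame, in place
    x[:, :, 0] = (x[:, :, 0] - ux) * x[:, :, 2] / fx
    x[:, :, 1] = (x[:, :, 1] - uy) * x[:, :, 2] / fy
    return x

# build an (H, W, 3) array of (u, v, depth) and unproject it
depth = np.full((240, 320), 7.0)             # dummy depth map, metres
v, u = np.mgrid[0:240, 0:320].astype(np.float64)
cloud = pixel2world(np.stack([u, v, depth], axis=-1),
                    600.0, 600.0, 160.0, 120.0)
# cloud[v, u] is now the camera-frame 3D point seen at pixel (u, v)
```

Because the function writes into its argument, pass a copy if the original (u, v, z) array is still needed afterwards.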

hanabi7 commented 4 years ago

Can you tell me what parameters you used in the point cloud generation? I set ux = 160, uy = 120, fx = fy = 600, but the result seems to be wrong. I wonder what your settings are.

hanabi7 commented 4 years ago

I got the settings from the intrinsic parameters. I tested them with joints2D and successfully transformed them into joints3D, but I still can't generate the point cloud from the depth image correctly.

cherryjm commented 4 years ago

Actually, it is not possible to get exactly the same joint coordinates as the GT using the 2D coordinates and the corresponding depth value. I tested this before and found around 100 mm of projection error. Here is my code that transforms the joints to uvz coordinates.
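The world2pixel function used in the snippet below is not shown anywhere in the thread; a minimal sketch, assuming it is simply the inverse of the pixel2world posted earlier, taking (N, 3) camera-frame points and returning (u, v, z) per joint:

```python
import numpy as np

def world2pixel(x, fx, fy, ux, uy):
    # project (X, Y, Z) camera-frame points to (u, v, z) pixel coordinates
    out = np.empty_like(x, dtype=np.float64)
    out[:, 0] = x[:, 0] * fx / x[:, 2] + ux  # u = fx * X / Z + ux
    out[:, 1] = x[:, 1] * fy / x[:, 2] + uy  # v = fy * Y / Z + uy
    out[:, 2] = x[:, 2]                      # keep depth for comparison
    return out

# round trip for one point 7 m in front of the camera
uvz = world2pixel(np.array([[0.5, -0.2, 7.0]]),
                  600.0, 600.0, 160.0, 120.0)
```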

import os.path as osp
import numpy as np
from scipy.io import loadmat

clip = './SURREAL/train/run2/ung_143_20/ung_143_20_c0001'
depth_path = osp.join(seq_dir, '%s_depth.mat' % clip)
info_path = osp.join(seq_dir, '%s_info.mat' % clip)

depth_mat = loadmat(depth_path)
info_mat = loadmat(info_path)
camLoc = info_mat['camLoc']
R, T = getExtrinsicBlender(camLoc)
joints3D = np.array(info_mat['joints3D'], dtype=np.float32)
joints2D = np.array(info_mat['joints2D'], dtype=np.float32)

frame = 0
joint_wrd = joints3D[:, :, frame]  # size = 3 * 24
joint_cam = np.dot(R, joint_wrd) + T
joint_cam = np.asarray(joint_cam.T, dtype=np.float32)  # size = 24 * 3
joint_uvz_0 = world2pixel(joint_cam, fx, fy, ux, uy)

joint_img = joints2D[:, :, frame].T  # size = 24 * 2
depth = depth_mat['depth_%d' % (frame + 1)]  # size = 240 * 320
depth = np.array(depth, dtype=np.float32)
joint_dep = np.zeros((24, 1))
for i in range(24):
    joint_dep[i, 0] = depth[int(joint_img[i, 1]), int(joint_img[i, 0])]
joint_uvz_1 = np.concatenate([joint_img, joint_dep], axis=1)

for i in range(24):
    print(joint_uvz_0[i], joint_uvz_1[i])

The outputs are:

[167.68698 117.64172 7.0783534] [168. 118. 6.90605021]
[165.18924 125.439285 7.1470485] [165. 125. 6.9035368]
[169.31161 125.63664 7.0103216] [169. 126. 6.89020109]
[170.67014 108.776596 7.0874996] [171. 109. 6.81586075]
[159.09073 153.66634 7.1892943] [159. 154. 7.14086771]
[165.52849 155.60997 6.9424872] [166. 156. 6.92339468]
[170.92065 97.36165 7.0938983] [171. 97. 6.86561155]
[166.04419 182.59698 7.247515] [166. 183. 7.2692976]
[172.38426 186.2471 6.9712205] [172. 186. 6.97018576]
[167.12862 93.93854 7.080801] [167. 94. 6.92030621]
[156.54616 186.69728 7.251212] [157. 187. 7.25077009]
[165.74718 189.65552 6.8775444] [166. 190. 6.88435936]
[169.14722 75.64205 7.0963736] [169. 76. 7.0234704]
[166.7689 84.02794 7.1664724] [167. 84. 6.94769287]
[171.39734 82.87882 7.0186005] [171. 83. 6.89617825]
[164.84798 70.53702 7.0912404] [165. 71. 7.05881977]
[164.46712 87.58613 7.2442036] [164. 88. 6.95024014]
[173.19832 84.69912 6.929118] [173. 85. 6.88130617]
[164.25957 106.75889 7.312258] [164. 107. 6.94491911]
[175.85 103.85176 6.863737] [176. 104. 6.83457756]
[156.72845 123.494095 7.291042] [157. 123. 6.83838511]
[162.93686 119.341675 6.8429003] [163. 119. 6.82275772]
[156.01503 130.04028 7.291074] [156. 130. 7.10060978]
[159.02702 125.06448 6.850731] [159. 125. 6.8457365]

hanabi7 commented 4 years ago

I have already solved the problem. There were some errors in the depth values of my depth image, which caused my bug. But I know how your projection error occurs. The depth value in the depth image is the depth of the corresponding surface point. When you look up the depth value at a 2D joint location, what you really get is the surface depth rather than the joint depth, since joints lie inside the human body model. For instance, you get about 0.1 m of error on joint 0 (the belly), because the belly joint sits in the middle of the body and the body is about 0.2 m thick there. For joint 23 (the hand), the last joint, the error is really small, because the hand is thin.

hanabi7 commented 4 years ago

A picture of my reconstruction: [image: final_compare]

QinTW commented 4 years ago

@hanabi7 Hi, I'm sorry to bother you, but I have the same problem when trying to reconstruct the point clouds from the depth image. I see that you have solved it; could you tell me the specific method or share the source code? I would be very grateful.

axhiao commented 4 years ago

@hanabi7 could you provide demo code? I'd really appreciate it!

xiaofanglegoc commented 3 years ago

Hi @gulvarol, I found that when reconstructing the depth image into the world coordinate system, the output is not aligned with the SMPL vertices. For example, for file run0/27_08/c0002_info.mat, frame 33, blue is the SMPL output vertices and red is the reconstructed point cloud. [image]

But if I convert the SMPL vertices and the point cloud to the camera coordinate system, they are aligned: [image]
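The alignment step described above, bringing world-frame points into the camera frame with the extrinsics, can be sketched as follows (`world_to_camera` is a hypothetical helper; in practice R and T would come from getExtrinsicBlender in smpl_relations.py):

```python
import numpy as np

def world_to_camera(points, R, T):
    # points: (N, 3) world-frame coords; returns (N, 3) camera-frame coords
    # X_cam = R @ X_world + T, applied row-wise
    return points @ R.T + np.asarray(T).reshape(1, 3)

# sanity check: identity extrinsics leave the points unchanged
pts = np.array([[0.1, 0.2, 3.0]])
cam = world_to_camera(pts, np.eye(3), np.zeros(3))
```

Applying the same R and T to both the SMPL vertices and the back-projected depth cloud is what makes the two clouds overlap, as in the second image.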

Could you please share some details on this? My results are based on the code you provided here: https://github.com/gulvarol/surreal/blob/master/datageneration/misc/smpl_relations/smpl_relations.py