Waley-Z / ios-depth-point-cloud

Using ARKit and LiDAR to save depth data and export point cloud, based on WWDC20-10611 sample code

Issue with JSON File Processing and Point Cloud Generation #2

Closed masayaando-lefixea closed 7 months ago

masayaando-lefixea commented 1 year ago

Hello,

I am trying to generate a point cloud from the JSON files produced by your code. I attempted the coordinate transformations in Python, but the result is not what I expected. Could you verify whether my calculation logic is correct?

Here is the source code I used:

import json
import numpy as np

# read file
with open('178537.2274115_5.json', 'r') as f:
    data = json.load(f)

    # read depth map
    depth_map = np.array(data['depthMap'])

    # read camera params
    camera_intrinsic_inv = np.array(data['cameraIntrinsicsInversed']).squeeze()

    # convert to camera
    points = []
    for i in range(len(depth_map)):
        for j in range(len(depth_map[0])):
            depth = depth_map[i][j]
            point = np.dot(np.array([i,j, 1]), camera_intrinsic_inv) * depth
            points.append(point)

    # convert to world
    camera_transform = np.array(data['cameraTransform']).squeeze()
    points_world = []
    for point in points:
        point_world = np.dot(camera_transform, np.append(point, 1))
        points_world.append(point_world)

    # save
    with open('pointCloud.xyz', 'w') as f:
        for point in points_world:
            f.write(f'{point[0]} {point[1]} {point[2]}\n')

Any help would be greatly appreciated.

Thank you.

LeyangWen commented 1 year ago

Hi,

I did something similar, but with only 8 keypoints per frame. I used the LocalToWorld value for the camera projection and it works for me. Here is my implementation: PhoneLidar.project_kps
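For reference, the depth-to-world back-projection discussed in this thread can be sketched as follows. This is a minimal sketch, not the repository's exporter logic: it assumes the JSON layout from the question (a per-pixel depth map, an inverse-intrinsics matrix, and a 4x4 camera-to-world transform), that pixel coordinates are ordered (column, row) when multiplied by the inverse intrinsics, and that the camera space follows ARKit's convention of +x right, +y up, -z forward, which requires flipping the y and z axes before applying the transform. Those conventions should be checked against the actual exporter.

```python
import numpy as np

def depth_to_world(depth_map, K_inv, camera_transform):
    """Back-project a depth map to world-space points.

    depth_map: (H, W) array of depths in meters.
    K_inv: (3, 3) inverse camera intrinsics.
    camera_transform: (4, 4) camera-to-world matrix.
    """
    h, w = depth_map.shape
    # Pixel grid: u is the column index, v is the row index.
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    # Apply K_inv on the LEFT of each pixel vector (pix @ K_inv.T == (K_inv @ pix.T).T),
    # then scale each ray by its depth to get camera-space points.
    rays = pix @ K_inv.T
    cam_points = rays * depth_map.reshape(-1, 1)
    # Assumption: ARKit camera space is +x right, +y up, -z forward,
    # so flip y and z from the pinhole (z-forward) convention.
    cam_points[:, 1:] *= -1.0
    # Homogeneous coordinates, then camera-to-world.
    cam_h = np.concatenate([cam_points, np.ones((cam_points.shape[0], 1))], axis=1)
    world = cam_h @ camera_transform.T
    return world[:, :3]
```

Note the two differences from the question's loop: the inverse intrinsics multiply the pixel vector from the left rather than the right, and the pixel vector uses (column, row) ordering rather than (row, column).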

masayaando-lefixea commented 1 year ago

Thank you. I'll check out your project.