alievk / npbg

Neural Point-Based Graphics

How to fit on a ScanNet scene? #2

Closed BostonLobster closed 3 years ago

BostonLobster commented 4 years ago

Each ScanNet scene contains RGB-D images, so I can project the 2D pixels into a 3D point cloud and save it to a .ply file (rough sketch below). But how do I modify path_example.yaml and train_example.yaml to fit the descriptors on this .ply file?
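For concreteness, the projection step I have in mind looks roughly like this. It's a minimal sketch assuming the standard ScanNet export layout (`depth/*.png` as 16-bit millimeters, `pose/*.txt` with 4x4 camera-to-world matrices, `intrinsic/intrinsic_depth.txt`); all paths, the frame range, and the subsampling step are illustrative:

```python
# Back-project ScanNet RGB-D frames into one colored point cloud, save as .ply.
import numpy as np
import imageio.v2 as imageio
from PIL import Image
import open3d as o3d

K = np.loadtxt('scene/intrinsic/intrinsic_depth.txt')[:3, :3]
points, colors = [], []

for i in range(0, 1000, 20):                   # subsample frames (illustrative)
    pose = np.loadtxt(f'scene/pose/{i}.txt')   # 4x4 camera-to-world
    if not np.isfinite(pose).all():            # ScanNet marks invalid poses with -inf
        continue
    depth = imageio.imread(f'scene/depth/{i}.png').astype(np.float32) / 1000.0  # mm -> m
    h, w = depth.shape
    # ScanNet color frames are larger than depth frames; resize to match
    color = np.asarray(Image.open(f'scene/color/{i}.jpg').resize((w, h)))

    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])[:, valid]
    points.append((pose @ pts_cam)[:3].T)      # to world coordinates
    colors.append(color.reshape(-1, 3)[valid] / 255.0)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.concatenate(points))
pcd.colors = o3d.utility.Vector3dVector(np.concatenate(colors))
o3d.io.write_point_cloud('scene/point_cloud.ply', pcd)
```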

Any guidelines or suggestions?

seva100 commented 4 years ago

Most likely, you will only need to modify path_example.yaml. The syntax is:

datasets:
    "your_scene_name":
        scene_path: your_scene/your_scene_config.yaml    # path to your scene config -- needs to be filled in separately
        target_path: your_scene/images_undistorted    # path to undistorted images of your scene
        target_name_func: "lambda i: f'{i}.png'"    # lambda which defines the file name format for ground truth picture #i in target_path

You'll also need to create your_scene/your_scene_config.yaml mentioned above. The structure is:

viewport_size: [2160, 3840]    # width, height of your target images
intrinsic_matrix: path/to/scene/intrinsic.txt
view_matrix: path/to/scene/view_matrices.txt
pointcloud: path/to/scene/point_cloud.ply
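If your view matrices come from ScanNet's per-frame pose files, collecting them into one file might look like the sketch below. I'm not certain of the exact on-disk layout npbg expects for view_matrices.txt (check it against the sample data); the one-flattened-matrix-per-row format here is an assumption:

```python
# Sketch: stack ScanNet per-frame poses into a single view_matrices.txt.
# Assumption: one flattened 4x4 camera-to-world matrix (16 numbers) per row;
# verify the expected layout against the sample data shipped with the repo.
import numpy as np

n_frames = 1000  # illustrative
mats = [np.loadtxt(f'scene/pose/{i}.txt').reshape(-1) for i in range(n_frames)]
np.savetxt('scene/view_matrices.txt', np.stack(mats))
```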

You can look at some examples by downloading the sample data provided with the repository. Hope this helps.

Larry-u commented 4 years ago

@seva100 Thanks for your reply! I've figured out how to do that now. But I have another question: according to the README, metashape_build_cloud.py produces a point_cloud.obj, but in the sample data the point cloud file is pointcloud.ply. Is there any script to convert between them?

seva100 commented 4 years ago

@Larry-u actually, metashape_build_cloud.py should produce a .ply file (see line 83), so it should work out of the box.

I understand it was confusing, since the README said the script produces an .obj file; I've just fixed that in the README.
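As an aside, if you ever do end up with an .obj point set, a one-off conversion is easy, e.g. with open3d (a sketch, not a script from this repo; vertex colors may need separate handling):

```python
# Read the .obj as a mesh and keep only its vertices as a point cloud.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh('point_cloud.obj')
pcd = o3d.geometry.PointCloud(mesh.vertices)
o3d.io.write_point_cloud('point_cloud.ply', pcd)
```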