autonomousvision / differentiable_volumetric_rendering

This repository contains the code for the CVPR 2020 paper "Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision"
http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf
MIT License

how to create `pointcloud.npz` file? #28

Closed jay-thakur closed 4 years ago

jay-thakur commented 4 years ago

Hi,

I am trying to create pointcloud.npz for my own data. When I load it, I can see it contains points, normals, loc & scale.

import numpy as np

point_cloud_data = np.load("pointcloud.npz", allow_pickle=True)
for point_cloud_data_content in point_cloud_data.files:
    print(point_cloud_data_content, np.shape(point_cloud_data[point_cloud_data_content]))

output -

points (100000, 3)
normals (100000, 3)
loc (3,)
scale ()

Could you please explain these fields and how to create the pointcloud.npz file?

Thanks, Jay

m-niemeyer commented 4 years ago

Hi @jay-thakur , thanks for your interest in the project!

First, we use the pointcloud.npz files only for evaluation. We compare samples from our predicted mesh against these ground truth point clouds and report the Chamfer distance in the main paper.
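For illustration, the Chamfer comparison can be sketched in a few lines of NumPy. This is a simplified sketch, not the repository's actual evaluation code, and `chamfer_l1` is a hypothetical helper name:

```python
import numpy as np

def chamfer_l1(pred, gt):
    """Symmetric Chamfer distance between point sets pred (N, 3) and gt (M, 3).

    Simplified sketch: brute-force pairwise distances, fine for small N*M;
    a KD-tree is preferable for large point clouds.
    """
    # d[i, j] = Euclidean distance between pred[i] and gt[j]
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    # accuracy term (pred -> gt) plus completeness term (gt -> pred)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```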

We created them by

  1. sampling 100k points from the GT mesh and saving them under "points",
  2. getting the corresponding normal vectors from the GT mesh and saving them under "normals".
  3. Now loc and scale are a little different. The GT ShapeNet models are not always exactly in the unit cube, but for training we want all models to be exactly in the unit cube. We therefore save the mid point of the GT mesh's bounding box as "loc" and the longest side length of the bounding box as "scale". We use these values to scale the mesh vertices into the unit cube via v_scaled = (v - loc) / scale, so the scaled mesh is fully contained in the unit cube centred at the origin. The sampled points from 1. are already sampled from the scaled mesh.
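The steps above could be sketched roughly as follows in plain NumPy. This is a hypothetical helper (`make_pointcloud_npz` is not part of the repository), assuming the GT mesh is given as `vertices`/`faces` arrays; the actual preprocessing code may differ:

```python
import numpy as np

def make_pointcloud_npz(vertices, faces, n_points=100000, path="pointcloud.npz"):
    """Sketch: sample points + normals from a GT mesh and save them.

    vertices: (V, 3) float array, faces: (F, 3) int array.
    """
    # 3. loc = bounding-box mid point, scale = longest bounding-box side
    bb_min, bb_max = vertices.min(axis=0), vertices.max(axis=0)
    loc = (bb_min + bb_max) / 2.0
    scale = (bb_max - bb_min).max()
    v = (vertices - loc) / scale          # v_scaled = (v - loc) / scale

    # 1. sample faces proportionally to area, then uniform barycentric
    #    coordinates within each sampled face
    tri = v[faces]                        # (F, 3, 3) triangle vertices
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = np.linalg.norm(cross, axis=1) / 2.0
    idx = np.random.choice(len(faces), n_points, p=area / area.sum())
    r1, r2 = np.random.rand(2, n_points, 1)
    b0 = 1.0 - np.sqrt(r1)                # uniform barycentric weights
    b1 = np.sqrt(r1) * (1.0 - r2)
    b2 = np.sqrt(r1) * r2
    points = b0 * tri[idx, 0] + b1 * tri[idx, 1] + b2 * tri[idx, 2]

    # 2. normals = unit face normals of the sampled faces
    normals = cross[idx] / np.linalg.norm(cross[idx], axis=1, keepdims=True)

    np.savez(path,
             points=points.astype(np.float32),
             normals=normals.astype(np.float32),
             loc=loc.astype(np.float32),
             scale=np.float32(scale))
```

In practice a mesh library such as trimesh can replace the manual surface sampling; the point is that "points"/"normals" live in the normalised (unit-cube) frame, while "loc"/"scale" record how to undo that normalisation.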

I hope this helps a little.