autonomousvision / convolutional_occupancy_networks

[ECCV'20] Convolutional Occupancy Networks
https://pengsongyou.github.io/conv_onet
MIT License

Custom Dataset #32

Closed kajal-puri closed 3 years ago

kajal-puri commented 3 years ago

Thanks for open sourcing the code of ConvONet. This has been very helpful.

Regarding custom datasets: if I want to run the pre-trained ConvONet models on a custom dataset, which format should it be in?

Currently, I have point clouds (.ply files) of multiple buildings/indoor scenes. Each point cloud is about 500 MB. Should I slice them into smaller point clouds, or is running on a large one fine too? In the demo, the input point clouds are much smaller (around 2 MB).

Please correct me if I'm wrong, but will the network output the corresponding meshes of the point clouds as well as a reconstruction of the whole point cloud (without noise)?

Thank you.

pengsongyou commented 3 years ago

Hi @kajal-puri

Sorry for the late reply and thanks for the interest!

Have you tried running the Matterport3D demo code with `python generate.py configs/pointcloud_crop/demo_matterport.yaml`? We are able to handle the reconstruction of an entire building with multiple rooms. There are a few things to consider:

  1. Does your point cloud have a real-world metric (i.e. meters)? If so, you could just use the same `unit_size: 0.02` as in the Matterport3D experiment. If not, you need to find a proper scale for your point cloud.
  2. What is the density of your point cloud? Since our model is trained on a specific point-cloud density (check here, roughly 200k points for one 3-floor building), you might want to downsample/upsample your point cloud to match that density, or your results might not be decent (see the sketch below this list).
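For the density check in point 2, a minimal sketch (the file name is hypothetical, and any point-cloud library such as Open3D would work equally well):

```python
import numpy as np
import trimesh

# Hypothetical input path; replace with your own scan.
pc = trimesh.load('my_building.ply')
points = np.asarray(pc.vertices)
print('number of points:', len(points))

# Rough density estimate: points per cubic unit of the bounding box
# (only meaningful if the cloud is in real-world metric units).
extent = points.max(axis=0) - points.min(axis=0)
print('points per unit volume (approx.):', len(points) / np.prod(extent))

# Match the training density: roughly 200k points per 3-floor building.
target = 200_000
if len(points) > target:
    # Downsample by random selection without replacement.
    idx = np.random.choice(len(points), target, replace=False)
    points = points[idx]
```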

Also, the output of the network is only the reconstructed meshes; there is no point cloud output. Nevertheless, once you have the meshes, you can simply sample points uniformly from them using Trimesh, for example as sketched below.
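A short sketch of uniform surface sampling with Trimesh (file names are placeholders):

```python
import trimesh

# Load a mesh produced by generate.py (placeholder file name).
mesh = trimesh.load('reconstructed_scene.off')

# Uniformly sample 100k points on the mesh surface.
points, face_idx = trimesh.sample.sample_surface(mesh, 100_000)

# Optionally export the samples as a point cloud.
trimesh.PointCloud(points).export('sampled_pointcloud.ply')
```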

Best, Songyou

kajal-puri commented 3 years ago

Thanks for the response @pengsongyou

Yes, I have run the code and reproduced the results using demo_matterport.yaml.

  1. Yes, my point clouds have a real-world metric (in cm), so I will do as you have suggested.
  2. I am not sure how to calculate the density of my point cloud (it is not known to me at this point). Do you have a particular way to calculate it? Any related suggestions would be great. After checking the density, I will down/upsample it to match your numbers.

One more question: I have read in another open issue that custom datasets also need normals and occupancies. I already have "pointcloud.npz", but I would have to generate "points.npz", i.e. the occupancies. I'm referring to the code here, but I have point clouds (not meshes), and that code seems to need a mesh as input (not a point cloud). Do you have any methodology that can generate occupancies for point clouds as well? Let me know if I'm going in the right direction.

pengsongyou commented 3 years ago

@kajal-puri

If you only want to use a pretrained ConvONet model to reconstruct from point clouds with generate.py, there is no need for points.npz; pointcloud.npz is enough, if I remember correctly. If that does not work, just copy any points.npz file from another processed dataset provided in this repo (like ShapeNet) and place it in your custom dataset.
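In case you still need to convert your .ply into a pointcloud.npz, a rough sketch; the keys 'points' and 'normals' are an assumption based on the repo's processed data, so double-check against the dataset loading code for your config:

```python
import numpy as np
import trimesh

# Hypothetical conversion of a custom .ply into pointcloud.npz.
# Assumed keys: 'points' and 'normals' (verify against the repo's
# dataset loading code for the exact layout your config expects).
pc = trimesh.load('my_scene.ply')
points = np.asarray(pc.vertices, dtype=np.float32)

# If no normals are stored in the .ply, estimate them (e.g. with
# Open3D) or fall back to zeros if your config does not use them.
normals = np.zeros_like(points)

np.savez('pointcloud.npz', points=points, normals=normals)
```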

Hope it helps.

Best, Songyou