Closed · evenhax closed this issue 3 years ago
You should provide the point cloud, the corresponding camera parameters, and RGB images for training. Our implementation is based on ScanNet and Matterport3D, so you can provide camera parameters in the same format as those two datasets, and make sure your point cloud is associated with your camera parameters. More information can be found in ‘pre_processing/voxelizationaggregation*.py’.
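To check that a custom point cloud is actually associated with its camera parameters, a common sanity test is to project the points into each frame. A minimal sketch, assuming ScanNet-style conventions (a 4x4 camera-to-world pose per frame and a 3x3 intrinsic matrix); the function name and exact pose convention are illustrative, not part of this repo:

```python
import numpy as np

def project_points(points_world, pose_c2w, K):
    """Project Nx3 world-space points into pixel coordinates.

    pose_c2w: 4x4 camera-to-world pose (ScanNet-style per-frame pose file).
    K: 3x3 intrinsic matrix.
    Returns (uv, depth): Nx2 pixel coordinates and N camera-frame depths.
    """
    w2c = np.linalg.inv(pose_c2w)                       # world -> camera
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (w2c @ pts_h.T).T[:, :3]                      # camera-frame XYZ
    depth = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                         # perspective divide
    return uv, depth

# Sanity check: a point 2m straight ahead should land at the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
uv, depth = project_points(np.array([[0.0, 0.0, 2.0]]), np.eye(4), K)
print(uv[0], depth[0])  # -> [320. 240.] 2.0
```

If the projected points do not overlap the corresponding RGB image content, the pose convention (camera-to-world vs. world-to-camera) or the coordinate frame of the point cloud is likely mismatched.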
Thank you very much! I will give it a try.
For example, if I want to use a point cloud from MegaDepth, or one I built myself, what should I do?