linghai06 opened this issue 2 years ago
Hi, thanks! To apply our method to a custom image dataset, you should follow the steps below:
Run COLMAP dense reconstruction on the image set (reference: https://colmap.github.io/faq.html#reconstruct-sparse-dense-model-from-known-camera-poses). After this step, you should have the following outputs (format described in https://colmap.github.io/format.html#output-format): (1) camera intrinsics (cameras.txt), (2) camera extrinsics (rotation & translation, images.txt), (3) point cloud (points3D.txt).
Transform the camera model to the PyTorch3D convention (details in https://github.com/facebookresearch/pytorch3d/blob/main/docs/notes/cameras.md).
Customize your dataloader. Generally, it can follow the one for Tanks & Temples (neurmips/mnh/dataset_tat.py).
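A minimal map-style dataset for a custom scene might look like this. The field names (image_path, R, T, focal) and the shared-intrinsics assumption are mine; align them with what neurmips/mnh/dataset_tat.py actually expects.

```python
# Sketch of a custom-scene dataset pairing each image with its
# PyTorch3D-convention camera. Field names are illustrative.
import numpy as np

class CustomSceneDataset:
    """Implements __len__/__getitem__, i.e. the same protocol as a
    map-style torch.utils.data.Dataset, so it can be subclassed or
    wrapped without changes."""

    def __init__(self, image_paths, Rs, Ts, focal):
        assert len(image_paths) == len(Rs) == len(Ts)
        self.image_paths = image_paths
        self.Rs = Rs        # list of (3, 3) rotation matrices
        self.Ts = Ts        # list of (3,) translations
        self.focal = focal  # shared intrinsics, for simplicity

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Real code would decode the image here (e.g. with PIL/imageio);
        # the path is returned instead to keep this sketch dependency-free.
        return {
            "image_path": self.image_paths[idx],
            "R": self.Rs[idx],
            "T": self.Ts[idx],
            "focal": self.focal,
        }
```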
Tune some hyperparameters for initialization. Adjust the plane number and plane size by visualizing the planes & points geometry (https://github.com/zhihao-lin/neurmips/blob/1421970b9143f2da897adeed87354f9bbcfb1ce5/mnh/utils_vedo.py#L92). The principle is to make the planes just big and numerous enough to cover most of the points.
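That coverage principle can be made concrete with a small check like the one below: fit a plane to a point cluster with PCA and size it to the cluster's in-plane extent plus a margin. The 1.1 margin and the per-cluster fitting are illustrative choices, not NeurMiPs defaults.

```python
# Sketch of the "cover the points" principle for plane initialization.
import numpy as np

def fit_plane(points, margin=1.1):
    """Fit a plane to (N, 3) points.

    Returns (center, basis, size): `basis` holds the two in-plane axes
    as rows, `size` the plane's half-extents along those axes.
    """
    center = points.mean(axis=0)
    centered = points - center
    # Principal directions: the two largest span the plane,
    # the smallest is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:2]                              # (2, 3) in-plane axes
    coords = centered @ basis.T                 # project onto the plane
    size = margin * np.abs(coords).max(axis=0)  # half-extent per axis
    return center, basis, size

def coverage(points, center, basis, size):
    """Fraction of points whose in-plane projection lies inside the plane."""
    coords = (points - center) @ basis.T
    inside = np.all(np.abs(coords) <= size, axis=1)
    return inside.mean()
```

Increasing the plane count shrinks each cluster (so planes can be smaller), while a larger margin trades coverage for overlap; visualizing with utils_vedo.py is still the best sanity check.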
Hope this helps.
Hi, nice work! I wonder if there is any chance to train on custom datasets, such as a sequence of pictures taken with a handheld camera. If so, how should I prepare them for training? Thank you.