jiepengwang / NeuRIS


About trans_n2w introduced in the preprocessing of ScanNet. #9

Closed: AlbertHuyb closed this issue 1 year ago

AlbertHuyb commented 2 years ago

Hello @jiepengwang , thanks for your previous help. I'm still reading and learning about your code framework.

I'm currently confused by the normalization of the point cloud into a sphere, which is performed by get_norm_matrix_from_point_cloud() in utils_geometry.py. https://github.com/jiepengwang/NeuRIS/blob/7c085b2bd293e61ef7f8655fa1f13e733bbd5ba8/utils/utils_geometry.py#L375
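For reference, my rough mental model of what such a normalization matrix encodes is the sketch below (my own illustration, assuming the sphere is centered on the point cloud and that trans_n2w maps normalized coordinates to world coordinates; the actual implementation may differ):

```python
import numpy as np

def get_norm_matrix_sketch(points):
    """Conceptual sketch only, not the NeuRIS implementation.

    Builds a 4x4 matrix trans_n2w that maps normalized (unit-sphere)
    coordinates back to world coordinates, for an (N, 3) array of
    world-space point-cloud vertices.
    """
    # Center of the point cloud's axis-aligned bounding box.
    bbox_min = points.min(axis=0)
    bbox_max = points.max(axis=0)
    center = (bbox_min + bbox_max) / 2.0

    # Radius so that every point fits inside the sphere.
    radius = np.linalg.norm(points - center, axis=1).max()

    # x_world = trans_n2w @ x_normalized (homogeneous coordinates):
    # scale by the radius, then translate to the center.
    trans_n2w = np.eye(4)
    trans_n2w[:3, :3] *= radius
    trans_n2w[:3, 3] = center
    return trans_n2w
```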

In the preprocessing steps, this function generates a trans_n2w matrix, which is then multiplied with the GT ScanNet poses in L238 of scannet_data.py. https://github.com/jiepengwang/NeuRIS/blob/7c085b2bd293e61ef7f8655fa1f13e733bbd5ba8/preprocess/scannet_data.py#L238 This multiplication makes the poses used in NeuRIS different from the GT ScanNet poses.
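Under that assumption, I would expect the pose update to be conceptually equivalent to something like this (again my own sketch; the exact matrix composition is in scannet_data.py):

```python
import numpy as np

def normalize_pose(pose_c2w, trans_n2w):
    """Bring a world-frame camera-to-world pose into the normalized
    (unit-sphere) frame.

    Assumes trans_n2w maps normalized -> world coordinates; check
    scannet_data.py for the exact convention used by NeuRIS.
    """
    return np.linalg.inv(trans_n2w) @ pose_c2w
```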

I wonder why we should normalize the point cloud into a sphere.

The reason I ask is that I want to know the 3D locations of the sampled points in NeuRIS. Specifically, the sampled points here refer to the variable pts in the render_core() function.

Without the normalization, pts in render_core() would correspond to the actual 3D locations. However, normalizing the poses introduces a mismatch.
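If my reading above is right, the actual 3D locations should still be recoverable by pushing pts back through trans_n2w, e.g. (a NumPy sketch, assuming pts has been reshaped to (N, 3) and moved to CPU, and that trans_n2w maps normalized to world coordinates):

```python
import numpy as np

def pts_norm_to_world(pts_norm, trans_n2w):
    """Map sampled points from the normalized frame back to the
    original ScanNet world frame.

    pts_norm: (N, 3) points, e.g. `pts` from render_core()
              reshaped from (batch, n_samples, 3).
    trans_n2w: the 4x4 matrix produced during preprocessing.
    """
    # Append a homogeneous coordinate, apply the 4x4 transform,
    # then drop the homogeneous coordinate again.
    pts_h = np.concatenate([pts_norm, np.ones((len(pts_norm), 1))], axis=1)
    return (trans_n2w @ pts_h.T).T[:, :3]
```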

jiepengwang commented 1 year ago

We follow the setting of IDR and NeuS in normalizing the camera poses. My understanding is that normalization makes the models easier to train. Besides, even with this normalization, you can still visualize the sampled points and check their positions relative to the normalized GT mesh.
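For example, a check like the following sketch would work (assuming Open3D is installed; the mesh path here is hypothetical and should point to wherever your preprocessing writes the normalized GT mesh):

```python
import numpy as np
import open3d as o3d

# Hypothetical file name; substitute the normalized GT mesh path
# actually produced by your preprocessing run.
mesh = o3d.io.read_triangle_mesh("scene0000_00_norm.ply")
mesh.compute_vertex_normals()

# pts_norm: (N, 3) sampled points from render_core(), detached to
# CPU as NumPy. Random placeholder points are used here.
pts_norm = np.random.rand(1000, 3) * 2 - 1
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts_norm)

# Overlay the sampled points on the normalized GT mesh.
o3d.visualization.draw_geometries([mesh, pcd])
```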

AlbertHuyb commented 1 year ago

Understood. Thanks for your explanation!