Closed OceanYing closed 1 year ago
Thanks for sharing this! This could definitely help others solve the camera convention issues. In NeurMips, we follow the PyTorch3D convention, and more details can be found here: https://github.com/zhihao-lin/neurmips/blob/main/doc/dataset.md So if your dataset provides camera poses in another convention (e.g. OpenCV, OpenGL), please make sure to transform the camera poses and point cloud into the correct, shared coordinate system.
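As a rough illustration of the kind of transform the reply describes (this is a sketch, not code from the NeurMips repo): OpenCV cameras use x-right, y-down, z-forward, while PyTorch3D uses x-left, y-up, z-forward, so converting a camera-to-world pose amounts to negating the camera's x and y axes. The function name and argument layout below are hypothetical.

```python
import numpy as np

def opencv_to_pytorch3d_c2w(c2w_cv: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world pose from OpenCV to PyTorch3D convention.

    OpenCV: x right, y down, z forward.  PyTorch3D: x left, y up, z forward.
    Negating the camera's x and y basis vectors maps one to the other;
    the translation (camera center) is unchanged.
    """
    flip = np.diag([-1.0, -1.0, 1.0, 1.0])  # negate camera x and y axes
    return c2w_cv @ flip
```

Whatever convention pair you are converting between, apply the analogous flip consistently to both the poses and the point cloud so they end up in the same coordinate system.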
Hi, thanks a lot for releasing the code of this impressive work! I got relatively good results on the data provided by the authors. However, on my self-prepared data it failed to produce reasonable results: the rendered images are significantly distorted and blurry. In the following image, the left two are GT and the right two are NeurMips results.
After a period of struggling, I found that the coordinate systems of the NSVF camera poses and the input point cloud (provided by the authors) are not consistent. Under the NSVF camera convention, the camera pose matrix corresponds to (right, down, front); however, the point cloud appears upside down when projected onto the images.
This means there should be a transformation between the NSVF poses and the point cloud. I found that the y-axis and z-axis should be flipped:
Actually, I didn't find this explicit transformation in the codebase. My solution is simply to apply this axis conversion to my point cloud in the dataloader. Then it works!
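The conversion described above can be sketched as follows (a minimal example of the fix applied in my dataloader; the function name is illustrative):

```python
import numpy as np

def flip_yz(points: np.ndarray) -> np.ndarray:
    """Negate the y and z coordinates of an (N, 3) point cloud so it
    matches the NSVF camera convention (right, down, front)."""
    return points * np.array([1.0, -1.0, -1.0])
```

Applying this once to the point cloud when it is loaded is enough; the camera poses themselves are left untouched.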
This phenomenon can be checked by the following code:
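The original snippet is not reproduced here, but a minimal sketch of such a check, assuming a standard pinhole model with a 4x4 camera-to-world pose and a 3x3 intrinsics matrix, might look like this (all names are illustrative):

```python
import numpy as np

def project_points(points: np.ndarray, c2w: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project an (N, 3) world-space point cloud into pixel coordinates.

    points: (N, 3) world coordinates; c2w: (4, 4) camera-to-world pose;
    K: (3, 3) pinhole intrinsics.  Returns (M, 2) pixel coordinates for
    the points in front of the camera.
    """
    w2c = np.linalg.inv(c2w)
    cam = points @ w2c[:3, :3].T + w2c[:3, 3]  # world -> camera frame
    valid = cam[:, 2] > 1e-6                   # keep points in front of the camera
    uv = cam[valid] @ K.T
    return uv[:, :2] / uv[:, 2:3]              # perspective divide
```

Scattering the projected points over a GT image quickly reveals whether the point cloud lands upside down relative to the photograph.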
I hope this issue helps others run the algorithm more easily on their own datasets.