Closed ayushjain1144 closed 1 year ago
Hi @ayushjain1144,
Thank you for your interest!
To transform the ScanNet poses before using it with PyTorch3D, we used the following function: https://github.com/stefan-ainetter/SCANnotate/blob/212234a46763032d2ff3bcd26150a39ad4292229/retrieval_pipeline/load_ScanNet_data.py#L55
Here, 'meta_file_path' is the scene metadata file provided by ScanNet (e.g. scene0000_01.txt for scene scene0000_01), 'pose_path' points to the directory containing the pose files, and 'idx' is the index of the specific frame.
For the intrinsics, you can directly use the parameters provided by ScanNet; no further processing is needed.
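For readers who want the gist without opening the link: the linked function essentially inverts the camera-to-world pose and swaps camera-axis conventions. A minimal sketch, assuming ScanNet's OpenCV-style camera axes (+X right, +Y down, +Z forward) and PyTorch3D's row-vector convention (+X left, +Y up, +Z forward); the helper name is mine, not from the repository:

```python
import numpy as np

def scannet_pose_to_pytorch3d(pose_c2w):
    """Convert a 4x4 ScanNet camera-to-world pose into (R, T) as
    expected by PyTorch3D cameras (row vectors: X_cam = X_world @ R + T)."""
    # ScanNet poses are camera-to-world; invert to get world-to-camera
    w2c = np.linalg.inv(pose_c2w)
    R, t = w2c[:3, :3], w2c[:3, 3]
    # Flip the x and y axes to go from OpenCV-style to PyTorch3D camera axes
    flip = np.diag([-1.0, -1.0, 1.0])
    R = flip @ R
    t = flip @ t
    # Transpose R because PyTorch3D multiplies row vectors from the left
    return R.T, t
```

A world point at the camera centre then maps to the camera-frame origin, which is a quick sanity check before plugging the result into a PyTorch3D camera.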
You can then use the pose/intrinsics with PyTorch3D, e.g. as we did to initialize our renderer in this function:
Regarding the point rasterizer you mentioned above: this code was only used for initial experiments, and I am not sure whether it works properly. I would suggest using the official PyTorch3D implementations for the point rasterizer etc.
Hope this helps, let me know if you need additional information.
Thank you so much, this is very helpful! We will try it and get back to you if we face any more issues. (Happy to close the issue myself in 4-5 days if we don't have further questions)
Hi,
Thank you so much for your wonderful work!
We are trying to use the point cloud rendering code for ScanNet from your repository. We found the point rasterizer and SfMPerspectiveCamerasScanNet, but I don't fully understand the connection between the inputs to these classes and the ScanNet extrinsics. Could you let us know what processing you apply to the ScanNet poses/intrinsics before passing them to these functions? PyTorch3D's documentation has been quite confusing, so any help from you would be much appreciated!