thucz / PanoGRF

[NeurIPS 2023] PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas (360-degree images)
https://thucz.github.io/PanoGRF/
MIT License

Datasets #3

Robot-zeg opened this issue 1 month ago

Robot-zeg commented 1 month ago

I downloaded the preprocessed dataset (baselines_data.zip) from https://drive.google.com/file/d/1RU7EH8SuS0jVbRj-Y4I1KASauoMg5Rcs/view?usp=drive_link and obtained the file structure shown in the picture.

  1. How to divide the training and testing sets?
  2. I am confused about the file names. (The paper reports Matterport3D baselines of 1.0, 1.5, and 2.0 meters and a Replica baseline of 1.0 meters.) [image attached]
thucz commented 1 month ago
  1. This archive doesn't contain the training data I used for PanoGRF. The mp3d data is preprocessed for my baselines (NeuRay and IBRNet). Replica is likewise used only for testing PanoGRF and the baselines (NeuRay and IBRNet).

If you want to generate the training data, follow the steps in the README.md.

  2. The number in the folder name is the distance between a training view and the middle test view, not the camera baseline reported in the paper, which is the distance between the two training views. In our experiments, replica_0.5 corresponds to 1.0 meter in the paper.
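The naming convention above can be sketched as a small conversion: since the folder number is the training-view-to-middle-view distance and the paper's baseline is the distance between the two training views, the paper's baseline is twice the folder number. The helper below is illustrative only; `folder_to_paper_baseline` and the exact folder-name format are assumptions based on the examples in this thread, not part of the PanoGRF codebase.

```python
def folder_to_paper_baseline(folder_name: str) -> float:
    """Map a preprocessed-data folder name (e.g. 'replica_0.5') to the
    camera baseline reported in the paper, in meters.

    The trailing number is the distance from a training view to the middle
    test view; the paper's baseline (distance between the two training
    views) is twice that value. Hypothetical helper, not from the repo.
    """
    dist_to_mid_view = float(folder_name.rsplit("_", 1)[1])
    return 2.0 * dist_to_mid_view

# e.g. folder_to_paper_baseline("replica_0.5") gives 1.0, matching the
# paper's 1.0-meter Replica setting mentioned above.
```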
Robot-zeg commented 1 month ago

Thank you very much for your reply! I found the preprocessing code for Replica and Residential in the dataset directory. Where is the preprocessing code for the MP3D dataset? Thank you!

thucz commented 1 month ago

See the guidance in the first part of README.md. There is no standalone script for preprocessing MP3D; the preprocessing code is folded into the dataloader files for 360-degree MVS depth and monocular depth estimation.