alexanderbergman7 / metanlrpp

Official Implementation for Fast Training of Neural Lumigraph Representations using Meta Learning.

How to preprocess datasets #3

Closed svsambandam closed 2 years ago

svsambandam commented 2 years ago

Hello again :)

While I was able to load the preprocessed version of the DTU dataset that was provided, I was not able to get the NLR dataset to load when running `python experiment_scripts/train_sdf_ibr.py --config_filepath configs/nlrpp_nlr.txt`; I get the following error:

```
Will log into ./logs/nlr_test.
Deleting previous logs in ./logs/nlr_test...
Training Views: [16, 17, 18, 20, 21, 19].
Loading 22 image views...
Initializing sphere SDF...
Loading checkpoint from ./assets/base/sphere_sine256x5.pth (load_sdf=True, load_img_encoder=False, load_img_decoder=False, load_aggregation=False, load_poses=False).
Starting training from scratch...
Traceback (most recent call last):
  File "experiment_scripts/train_sdf_ibr.py", line 433, in <module>
    main()
  File "experiment_scripts/train_sdf_ibr.py", line 402, in main
    model.precompute_3D_buffers()
  File "/scratch/soft/anaconda3/envs/metanlrpp/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "./modules_sdf.py", line 233, in precompute_3D_buffers
    self.precomputed_buffers = self.compute_3D_buffers(source_view_ids, grad=0, dataset_num=dataset_num)
  File "./modules_sdf.py", line 261, in compute_3D_buffers
    view = ray_builder_curr.img_dataset.frames[0].image_views[view_id]
IndexError: list index out of range
```

While debugging, I found that this error occurs because `ray_builder_curr.img_dataset.frames[0].image_views` has length zero. Comparing the DTU and NLR datasets, I noticed the NLR dataset lacks a corresponding "*_meta.py" file for each RGB image. Is there a way to obtain these, so that given any dataset of RGBs and masks I can feed it into the pipeline?
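For anyone hitting the same empty-`image_views` symptom, a quick sanity check is to list which RGB images have no matching per-image meta file before launching training. This is only a sketch: the directory layout, the `.png` extension, and the `_meta.py` suffix (taken from the naming observed in the DTU dataset above) are assumptions to adjust for your own data.

```python
from pathlib import Path


def find_images_missing_meta(image_dir, image_ext=".png", meta_suffix="_meta.py"):
    """Return stems of images in image_dir that have no matching meta file.

    Assumes each RGB image `<stem><image_ext>` should be paired with a
    `<stem><meta_suffix>` file in the same directory (hypothetical layout).
    """
    image_dir = Path(image_dir)
    images = sorted(image_dir.glob(f"*{image_ext}"))
    return [p.stem for p in images
            if not (image_dir / f"{p.stem}{meta_suffix}").exists()]
```

If this returns a non-empty list, the dataset loader will likely skip those views and `image_views` can end up empty, reproducing the `IndexError` above.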

Thank you again for your help!

alexanderbergman7 commented 2 years ago

Sorry about this! Just uploaded our pre-processed version of the NLR dataset in the same Google Drive link as the preprocessed DTU dataset, and updated the README accordingly!

svsambandam commented 2 years ago

Would you be willing to share the code used to preprocess the data? I would like to run this on my own dataset as well :) Thank you!