Jittor / JNeRF

JNeRF is a NeRF benchmark based on Jittor. JNeRF re-implements Instant-NGP and achieves the same performance as the original paper.
Apache License 2.0

Data split on NeRF_synthetic #34

Closed: kwea123 closed this issue 2 years ago

kwea123 commented 2 years ago

During training, these two lines confuse me:

load train data
100%|██████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:03<00:00, 63.65it/s]
load val data
100%|████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00,  7.09it/s]

Why do you have 200 training images and 10 val images? The original data has 100 training and 100 validation images. Which set do you use to get the reported 36.41 PSNR? The original paper uses only the 100 training images, not the 100 train + 100 val images.

Gword commented 2 years ago

We did not find any statement in the paper about whether the val dataset was used. We tried the official code and found that only by training on train+val does the PSNR on lego exceed the paper's results. We will check this more carefully with JNeRF on the full Synthetic-NeRF benchmark.
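For context, the 200 training frames in the log above are consistent with concatenating the standard NeRF-synthetic `transforms_train.json` and `transforms_val.json` (100 frames each). Below is a minimal, hypothetical sketch of such a merge; it is not JNeRF's actual loader, and it assumes the standard NeRF-synthetic metadata layout (`camera_angle_x` plus a `frames` list per split):

```python
import json
import os

def merge_splits(data_dir, splits=("train", "val")):
    """Concatenate frames from several NeRF-synthetic splits into one
    training set.

    Hypothetical helper, not JNeRF's real code. Assumes the standard
    transforms_<split>.json files; merging "train" and "val" (100 frames
    each) yields 200 training images, matching the log above.
    """
    frames = []
    camera_angle_x = None
    for split in splits:
        with open(os.path.join(data_dir, f"transforms_{split}.json")) as f:
            meta = json.load(f)
        # camera_angle_x is identical across splits in NeRF-synthetic.
        camera_angle_x = meta["camera_angle_x"]
        frames.extend(meta["frames"])
    return {"camera_angle_x": camera_angle_x, "frames": frames}
```

Whether this merge is legitimate is exactly the point of contention: scores obtained this way are not comparable to papers that train on the 100-image train split alone.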

kwea123 commented 2 years ago

In my repo, the author confirms that they only use the train data: https://github.com/kwea123/ngp_pl/pull/1. And yes, somehow the released code's performance does not match the paper: https://github.com/NVlabs/instant-ngp/discussions/745