To the best of our knowledge, the authors of Mip-NeRF 360 have not provided an official implementation for the NeRF-360-v2 dataset. Accordingly, we had difficulty reproducing the reported scores, since some minor details are not fully specified in the paper. Here are several differences between ours and the original implementation.
For the remaining parts, we followed the original code for reproduction. We suspect the performance gap is caused by differences in camera normalization or camera coordinate convention, which are not specified in the original code.
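For reference, a rough sketch of one common pose-normalization scheme (this is only an assumption about what the official code might do; the function and variable names are ours, not from either codebase):

```python
import numpy as np

def normalize_poses(poses: np.ndarray, target_radius: float = 1.0) -> np.ndarray:
    """Recenter (N, 3, 4) camera-to-world poses and rescale their translations.

    Illustrative only: the official Mip-NeRF 360 code may use a different
    scheme (e.g. PCA-based alignment of the camera positions).
    """
    poses = poses.copy()
    # Recenter around the mean camera position.
    center = poses[:, :, 3].mean(axis=0)
    poses[:, :, 3] -= center
    # Rescale so the farthest camera sits at `target_radius`.
    radius = np.linalg.norm(poses[:, :, 3], axis=-1).max()
    poses[:, :, 3] *= target_radius / radius
    return poses
```

Even small differences here (e.g. normalizing by the mean radius instead of the max radius, or a different world up-axis) change the effective scene scale, which matters for the scene contraction used on the unbounded scenes.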
Thanks for your reply.
"We expect the performance gap is caused by different camera normalization techniques or camera coordinate selection". I agree with it, because I checked the official code and also found this situation. And the the "original code" is realeased now. I have tested the training of it on the "room" scene. That's why I opened this issue.
BTW, https://github.com/kakaobrain/NeRF-Factory/issues/3 is also caused by the different camera normalization.
Oh, thanks for the information. According to the mentioned config, the authors changed the image size. We'll reflect this new configuration in the next version.
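Until our config is updated, something along these lines can be used to downscale the 360_v2 images at load time (the `downscale` value below is a placeholder, not the official setting; please check the released config for the factor actually used per scene):

```python
from PIL import Image

def load_image(path: str, downscale: int) -> Image.Image:
    """Load an image and downsample it by an integer factor.

    `downscale` is a placeholder; the released Mip-NeRF 360 config
    determines the factor actually used for each scene.
    """
    img = Image.open(path)
    w, h = img.size
    return img.resize((w // downscale, h // downscale), Image.LANCZOS)
```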
Hi,
Thanks for the great work. I see from your TensorBoard charts that your Mip-NeRF 360 implementation performs on par with the original JAX version on indoor scenes, but there seems to be a large gap on the outdoor scenes. Did you happen to find out the reason behind this?
Thanks again!
Hi,
Thanks for your awesome work. I evaluated Mip-NeRF 360 on the "room" scene of the 360_v2 dataset, and the PSNR matches the official JAX implementation (it is even slightly better). But there are some 'floater' artifacts in the results, which I think are related to the distortion loss. Could you please list some implementation points that differ from the JAX version?
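For reference, my understanding of the distortion loss described in the Mip-NeRF 360 paper is roughly the following (a PyTorch sketch with my own variable names, not code from either repository):

```python
import torch

def distortion_loss(s: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Distortion loss from the Mip-NeRF 360 paper.

    s: (..., N+1) normalized sample interval endpoints along each ray.
    w: (..., N)   rendering weights of each interval.
    """
    # Interval midpoints and widths.
    mids = 0.5 * (s[..., 1:] + s[..., :-1])          # (..., N)
    widths = s[..., 1:] - s[..., :-1]                # (..., N)
    # Pairwise term: weighted distances between all interval midpoints.
    dist = (mids[..., :, None] - mids[..., None, :]).abs()           # (..., N, N)
    loss_inter = (w[..., :, None] * w[..., None, :] * dist).sum(dim=(-1, -2))
    # Intra-interval term.
    loss_intra = (w ** 2 * widths).sum(dim=-1) / 3.0
    return (loss_inter + loss_intra).mean()
```

If the weight on this term or the parameterization of `s` differs from the JAX code, I could imagine it showing up as floaters, so any details on those points would be appreciated.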