google-research / multinerf

A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
Apache License 2.0

Using the Ref-NeRF codebase to reproduce the table in the paper. #32

Closed 78ij closed 1 year ago

78ij commented 1 year ago

First, thanks to the authors for this impressive work! I am currently trying to reproduce Tables S6-S9 in the original Ref-NeRF paper (the Shiny Blender dataset), using the shinyblender config in this repository. However, when testing on the "ball" scene, the results are not as good as the paper reports: the paper shows a PSNR of ~47, but in my case, after 250,000 iterations, the training PSNR barely reaches 39, and the rendered images look far from the ground truth. I am really confused by these results.

[Four attached screenshots of rendered results]
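(For context, PSNR here is computed from the mean squared error between a rendered image and the ground truth. A minimal sketch of that metric in NumPy; the function name and the assumption that images are floats in [0, 1] are mine, not from the codebase:)

```python
import numpy as np

def compute_psnr(rendered: np.ndarray, gt: np.ndarray) -> float:
    """Hypothetical helper: PSNR in dB for float images in [0, 1]."""
    mse = np.mean((rendered - gt) ** 2)
    return float(-10.0 * np.log10(mse))
```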

jonbarron commented 1 year ago

Weird, it looks like it's not getting fully opaque.

Looking at our result in generate_tables, it looks like when we ran this code on this scene we also got 38 PSNR. I don't remember the results looking this bad visually though. Note that this code release isn't identical to what we used in the paper (this is basically a re-implementation of ref-nerf inside the mipnerf360 codebase).

This particular scene is tricky because the scene is easy enough that ref-nerf is able to get extremely low error rates (note that 47 and 38 PSNR are both extremely high numbers that correspond to extremely small MSE values). I think something small in the code is causing it to fall into a different local minimum across different runs. You might want to try tweaking the hyperparams to see if it falls into a different local minimum if you really care about the performance of this scene. Or I would just focus on a different scene, as this scene is extremely synthetic and "toy".
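To make the "extremely small MSE" point concrete: for images normalized to [0, 1], PSNR = -10 · log10(MSE), so both reported numbers already correspond to tiny per-pixel errors. A quick back-of-the-envelope check (an illustration, not output from this codebase):

```python
# PSNR (dB) -> MSE, assuming images normalized to [0, 1]
for psnr_db in (47.0, 38.0):
    mse = 10.0 ** (-psnr_db / 10.0)
    print(f"{psnr_db:.0f} dB -> MSE = {mse:.2e}")
# 47 dB -> MSE = 2.00e-05
# 38 dB -> MSE = 1.58e-04
```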

78ij commented 1 year ago

Thanks for your quick reply. The metrics are indeed better in other scenes (e.g., helmet, teapot), and the geometry is better too. I agree that this scene is a "testing stub" and not so important, so I'll move on to more complex scenes. (I have observed that in a NeRF context, simple scenes are not always easy to learn :( ) I am closing this issue; thanks again for your reply.

Dharmendra04 commented 10 months ago

Hi, where can I find this Shiny Blender dataset? And which of the following configuration files is the one used for the Shiny Blender dataset?

[Screenshot of the repository's configuration files, 2023-09-09]