half-potato / nmf

Our method takes as input a collection of images (100 in our experiments) with known cameras, and outputs the volumetric density and normals, materials (BRDFs), and far-field illumination (environment map) of the scene.
https://half-potato.gitlab.io/posts/nmf/
MIT License

[Question] Reproduction of the reported scores in the paper #5

Closed ChemJeff closed 1 year ago

ChemJeff commented 1 year ago

Hi, I recently found this excellent work and tried a demo with the car scene from the Shiny Blender dataset. However, the resulting evaluation scores are lower (by a large margin) than those reported in the paper (v2 on arXiv):

- PSNR: 29.61 (from the saved mean.txt; in the paper: 30.28)
- SSIM: 0.9448 (from the saved mean.txt; in the paper: 0.951)
- MAE: 8.0223 (computed from the saved normal maps, roughly as sketched below; in the paper: 2.598)
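
For reference, the MAE number was obtained from the saved normal maps along these lines (a minimal sketch, not the repo's evaluation code; the file names are hypothetical, and a real evaluation would also mask out background pixels):

```python
# Minimal sketch (not the repo's notebook): mean angular error (MAE) in
# degrees between a predicted and a ground-truth normal map, both stored as
# 8-bit RGB images with normals mapped from [-1, 1] to [0, 1].
import imageio.v2 as imageio
import numpy as np

def mean_angular_error_deg(pred_path: str, gt_path: str) -> float:
    # Load, drop any alpha channel, and map [0, 1] RGB back to [-1, 1] vectors
    pred = imageio.imread(pred_path)[..., :3].astype(np.float32) / 255.0 * 2.0 - 1.0
    gt = imageio.imread(gt_path)[..., :3].astype(np.float32) / 255.0 * 2.0 - 1.0
    # Re-normalize to unit length to undo quantization error
    pred /= np.linalg.norm(pred, axis=-1, keepdims=True) + 1e-8
    gt /= np.linalg.norm(gt, axis=-1, keepdims=True) + 1e-8
    # Angle between corresponding normals, averaged over all pixels
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

# Example (hypothetical paths): mean_angular_error_deg("normals/000.png", "gt_normals/000.png")
```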

The command I used:

```
python train.py -m expname=EXPNAME model=microfacet_tensorf2 dataset=car vis_every=5000 datadir=DATADIR model.arch.bg_module.bg_path=backgrounds/forest.exr
```

I hope to hear your response soon, thanks.

half-potato commented 1 year ago

Hi, sorry for the delay. I think something is wrong with how the metrics are being computed and I'm not sure why. One of the notebooks, named something like recompute metrics, fixes the SSIM, LPIPS, and PSNR; the other, named something about normals, fixes the MAE. The details are in the README.
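
For a rough idea of what that recomputation involves, here is a minimal sketch for a single render/ground-truth pair (the notebooks in the repo are the authoritative versions; the paths, function name, and LPIPS backbone below are assumptions):

```python
# Minimal sketch (not the repo's notebook): recompute PSNR, SSIM, and LPIPS
# for one saved render / ground-truth pair. Paths are hypothetical.
import imageio.v2 as imageio
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="vgg")  # assumption: the choice of backbone affects the score

def image_metrics(render_path: str, gt_path: str):
    render = imageio.imread(render_path)[..., :3].astype(np.float32) / 255.0
    gt = imageio.imread(gt_path)[..., :3].astype(np.float32) / 255.0
    psnr = peak_signal_noise_ratio(gt, render, data_range=1.0)
    ssim = structural_similarity(gt, render, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2.0 - 1.0
    lp = lpips_fn(to_t(render), to_t(gt)).item()
    return psnr, ssim, lp
```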

half-potato commented 1 year ago

Currently investigating the cause of this weird extra bit under the car. Not sure when this started happening... (image attached)

half-potato commented 1 year ago

Turns out the default config was incorrect. The Fresnel mixing mode should be set to `fresnel`, not `lambda`.

I've included this change in the latest version.
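
For anyone staying on an older checkout, the fix amounts to overriding that option at the command line, along the lines below. The exact config key path is a guess based on the repo's hydra-style overrides and may differ; check the microfacet model config for the real name.

```
# Hypothetical override; the actual key name/path in the config may differ.
python train.py -m expname=EXPNAME model=microfacet_tensorf2 dataset=car \
    vis_every=5000 datadir=DATADIR model.arch.bg_module.bg_path=backgrounds/forest.exr \
    model.arch.fresnel_mode=fresnel
```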