google / nerfactor

Neural Factorization of Shape and Reflectance Under an Unknown Illumination
https://xiuming.info/projects/nerfactor/
Apache License 2.0

The pre-trained models and data provided are not sufficient to perform tests on the blender dataset #20

Closed chobao closed 2 years ago

chobao commented 2 years ago

I want to render albedo and relighting results with the pre-trained NeRFactor on the Blender dataset, without further training. However, I find that the pre-trained models and data provided are not sufficient to run test.py on the Blender dataset: it requires shape_ckpt, brdf_ckpt, and processed data (lvis.npy, xyz.npy, alpha.png, normal.npy) for each view, none of which are provided. So does this mean I still need to do the Data Preparation step and train the shape model myself? Are the provided pre-trained models useless, then?

xiumingzhang commented 2 years ago

The processed data are too large to release, unfortunately, but your comment is fair; I can try releasing the shape and BRDF checkpoints, with which you will be able to generate lvis.npy, xyz.npy, alpha.png, and normal.npy yourself. Will that help?
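For reference, a minimal sketch of a pre-flight check for the per-view buffers named in this thread. The file names come from the discussion above; the flat per-view directory layout is an assumption, not necessarily NeRFactor's actual convention:

```python
from pathlib import Path

# Per-view buffers discussed in this thread; the directory layout
# (one folder per view containing these files) is an assumption.
REQUIRED = ("xyz.npy", "normal.npy", "lvis.npy", "alpha.png")

def missing_buffers(view_dir):
    """Return the subset of REQUIRED files absent from view_dir."""
    view_dir = Path(view_dir)
    return [f for f in REQUIRED if not (view_dir / f).exists()]

# Example: a nonexistent directory is missing all four buffers.
print(missing_buffers("/tmp/nonexistent_view"))  # all four names
```

Running this over every test view before launching test.py makes it obvious which buffers still need to be generated from the checkpoints.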

chobao commented 2 years ago

That would be great, thank you. I still have one question: if I want to run test.py and render albedo with the pre-trained NeRFactor (including shape_ckpt and brdf_ckpt), is only xyz.npy required? That is, normal.npy, lvis.npy, and alpha.png need not be prepared in advance, since they should be predicted by the normal MLP and visibility MLP at test time?

xiumingzhang commented 2 years ago

Yes: unless you opt to take the NeRF shape as is (no further optimization of the geometry), normals and light visibility will be predicted by the trained model. Here's the line where the model predicts normals from xyz: https://github.com/google/nerfactor/blob/19651eb72af7f6174a4d9fb68c987047ba351980/nerfactor/models/nerfactor.py#L207
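The xyz-to-normal prediction at the linked line can be sketched as follows. This is a toy NumPy stand-in, not the actual NeRFactor architecture or weights; the point is only the shape of the mapping (3D surface points in, unit-length normals out) and the final normalization step:

```python
import numpy as np

# Toy stand-in for the normal MLP: random weights just to illustrate
# the xyz -> normal mapping. The real model is a trained TF network.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((3, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 3)), np.zeros(3)

def pred_normals(xyz):
    """xyz: (N, 3) surface points -> (N, 3) unit normals."""
    h = np.maximum(xyz @ W1 + b1, 0.0)  # ReLU hidden layer
    n = h @ W2 + b2                     # raw normal prediction
    # Normalize so each predicted normal has unit length.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

normals = pred_normals(rng.standard_normal((4, 3)))
print(np.allclose(np.linalg.norm(normals, axis=-1), 1.0))
```

Because the normals come out of such a network rather than from a stored buffer, only xyz.npy (the surface points) is strictly needed as input when the shape and BRDF checkpoints are available.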