JiuTongBro closed this issue 2 years ago
My apologies for the super delayed response!
In case you still need the pre-trained models, they can be found here.
LMK if you need more help.
Hi, sincere thanks for your kind reply and for sharing the model weights. I would still like to ask: I noticed in your code that you transform rendered images from linear space to sRGB space in the model.
Does that mean the GT images rendered by Blender using compositing nodes are in sRGB space, not linear space? I am new to Blender, and I found a note in the Blender documentation saying that images produced by node compositing in Blender are in linear space: https://docs.blender.org/manual/en/latest/render/color_management.html
_'Sequencer The color space that the Sequencer operates in. By default, the Sequencer operates in sRGB space, but it can also be set to work in Linear space like the Compositing nodes, or another color space. Different color spaces will give different results for color correction, crossfades, and other operations.'_
I think you have the option to render both linear-space and sRGB images. As you said, it depends on how you set up the node tree.
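For reference, the linear-to-sRGB conversion being discussed is commonly the piecewise transfer function from the sRGB standard (IEC 61966-2-1). The sketch below is a minimal NumPy version of that standard function; it is not necessarily the exact implementation used in the NeRFactor code, which may instead use a simple gamma approximation.

```python
import numpy as np

def linear_to_srgb(x):
    """Convert linear-light values in [0, 1] to sRGB-encoded values.

    Uses the standard sRGB transfer function: a linear segment near
    zero and a gamma-2.4 power curve above the threshold.
    """
    x = np.clip(x, 0.0, 1.0)
    return np.where(
        x <= 0.0031308,
        12.92 * x,                                # linear segment for small values
        1.055 * np.power(x, 1.0 / 2.4) - 0.055,   # gamma segment
    )

# Example: mid-gray in linear light encodes to roughly 0.735 in sRGB.
print(linear_to_srgb(np.array([0.0, 0.5, 1.0])))
```

Applying this to an already-sRGB image would double-encode it (washing it out), which is why it matters whether Blender's compositor output is linear or display-referred.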
Hi, NeRFactor is truly an outstanding and inspiring work.
However, when I run the code with the default scripts and settings provided under nerfactor/, the results, especially the test-time relighting results, are not as good as the figures in your paper:
I set 'ims' and 'imh' to 512 in all these experiments. Are there any settings that need to be changed when running the code, such as the total number of iterations or the learning rates? Or is there anything else you think may explain this performance?
Thanks!