Closed Stephanie-ustc closed 2 years ago
Hi,
Technically, the input image is not normalized twice:
The reason is that we wanted to better utilize the ImageNet-pretrained weights by normalizing the encoder inputs the same way as in pretraining, without having to "un-normalize" the ground-truth image back to [0.0, 1.0] for visualization and loss computation (for the small LLFF dataset, we pre-process the data in advance and keep the images in memory).
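Roughly, the intended split looks like the sketch below (simplified and written for torchvision >= 0.13; the actual `img_transforms` and `ResnetEncoder` code in the repo may differ in detail):

```python
# Simplified sketch of the normalization split; not the exact repository code.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T

# Dataset side: a transform like this keeps the loaded image in [0.0, 1.0]
# (the real img_transforms in nerf_dataset.py may also resize/crop), so the
# same tensor doubles as the ground-truth color for the loss and for
# visualization without any un-normalization step.
img_transforms = T.Compose([T.ToTensor()])

class ResnetEncoder(nn.Module):
    """Illustrative encoder that applies ImageNet normalization in forward()."""

    def __init__(self):
        super().__init__()
        resnet = models.resnet34(weights="IMAGENET1K_V1")
        # Keep only the convolutional trunk so the encoder outputs feature maps.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # Mean/std used during ImageNet pretraining.
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, x):
        # x is an image batch in [0, 1]; normalize here so the pretrained
        # weights see the same input distribution as during pretraining.
        x = (x - self.mean) / self.std
        return self.backbone(x)
```

With this split, the encoder input matches the pretraining distribution, while the tensor returned by the dataset can be compared directly against the rendered colors.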
My apologies for the confusion. Hope this helps.
Zijian
Oh, you're right. I'm sorry I didn't see it clearly. Thank you for your reply.
Hi, image normalization is already performed by "img_transforms" when loading images in "nerf_dataset.py". Why is the input image normalized again in the ResnetEncoder forward step?