https://github.com/yenchenlin/nerf-pytorch/assets/72788314/b271f164-a8c6-4c6e-9364-727466ba4fc4
This is the rendered video. I wonder why the lower part looks fine while the upper part is almost black. Could the reason be that I commented out the white-background compositing below?
```python
# if white_bkgd:
#     rgb_map = rgb_map + (1.-acc_map[...,None])
```
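For context, this is roughly the compositing step in `raw2outputs` that I commented out; a minimal sketch using the standard nerf-pytorch names (`weights`, `rgb_map`, `acc_map`, `white_bkgd`), not the exact file:

```python
import torch

def composite(weights, rgb, white_bkgd=False):
    """Composite per-sample colors along each ray.

    weights: [N_rays, N_samples] volume rendering weights
    rgb:     [N_rays, N_samples, 3] per-sample colors
    """
    rgb_map = torch.sum(weights[..., None] * rgb, dim=-2)  # [N_rays, 3]
    acc_map = torch.sum(weights, dim=-1)                   # [N_rays], accumulated opacity

    if white_bkgd:
        # Fill the leftover transparency (1 - acc_map) with white.
        rgb_map = rgb_map + (1. - acc_map[..., None])
    # Without this line, rays that accumulate little density (acc_map ~ 0)
    # simply stay black, which is what I expect for a black background.
    return rgb_map, acc_map
```

If I understand it correctly, commenting this out only changes what empty space looks like (black instead of white), so the missing upper half would mean the model predicts almost no density there.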
Alternatively, the issue could be caused by my own Lego dataset. When generating the data, I did not use the officially provided dataset; instead I created it myself in Blender. For example, in this image, the training data was rendered by orbiting the camera around the lower part of the scene, while the test data was rendered by orbiting the camera around the upper part. Could the black region in the upper half be the result of the test cameras looking at parts of the scene that no training camera ever observed?
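To check whether the test views are simply outside the training distribution, I could compare camera elevations between the two splits. A rough sketch, assuming my data follows the standard Blender `transforms_train.json` / `transforms_test.json` layout with 4x4 camera-to-world matrices and a z-up, origin-centered scene:

```python
import json
import numpy as np

def elevations(path):
    """Return the elevation angle (degrees) of each camera relative to the scene origin."""
    with open(path) as f:
        meta = json.load(f)
    angles = []
    for frame in meta['frames']:
        c2w = np.array(frame['transform_matrix'])  # 4x4 camera-to-world
        pos = c2w[:3, 3]                           # camera position
        r = np.linalg.norm(pos)
        angles.append(np.degrees(np.arcsin(pos[2] / r)))  # elevation above the xy-plane
    return np.array(angles)

train_el = elevations('transforms_train.json')
test_el = elevations('transforms_test.json')
print('train elevation range:', train_el.min(), train_el.max())
print('test  elevation range:', test_el.min(), test_el.max())
# If the test range barely overlaps the train range, the black upper half is
# probably unseen geometry rather than a bug in the code.
```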
I changed the Lego background from white to black for training, and this issue appeared during the training process. What could be the cause? (Here I commented out the `white_bkgd` part quoted above.)
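One more thing I want to rule out: if I remember correctly, `white_bkgd` is also used when the RGBA training images are loaded in `run_nerf.py`, roughly like the sketch below (from memory, the exact lines may differ by version). If I only comment out the rendering side but the training targets are still composited one way or the other, the supervision becomes inconsistent with the renderer:

```python
# Sketch of the image-side handling after load_blender_data (from memory).
# images has shape [N, H, W, 4] with an alpha channel from Blender.
if white_bkgd:
    # Composite RGBA onto a white background for the training targets.
    images = images[..., :3] * images[..., -1:] + (1. - images[..., -1:])
else:
    # Drop alpha; transparent pixels keep whatever RGB Blender wrote (often black).
    images = images[..., :3]
```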