I am really grateful for the flexible PyTorch implementation of NGP available here.
I have been running NeRF from this repo for the last couple of weeks, and I found an inconsistency in the `near` and `far` values.
What I found is that the `aabb` value used here defaults to `[-1, -1, -1, 1, 1, 1]` irrespective of the `bound` value.
However, in the `__init__` function of the `NeRFRenderer` module, `aabb_train` and `aabb_infer` are actually set to `[-bound, -bound, -bound, bound, bound, bound]`.
I can't explain this behaviour: why would the values change between `__init__` and the `run` function? Could it be due to the use of the `self.register_buffer` function?
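For context on my `register_buffer` suspicion, here is a minimal self-contained sketch (a hypothetical module, not the actual `NeRFRenderer`) showing one way a buffer set in `__init__` can end up with different values later: buffers registered with `register_buffer` are part of the module's `state_dict`, so loading a checkpoint silently overwrites them.

```python
import torch
import torch.nn as nn


class Renderer(nn.Module):
    """Toy stand-in for a renderer that stores its AABB as a buffer."""

    def __init__(self, bound=2):
        super().__init__()
        aabb = torch.tensor(
            [-bound, -bound, -bound, bound, bound, bound], dtype=torch.float32
        )
        # Buffers are saved in (and restored from) the state_dict.
        self.register_buffer("aabb_train", aabb)


model = Renderer(bound=2)
print(model.aabb_train)  # tensor([-2., -2., -2.,  2.,  2.,  2.])

# A checkpoint saved from a model created with bound=1 carries
# aabb_train = [-1, -1, -1, 1, 1, 1]; loading it replaces the buffer
# that __init__ computed from bound=2.
ckpt = Renderer(bound=1).state_dict()
model.load_state_dict(ckpt)
print(model.aabb_train)  # tensor([-1., -1., -1.,  1.,  1.,  1.])
```

So if a checkpoint was saved with a different `bound`, the `aabb` seen at run time would no longer match the `__init__` value. I'm not sure whether this is what's happening in this repo, though.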