Closed haowang020110 closed 7 months ago
Hi there,
Thanks for reaching out. The first snippet is from the rendering script (linked below) used during the paper's preparation to manipulate the original poses and render novel trajectories. When the original poses are modified, there is no guarantee that the original depth bounds remain appropriate for rendering from the new viewpoints. Slightly widening the depth bounds for each modified pose may help include tissue areas that were too close to or too far from the original camera pose and would otherwise fall outside the pre-computed bounds once the pose is modified. That said, if the modified poses deviate a lot from the originals, this simple trick may not be enough. https://github.com/surgical-vision/REIM-NeRF/blob/9be61a9700896223ad1a5cfd45067ee375f40184/reimnerf/datasets/reim_json_render.py#L207-L209
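To make the idea concrete, here is a minimal sketch of that bound-widening trick. The helper name and the 10% margin are assumptions for illustration, not values taken from the repository:

```python
def relax_bounds(near, far, margin=0.1):
    """Widen precomputed near/far depth bounds by a relative margin.

    Hypothetical helper illustrating the rendering-script trick:
    after a pose is perturbed, tissue may fall slightly outside the
    original [near, far] range, so we shrink `near` and grow `far`.
    The 10% `margin` is an assumed value, not from the repo.
    """
    return near * (1.0 - margin), far * (1.0 + margin)

# Original bounds of 0.5 / 2.0 become 0.45 / 2.2
print(relax_bounds(0.5, 2.0))
```

As noted above, a fixed relative margin only compensates for small pose perturbations; large deviations would require recomputing bounds from the geometry.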
The second snippet is from the data preparation script which reads the original depth-maps and generates the bounds for the original poses. https://github.com/surgical-vision/REIM-NeRF/blob/7578806b92c28cfdc41d89bb1e6c80eacb5a9f70/reimnerf/datasets/preprocessing/raw_data.py#L447-L448
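For context, the preprocessing step can be sketched as follows. This is a hypothetical reconstruction of the idea (deriving per-pose bounds from the valid pixels of a depth map); the percentiles and safety margin are assumptions, not the script's actual values:

```python
import numpy as np

def bounds_from_depth(depth_map, lo_pct=0.1, hi_pct=99.9, margin=0.05):
    """Derive near/far bounds for one pose from its depth map.

    Hypothetical sketch of the preprocessing logic: mask out invalid
    (zero) depths, take robust low/high percentiles, and pad them with
    a small relative margin. Percentile and margin values are assumed.
    """
    valid = depth_map[depth_map > 0]
    near = float(np.percentile(valid, lo_pct)) * (1.0 - margin)
    far = float(np.percentile(valid, hi_pct)) * (1.0 + margin)
    return near, far

# Usage: a synthetic depth map with some invalid (zero) pixels
depth = np.random.uniform(0.5, 2.0, size=(64, 64))
depth[:8, :8] = 0.0  # invalid region, e.g. no depth estimate
near, far = bounds_from_depth(depth)
print(near, far)
```

The key point of the discussion is that these bounds are tied to the original viewpoints; only when poses are modified does the rendering script need to widen them again.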
If you do not manipulate poses and only render from the original viewpoints, you do not need to modify the bounds as done in the rendering script (first snippet).
Thanks a lot for your kind and detailed answer!
Hi, thanks for your work first of all! I noticed that in https://github.com/surgical-vision/REIM-NeRF/blob/9be61a9700896223ad1a5cfd45067ee375f40184/reimnerf/datasets/reim_json_render.py#L207-L209 the near/far bounds are scaled again, and I am confused about why this is needed when they have already been scaled in preprocessing: https://github.com/surgical-vision/REIM-NeRF/blob/7578806b92c28cfdc41d89bb1e6c80eacb5a9f70/reimnerf/datasets/preprocessing/raw_data.py#L447-L448