Open ThomasParistech opened 2 years ago
Hi Thomas @ThomasParistech ,
Were you able to find a solution? I have a similar scene where the camera moves all along the scene, i.e. there is no specific area/object of interest.
Thanks !
I also run into problems when reconstructing scenes like this. Any suggestions?
Also interested in an answer to this.
@yyashpatel I'm still looking for a meaningful way of handling scale, offset and aabb_scale...
Instead of taking the intersection of the camera forward vectors, I fit a 2D rotated bbox on my camera poses seen from a bird's eye view (assuming the floor is flat) and take its center and diagonal to estimate the scale/offset parameters for NeRF. The only problem is that I don't know what the best scaling to apply is: should the whole scene fit in the unit cube? Should the scene be 4/3 larger?
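For what it's worth, here is a minimal sketch of that idea: a PCA-based rotated bbox on the XY coordinates of the camera positions (assuming Z is up and the floor is flat). The function name and the `target_radius` parameter are my own; as said above, the right target size is exactly the open question.

```python
import numpy as np

def estimate_scale_offset(cam_positions: np.ndarray, target_radius: float = 0.5):
    """Estimate NeRF scale/offset from camera positions (N, 3).

    Fits a rotated 2D bbox via PCA on the ground-plane (XY) coordinates,
    then maps its diagonal to 2 * target_radius around the cube center
    (0.5, 0.5, 0.5). target_radius=0.5 means the bbox diagonal spans the
    unit cube; whether that (or 4/3 of it) is best is still unclear.
    """
    xy = cam_positions[:, :2]
    centered = xy - xy.mean(axis=0)
    # Principal axes of the camera footprint give the bbox orientation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    rotated = centered @ vt.T              # coordinates in the bbox frame
    mins, maxs = rotated.min(axis=0), rotated.max(axis=0)
    extent = maxs - mins                   # bbox side lengths
    center_xy = ((mins + maxs) / 2.0) @ vt + xy.mean(axis=0)
    diagonal = float(np.linalg.norm(extent))
    scale = 2.0 * target_radius / diagonal
    offset = np.array([0.5 - scale * center_xy[0],
                       0.5 - scale * center_xy[1],
                       0.5 - scale * cam_positions[:, 2].mean()])
    return scale, offset
```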
@ThomasParistech Could you elaborate more on what you mean by this?
F2-NeRF solves this problem elegantly https://totoro97.github.io/projects/f2-nerf/
Many thanks for your work :fire:
As mentioned in your very helpful note about NeRF tuning (Tips for training NeRF models with Instant Neural Graphics Primitives), the script colmap2nerf.py targets scenes in which most of the cameras are oriented toward a fixed point. The line
f["transform_matrix"][0:3,3] *= 4.0 / avglen
is designed so that cameras looking at the center lie just outside the unit cube (at a distance of 4/3 from the center (0.5, 0.5, 0.5), like in the Lego scene below). However, when rendering a scene where the information is evenly distributed, like a room, I'm wondering whether it still really makes sense to preprocess the input poses as in colmap2nerf.py.
Here's what my camera path looks like when I go back and forth in a rectangular room; it has been scaled using colmap2nerf.py.
As you can see, there's no specific central area, so we could simply find the cubic AABB that encompasses the scene (using the poses and/or the point cloud) and map it to the unit cube. Then we can extend transforms.json with scale, offset and aabb_scale.
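A rough sketch of what I have in mind (the helper name is mine; the `scale`/`offset`/`aabb_scale` keys follow instant-ngp's transforms.json convention, and `points` could be camera positions and/or a sparse point cloud):

```python
import json
import numpy as np

def aabb_to_nerf_params(points: np.ndarray, transforms_path: str):
    """Map the cubic axis-aligned box enclosing `points` (N, 3) to the
    unit cube and write the result into transforms.json."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    side = float((maxs - mins).max())       # cubic box: take the largest extent
    center = (mins + maxs) / 2.0
    scale = 1.0 / side                      # box side -> unit cube side
    offset = (0.5 - scale * center).tolist()  # move box center to (0.5, 0.5, 0.5)
    with open(transforms_path) as f:
        data = json.load(f)
    data["scale"] = scale
    data["offset"] = offset
    data["aabb_scale"] = 1                  # everything fits in the unit cube
    with open(transforms_path, "w") as f:
        json.dump(data, f, indent=2)
    return scale, offset
```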
Here are my questions: 1) Do you have special tips for such evenly distributed scenes?
2) scale and aabb_scale seem to play a similar role: defining the training volume, and several combinations of them would apparently lead to the same result. However, scaling the scene must have an impact on accuracy, since the number of network parameters per m³ changes. Is the only difference between setting the training volume with scale versus aabb_scale a matter of training time and spatial resolution? I'd like to fix the scale and select aabb_scale automatically so that the entire scene fits in the AABB.
3) If my scene is flat, is it possible to use a non-cubic training AABB? Applying a non-uniform scaling to preprocess the input poses doesn't seem like a good idea ;)
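Regarding question 2), here is the kind of automatic selection I mean, as a sketch (the helper is hypothetical; I'm assuming aabb_scale must be a power of two, with the box of side aabb_scale centered on (0.5, 0.5, 0.5) as in instant-ngp):

```python
import numpy as np

def pick_aabb_scale(cam_positions: np.ndarray, scale: float,
                    offset: np.ndarray, max_aabb_scale: int = 128) -> int:
    """Given a fixed scale/offset, return the smallest power-of-two
    aabb_scale whose box [0.5 - s/2, 0.5 + s/2]^3 contains all points."""
    pts = cam_positions * scale + offset    # into NeRF coordinates
    half_extent = np.abs(pts - 0.5).max()   # Chebyshev radius around cube center
    s = 1
    while s / 2.0 < half_extent and s < max_aabb_scale:
        s *= 2
    return s
```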
Thanks !