nerfology opened this issue 2 years ago
@nerfology Hi, the current density grid for d-nerf is simply a [T, H^3] boolean array, dividing time into T intervals. For any time t, we choose the [H^3] grid of the corresponding time interval, so some noise may be observed when the time crosses between two intervals. Increasing T will likely make the transition smoother, but it also takes more storage and may slow convergence. Do you have any ideas on how to condition on time more smoothly?
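To make the boundary artifact concrete, the interval selection described above can be sketched roughly as follows (all names, shapes, and values are illustrative, not the repo's actual code):

```python
import numpy as np

# Hypothetical parameters: T time intervals, grid resolution H (illustrative only).
T, H = 16, 4
rng = np.random.default_rng(0)
# A [T, H^3] boolean occupancy grid, one slice per time interval.
density_grid = rng.random((T, H**3)) > 0.5

def grid_for_time(t):
    """Select the occupancy slice for time t in [0, 1).

    The mapping floor(t * T) is piecewise constant, so the returned
    slice jumps abruptly whenever t crosses an interval boundary.
    """
    idx = min(int(t * T), T - 1)
    return density_grid[idx]

# Two times just on either side of a boundary can get different slices,
# which is the source of the "popping" / cube-shaped noise.
a = grid_for_time(1.0 / T - 1e-6)   # last moment of interval 0
b = grid_for_time(1.0 / T + 1e-6)   # first moment of interval 1
print((a != b).sum(), "voxels change state across the boundary")
```

Since the selection is a hard `floor`, no choice of T removes the discontinuity; it only makes each jump smaller.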
Thanks for the explanation. Indeed, increasing T takes more storage, and I ran out of memory.
Do you have any ideas on how to condition on time more smoothly?
Currently, I don't really have an idea. I have thought of averaging the grids across T to "smooth out" the animation, but I still have to figure out how the density grid is updated.
Thanks for this great repo.
I have noticed that in some animations we can see noise as "cubes" when changing the time variable. I have narrowed it down to the selection of the entry in the density bitfield array that corresponds to the current time, which then renders a slightly different (and not view-consistent) density grid, causing a slight change in the rendering. My first question is: why not make the density grid dependent on time (as an input) so that it evolves more smoothly through time? And if I have missed something, what could I do to get less (density-grid) noise when changing the time variable?
To be precise: the noise I see is often on edges or in regions of complex appearance, and it produces a blocky look, as if the density grid at a specific location were lower; when changing the time, these "blocks" appear randomly (in a subtle but noticeable way).
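One way the "density grid dependent on time" idea could look, sketched under the assumption that float densities per slice are kept instead of a bitfield (all names and shapes are hypothetical): linearly interpolate the two nearest time slices before thresholding, so the effective grid varies continuously in t.

```python
import numpy as np

T, H = 8, 4
rng = np.random.default_rng(2)
# Hypothetical float density grid instead of a boolean bitfield: [T, H^3].
density = rng.random((T, H**3)).astype(np.float32)
threshold = 0.5

def occupancy_at(t):
    """Linearly interpolate the two nearest time slices, then threshold.

    Interval centers sit at (i + 0.5) / T; blending between them makes
    the pre-threshold densities continuous in t instead of jumping at
    interval boundaries.
    """
    x = np.clip(t * T - 0.5, 0.0, T - 1.0)  # continuous slice coordinate
    i0 = int(np.floor(x))
    i1 = min(i0 + 1, T - 1)
    w = x - i0
    blended = (1.0 - w) * density[i0] + w * density[i1]
    return blended > threshold

occ = occupancy_at(0.3)
```

Voxels can still flip at the threshold, but they flip at different times per voxel rather than all at once at an interval boundary, which should turn the synchronized "popping" into much subtler per-voxel changes. Doubling the grid storage (float vs. bit) is the trade-off.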