Closed · xiexh20 closed this issue 2 years ago
Hi Xianghui, the flatten loss is borrowed from SoftRas. See the explanation here.
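For readers landing here: the flatten loss in SoftRas penalizes sharp creases between adjacent mesh faces so the optimized surface stays smooth. Below is a minimal numpy sketch in that spirit (not the exact SoftRas or LASR implementation; the function name and the `(1 - cos)^2` penalty form are assumptions for illustration):

```python
import numpy as np

def flatten_loss(vertices, faces):
    """Sketch of a flatten/smoothness loss: for each interior edge shared by
    two triangles, penalize misalignment of the two face normals.
    Coplanar neighbors give cos = 1 and zero penalty."""
    # Unit normals per face (assumes consistent winding).
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Map each undirected edge to the faces that contain it.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_faces.setdefault(frozenset((int(a), int(b))), []).append(fi)

    # Cosine between normals of face pairs sharing an edge.
    cos_vals = [np.dot(n[fs[0]], n[fs[1]])
                for fs in edge_faces.values() if len(fs) == 2]
    if not cos_vals:
        return 0.0
    # Zero for flat neighborhoods, large for sharp dihedral folds.
    return float(np.mean((1.0 - np.array(cos_vals)) ** 2))
```

On a flat patch of two coplanar triangles this returns 0; folding one triangle out of the plane makes the loss positive, which is the regularization effect being discussed.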
Thank you so much for your explanation.
I have another question about the principal point prediction and processing part: https://github.com/google/lasr/blob/29d8759354f853119276f41504d6b527fd5484e5/nnutils/mesh_net.py#L209-L217 I understand that the first two lines normalize the top-left corners of the cropped patch, but why is it necessary to compute the ppoints of the next frame from the current frame?
Also, in your paper you said the principal points are assumed to be at the center of the image, so why is it still necessary to predict the ppoints here? Can we use zeros for this?
why is it necessary to compute the ppoints of the next frame from the current frame?
We found that using a constant principal point for a given frame pair (when computing the flow loss) produces better camera poses than predicting two separate pps. One reason is that separate pps are too flexible: they allow changes in pp to explain the majority of the observed flow, instead of it being explained by camera motion.
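The point above can be illustrated with a toy pinhole projection. This is a hypothetical sketch, not LASR's actual code; `project` and `flow_from_pair` are made-up names, and the idea is simply that the pair's flow is rendered with the same pp for both frames:

```python
import numpy as np

def project(points, focal, pp):
    """Pinhole projection of Nx3 camera-space points with principal point pp."""
    return focal * points[:, :2] / points[:, 2:3] + pp

def flow_from_pair(points_t, points_t1, focal, pp_t):
    """Render flow for the pair (t, t+1) using the SAME principal point pp_t
    for both frames, so apparent motion must come from camera/object motion
    rather than from a shift in pp between the two frames."""
    return project(points_t1, focal, pp_t) - project(points_t, focal, pp_t)
```

With a shared pp, static points produce exactly zero flow; if each frame carried its own pp, the pp difference alone would inject a constant flow offset that the optimizer could exploit instead of adjusting the camera pose.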
Also, in your paper you said the principal points are assumed to be at the center of the image, so why is it still necessary to predict the ppoints here? Can we use zeros for this?
We could use a constant pp. But in practice that causes a non-overlapping rendering issue in certain cases: see https://github.com/gengshan-y/viser-release/issues/5
I see, thank you so much for your quick response!
Dear Authors,
Thank you so much for the great work. While reading your source code, I found a flatten loss here. This loss is not discussed in the paper, and it is also not well explained in the code. Can you explain what this loss is for? Thank you very much!
Best, Xianghui