facebookresearch / localrf

An algorithm for reconstructing the radiance field of a large-scale scene from a single casually captured video.
MIT License

How to render novel views that deviate from the training path? #20

Closed zhouqi97456 closed 1 year ago

zhouqi97456 commented 1 year ago

Hi, your paper mentions that localrf can render novel view images that deviate from the original camera path. How can I synthesize such novel view images? Thank you!!

ameuleman commented 1 year ago

Hi, I added a --render_from_file argument to load camera poses and render from them. It follows the NeRF synthetic format, but only uses transform_matrix. Rendered images and videos are written to ${LOG_DIR}/<render_from_file file basename>/. Once the model is optimized, you can render with the poses from ${POSE_FILE} using:

python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --render_only 1 --render_test 0 --render_path 0 --render_from_file ${POSE_FILE}
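
For reference, a minimal sketch of how such a pose file could be written. This assumes the usual NeRF synthetic transforms.json field names; since only transform_matrix is read here, the intrinsics fields are omitted, and the identity pose is just a placeholder.

    import json
    import numpy as np

    def write_pose_file(path, c2ws):
        """c2ws: iterable of 4x4 camera-to-world matrices."""
        frames = [{"transform_matrix": np.asarray(c2w).tolist()} for c2w in c2ws]
        with open(path, "w") as f:
            json.dump({"frames": frames}, f, indent=2)

    # Placeholder: a single identity pose.
    write_pose_file("poses.json", [np.eye(4)])
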
zhouqi97456 commented 1 year ago

Thank you very much for your answer! I tried to render images after rotating the camera poses 60 degrees relative to the original path (left-multiplying the R matrix in transform_matrix by a rotation matrix). As shown in the figure below, the left image is rendered along the original path, and the right image is rendered after rotating the camera 60 degrees around the y-axis.

[image: side-by-side renders, original path (left) vs. 60° y-axis rotation (right)]

The rendered image on the right looks heavily distorted. Do you know how to fix it? Thank you!

ameuleman commented 1 year ago

I would like to check on my end. Is this the first spline pose rotated 60° to the right?

zhouqi97456 commented 1 year ago

It is the fifth spline pose.
The changes I made in train.py are shown in the image below, and the rendered images suffer from large distortion. Please let me know if I'm going about this the wrong way.

[image: code changes made in train.py]

ameuleman commented 1 year ago

Hi, using 0:2 does not select the full rotation matrix. To rotate all views, you should use something like:

        c2ws[..., :3, :3] = torch.matmul(ro_M, c2ws[..., :3, :3])  
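
Putting the pieces together, a self-contained sketch of the edit (the placeholder poses and the way ro_M is built here are illustrative; only the last line corresponds to the actual fix):

    import numpy as np
    import torch

    # Placeholder: five identity poses standing in for the camera-to-world
    # matrices loaded in train.py.
    c2ws = torch.eye(4).unsqueeze(0).repeat(5, 1, 1)

    # 60-degree rotation around the y-axis.
    theta = np.deg2rad(60.0)
    ro_M = torch.tensor([
        [np.cos(theta),  0.0, np.sin(theta)],
        [0.0,            1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ], dtype=torch.float32)

    # Left-multiply the full 3x3 rotation block of every pose. Slicing 0:2
    # instead of :3 rewrites only two rows, leaving a non-orthonormal matrix,
    # which is what produced the distorted renders above.
    c2ws[..., :3, :3] = torch.matmul(ro_M, c2ws[..., :3, :3])
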
zhouqi97456 commented 1 year ago

Hi! It worked! Thank you very much for helping me solve this problem!!

But I found another problem: when I rotate the camera pose, the occlusion relationships between objects at different depths don't seem to change. It looks as if all the objects have simply undergone an equal translation.

As shown in the picture below, the right side is the result after rotation.

[image: renders before (left) and after (right) the rotation]

ameuleman commented 1 year ago

Hi, with rotation only and the camera location unchanged, we should not observe parallax. Did I understand your concern correctly?
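
To make the distinction concrete, a small illustrative sketch (not from the repository): the camera center is the translation column of the camera-to-world matrix, so rotating only the 3x3 block keeps the viewpoint fixed and no parallax can appear; moving the center is what changes occlusions.

    import torch

    c2w = torch.eye(4)  # placeholder camera-to-world pose

    # A pure rotation of the 3x3 block leaves the translation column, i.e.
    # the camera center, untouched: the viewpoint does not move, so the
    # relative arrangement of objects at different depths cannot change.
    center = c2w[:3, 3].clone()

    # To see parallax, translate the center as well, e.g. half a unit along
    # the camera's own x-axis (illustrative magnitude):
    c2w[:3, 3] += 0.5 * c2w[:3, 0]
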

zhouqi97456 commented 1 year ago

I understand!! Thank you very much!!

EastShineK commented 1 year ago

python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --render_only 1 --render_test 0 --render_path 0 --render_from_file ${POSE_FILE}

This outputs jpg images, right? I want a mesh or obj file as the result. How can I get that?

ameuleman commented 1 year ago

We do not provide a way to export a mesh.
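
For completeness, the usual workaround is to sample the trained model's density on a grid and run marching cubes; this is a generic sketch, not something localrf ships, and query_density is a hypothetical stand-in for evaluating the model's density at world-space points.

    import numpy as np
    from skimage import measure  # pip install scikit-image

    def extract_mesh(query_density, bounds=(-1.0, 1.0), res=128, level=10.0):
        """Generic marching cubes over a density grid. query_density is a
        hypothetical (M, 3) points -> (M,) densities function."""
        lin = np.linspace(bounds[0], bounds[1], res)
        xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
        pts = np.stack([xs, ys, zs], -1).reshape(-1, 3)
        sigma = query_density(pts).reshape(res, res, res)
        verts, faces, _, _ = measure.marching_cubes(sigma, level=level)
        # Map voxel indices back to world coordinates.
        verts = bounds[0] + verts * (bounds[1] - bounds[0]) / (res - 1)
        return verts, faces

    def save_obj(path, verts, faces):
        with open(path, "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for face in faces + 1:  # OBJ indices are 1-based
                f.write(f"f {face[0]} {face[1]} {face[2]}\n")

Note that for the unbounded, forward-moving captures this project targets, geometry extracted this way is likely to be rough at best.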