Closed: zhouqi97456 closed this issue 1 year ago.
Hi,
Added a --render_from_file argument to load camera poses and render from them. It follows the NeRF synthetic format, but only uses transform_matrix. Rendered images and videos are written to ${LOG_DIR}/<render_from_file file basename>/.
Once the model is optimized, you can render with poses from ${POSE_FILE} using:
python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --render_only 1 --render_test 0 --render_path 0 --render_from_file ${POSE_FILE}
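For reference, a minimal sketch of writing such a pose file in the NeRF synthetic convention. Only transform_matrix is read by --render_from_file; the placeholder trajectory and the filename render_poses.json are assumptions for illustration:

```python
import json
import numpy as np

# Build a list of 4x4 camera-to-world matrices (here: identity rotations
# translated along z as a placeholder trajectory).
frames = []
for i in range(3):
    c2w = np.eye(4)
    c2w[2, 3] = 0.1 * i  # hypothetical translation along z
    frames.append({"transform_matrix": c2w.tolist()})

# Only "transform_matrix" is used by --render_from_file; other
# NeRF-synthetic keys (e.g. "camera_angle_x") are not needed here.
with open("render_poses.json", "w") as f:
    json.dump({"frames": frames}, f, indent=2)
```

Pass this file as ${POSE_FILE} in the command above.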
Thank you very much for your answer! I tried to render images after rotating the camera poses 60 degrees relative to the original path (left-multiplying the R matrix in transform_matrix by a rotation matrix). As shown in the figure below, the left image is rendered along the original path, and the right image is rendered after rotating the camera by 60 degrees around the y-axis. The rendered image on the right looks heavily distorted. Do you know how to fix it? Thank you!
I would like to check on my end. Is this the first spline pose rotated 60° to the right?
It is the fifth spline pose.
The changes I made in train.py are shown in the following image, and the rendered images suffer from large distortion. Please let me know if I'm going about this the wrong way.
Hi,
Using 0:2 only selects the first two rows and columns, not the full 3x3 rotation matrix. To rotate all views, you should use something like:
c2ws[..., :3, :3] = torch.matmul(ro_M, c2ws[..., :3, :3])
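A self-contained numpy sketch of the same fix applied to every pose (the thread's torch.matmul line is the tensor equivalent; the 60-degree y-axis rotation here is just for illustration):

```python
import numpy as np

def rotate_c2ws_y(c2ws, degrees):
    """Left-multiply every camera-to-world rotation block by a y-axis
    rotation. c2ws: (N, 4, 4) array of camera-to-world matrices."""
    t = np.deg2rad(degrees)
    ro_M = np.array([
        [ np.cos(t), 0.0, np.sin(t)],
        [ 0.0,       1.0, 0.0      ],
        [-np.sin(t), 0.0, np.cos(t)],
    ])
    out = c2ws.copy()
    # Rotate the full 3x3 block; slicing only [..., 0:2, 0:2] would
    # drop the third row/column and corrupt the rotation.
    out[..., :3, :3] = ro_M @ out[..., :3, :3]
    return out
```

The translation column (out[..., :3, 3]) is left untouched, so the camera centers stay on the original path and only the viewing directions change.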
Hi! It worked! Thank you very much for helping me solve this problem!!
But I found another problem: when I rotate the camera pose, the occlusion relationships between objects at different depths do not seem to change. It looks as if all the objects have simply undergone the same translation.
As shown in the picture, the right image is the result after rotation.
Hi, with rotation only and the camera location unchanged, we should not observe any parallax. Did I understand your concern correctly?
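A small numeric check of why pure rotation produces no parallax: rotating about the camera center maps a ray through the center to another ray through the center, so two points at different depths on the same ray keep projecting to the same pixel; only translating the camera separates them:

```python
import numpy as np

def project(p):
    """Pinhole projection of a 3D point in camera coordinates."""
    return p[:2] / p[2]

# Two points at different depths on the same ray through the camera center.
near = np.array([0.2, 0.1, 1.0])
far = 3.0 * near  # same direction, three times the depth

# Pure rotation about the camera center (y-axis, 30 degrees).
t = np.deg2rad(30.0)
R = np.array([[np.cos(t), 0.0, np.sin(t)],
              [0.0, 1.0, 0.0],
              [-np.sin(t), 0.0, np.cos(t)]])

# Both points land on the same pixel before and after rotating,
# because R @ (3 * near) = 3 * (R @ near): no parallax appears.
same_before = np.allclose(project(near), project(far))
same_after = np.allclose(project(R @ near), project(R @ far))

# Translating the camera separates the two projections (parallax).
offset = np.array([0.5, 0.0, 0.0])
separated = not np.allclose(project(near - offset), project(far - offset))
```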
I understand !! Thank you very very much !!
python localTensoRF/train.py --datadir ${SCENE_DIR} --logdir ${LOG_DIR} --render_only 1 --render_test 0 --render_path 0 --render_from_file ${POSE_FILE}
The result is JPG images, right? I want a mesh or an OBJ file as the result instead. How can I get that?
We do not provide a way to export a mesh.
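Since no mesh export is provided, one generic workaround (an assumption, not the authors' method) is to sample the radiance field's density on a regular grid and export the occupied region. The sketch below thresholds a placeholder density grid and writes voxel centers as OBJ vertices; this is only a point cloud, not a true mesh (triangles would need marching cubes, e.g. skimage.measure.marching_cubes):

```python
import numpy as np

def density_grid_to_obj(density, threshold, path, scale=1.0):
    """Write voxel centers whose density exceeds a threshold as OBJ
    vertex lines ("v x y z"). Produces a point cloud, not a surface."""
    idx = np.argwhere(density > threshold)  # (M, 3) occupied voxel indices
    centers = (idx + 0.5) * scale           # voxel centers in grid units
    with open(path, "w") as f:
        for x, y, z in centers:
            f.write(f"v {x} {y} {z}\n")
    return len(idx)

# Hypothetical stand-in for a density grid sampled from the model;
# querying the actual field is not part of this repo's public interface.
grid = np.zeros((8, 8, 8))
grid[3:5, 3:5, 3:5] = 10.0
n = density_grid_to_obj(grid, threshold=1.0, path="scene_points.obj")
```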
Hi, your paper mentions that local-nerf can render novel-view images that deviate from the original path. I want to ask how I can achieve this novel-view synthesis. Thank you!!