Closed: Khoa-NT closed this issue 1 year ago.
Hi, you are correct. Also, (2) is each radiance field's pose and (3) contains the estimated camera poses.
Hi, thank you for the clarification. Then I can use (3) as camera poses for training another NeRF model.
I'm curious about (2): are these the local camera poses for the local radiance fields in Sec. 3.3 of the paper?
Hi, Yes, specifically the t_j in eq. (7).
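Since (3) is a set of estimated camera poses, reusing it for another NeRF model mostly means parsing the file. A minimal sketch, assuming the standard NeRF-style `transforms.json` layout (a top-level `"frames"` list whose entries carry a `"file_path"` and a 4x4 `"transform_matrix"`); the demo file written here is hypothetical:

```python
import json
import numpy as np

def load_poses(path):
    """Load 4x4 camera-to-world matrices from a NeRF-style transforms.json."""
    with open(path) as f:
        meta = json.load(f)
    return {frame["file_path"]: np.asarray(frame["transform_matrix"], dtype=np.float64)
            for frame in meta["frames"]}

# Tiny self-contained demo: write a one-frame file, then read it back.
demo = {"frames": [{"file_path": "images/0000.jpg",
                    "transform_matrix": np.eye(4).tolist()}]}
with open("transforms_demo.json", "w") as f:
    json.dump(demo, f)

poses = load_poses("transforms_demo.json")
print(poses["images/0000.jpg"][:3, 3])  # translation column = camera position in world coordinates
```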
@ameuleman @Khoa-NT How do we generate the transforms.json for our own dataset? Is there a tutorial?
Hi,
The `transforms.json` files in our dataset correspond to COLMAP poses. We run COLMAP using MultiNeRF's script.
Note that `transforms.json` is not needed to optimize LocalRF, except with the argument `--with_preprocessed_poses 1`.
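In other words, the flag is only needed when COLMAP poses should be fed into the optimization. A hedged sketch of how one might build the training command conditionally; the entry-point name `localrf_main.py` and the `--datadir` argument are assumptions about the repo's CLI, while `--with_preprocessed_poses 1` is taken from the comment above:

```python
import os

def build_train_cmd(scene_dir):
    """Assemble a LocalRF training command, adding --with_preprocessed_poses
    only when a transforms.json (COLMAP poses) exists in the scene directory."""
    # NOTE: "localrf_main.py" and "--datadir" are assumed names; check the repo.
    cmd = ["python", "localrf_main.py", "--datadir", scene_dir]
    if os.path.exists(os.path.join(scene_dir, "transforms.json")):
        # COLMAP poses are present, so ask LocalRF to use them.
        cmd += ["--with_preprocessed_poses", "1"]
    return cmd

print(build_train_cmd("hike_scenes/forest1"))
```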
Hi, thanks for the quick reply!
Oh right, a conversion is required after COLMAP. Instant-NGP's script can generate transforms.json from COLMAP outputs.
Thanks a lot!
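For context, the core of that COLMAP-to-transforms.json conversion is inverting COLMAP's world-to-camera poses (quaternion + translation) into camera-to-world matrices. A minimal sketch of that step, which omits the axis flips from OpenCV to the NeRF/OpenGL convention that Instant-NGP's real script additionally applies:

```python
import numpy as np

def qvec2rotmat(qw, qx, qy, qz):
    """Rotation matrix from a COLMAP quaternion (w, x, y, z order)."""
    return np.array([
        [1 - 2*qy*qy - 2*qz*qz, 2*qx*qy - 2*qz*qw,     2*qx*qz + 2*qy*qw],
        [2*qx*qy + 2*qz*qw,     1 - 2*qx*qx - 2*qz*qz, 2*qy*qz - 2*qx*qw],
        [2*qx*qz - 2*qy*qw,     2*qy*qz + 2*qx*qw,     1 - 2*qx*qx - 2*qy*qy]])

def colmap_to_c2w(qvec, tvec):
    """COLMAP stores world-to-camera (R, t); invert to camera-to-world."""
    R = qvec2rotmat(*qvec)
    c2w = np.eye(4)
    c2w[:3, :3] = R.T                      # inverse rotation
    c2w[:3, 3] = -R.T @ np.asarray(tvec)   # camera center in world frame
    return c2w

# Identity rotation, camera at the world origin:
print(colmap_to_c2w([1, 0, 0, 0], [0, 0, 0]))
```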
Hi, thank you for sharing the code of this amazing research. Can I ask what the difference (or use case) is between each of the 3 `transforms.json` files present after training finishes? It confused me because there is also another one in the dataset:
`hike_scenes_localrf/forest1/transforms.json` (4)
I guess (4) is created from COLMAP and (1) is used for rendering the video.