OrangeSodahub opened 6 months ago
AFAIK:
`transforms.json`: this file records camera poses, image resolution, and the camera model. It is required for `ns-train` with the nerfstudio-data dataparser.
Export camera from web viewer: this file records the render-video trajectory, so you can create a custom video with the camera position, PoV, and trajectory recorded in it. It is required for `ns-render`.
You cannot use `transforms.json` for `ns-render`, and vice versa.
@ichsan2895 Thanks for your reply. That's the problem: both of these files contain camera poses, so how do I convert one to the other? Is that impossible?
Why do you want to convert it? You cannot use `transforms.json` for `ns-render`, and vice versa, since they serve different functions.
Because I want to inspect the camera poses by loading them manually, and then draw my own cameras based on the existing ones. I'll look into `ns-render` to figure it out. Thanks. But do you know what the difference is?
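For reference, the per-frame camera-to-world poses can be read straight out of `transforms.json`; here is a minimal sketch, assuming the standard nerfstudio layout (a `frames` list whose entries carry `file_path` and `transform_matrix`):

```python
import json

import numpy as np


def load_poses(path="transforms.json"):
    """Return {file_path: 4x4 camera-to-world matrix} from a nerfstudio transforms.json."""
    with open(path) as f:
        meta = json.load(f)
    return {
        frame["file_path"]: np.array(frame["transform_matrix"], dtype=float)
        for frame in meta["frames"]
    }
```

The poses can then be plotted (e.g. as camera frusta) to compare against the exported camera path.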
I am looking for a solution similar to yours. I am trying to 1) create a camera path from the validation images, 2) load the same view in Blender using the NerfStudio Blender plugin, 3) import a mesh into Blender, and 4) generate the same views as the validation images, but from the imported mesh. Have you had any luck with that?
We can find `dataparser_transforms.json` in the output directory, containing the 4x4 global transformation matrix $T$ and the scale factor $s$. Given $T$ and $s$, we can convert the `transform_matrix` $A$ in `transforms.json` to the `matrix` $B$ in `camera_path.json` by:

$$ B = (TA) \cdot s $$
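The formula above can be sketched in Python. Note this is a literal transcription of the commenter's formula, not nerfstudio API; the 3x4-to-4x4 padding reflects the usual shape of the `transform` entry in `dataparser_transforms.json`, and the function names are illustrative:

```python
import numpy as np


def pad_to_4x4(transform_3x4):
    """Pad a 3x4 transform (as stored in dataparser_transforms.json) with [0, 0, 0, 1]."""
    return np.vstack([np.asarray(transform_3x4, dtype=float),
                      np.array([0.0, 0.0, 0.0, 1.0])])


def transforms_to_camera_path(A, T, s):
    """Apply B = (T @ A) * s, per the comment above.

    A: 4x4 transform_matrix from transforms.json
    T: 4x4 global transform (padded) from dataparser_transforms.json
    s: scale factor from dataparser_transforms.json
    """
    return (T @ A) * s
```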
I've figured it out. Will post some guidelines later when I'm available.
@OrangeSodahub im trying to extract camera poses from transforms.json. how did you do it?
I've figured it out. Will post some guidelines later when I'm available.
I'm also interested in this matter!
Hi guys, all we need to know is that there are three possible coordinate conventions for camera extrinsics: OpenCV (the COLMAP convention), OpenGL, and the one used in `transforms.json` files, which I'll call 'nerfstudio' for simplicity. So everything can be handled through a cascaded conversion: nerfstudio - opengl - opencv. For example:
```python
import numpy as np
import torch


# convert between the OpenCV/COLMAP and nerfstudio conventions
def colmap_to_nerfstudio(c2w):
    c2w[..., 0:3, 1:3] *= -1                   # flip the y and z columns
    c2w = c2w[..., np.array([1, 0, 2, 3]), :]  # swap the first two rows
    c2w[..., 2, :] *= -1                       # flip the z row
    return c2w


# convert between the OpenGL and OpenCV conventions (flip the y and z axes)
def opengl_to_opencv(c2w):
    transform = np.array([[1, 0, 0], [0, -1, 0], [0, 0, -1]])
    if isinstance(c2w, torch.Tensor):
        transform = torch.from_numpy(transform).to(c2w)
    c2w[..., :3, :3] = c2w[..., :3, :3] @ transform
    return c2w
```
So, back to the problem of extracting a camera path from the `transforms.json` file: note that the camera path here is the one we will use in nerfstudio's web GUI. So we just need to do the conversion from nerfstudio to opencv, and then from opencv to opengl.
All of the above was validated by hand; I didn't use any existing tools in nerfstudio. Remember to determine your source format and your target format according to the usage.
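That last nerfstudio-to-opengl step can be sketched concretely by inverting the conversions above. These inverse helpers (and their names) are my own derivation, not nerfstudio API, so treat them as an assumption to validate against your own data:

```python
import numpy as np


# invert colmap_to_nerfstudio above: nerfstudio -> OpenCV/COLMAP
def nerfstudio_to_opencv(c2w):
    c2w = np.array(c2w, dtype=float)
    c2w[..., 2, :] *= -1                       # undo the final row flip
    c2w = c2w[..., np.array([1, 0, 2, 3]), :]  # the row swap is its own inverse
    c2w[..., 0:3, 1:3] *= -1                   # undo the column sign flip
    return c2w


# diag(1, -1, -1) is its own inverse, so OpenCV -> OpenGL is the same axis flip
def opencv_to_opengl(c2w):
    c2w = np.array(c2w, dtype=float)
    c2w[..., :3, :3] = c2w[..., :3, :3] @ np.diag([1.0, -1.0, -1.0])
    return c2w


def nerfstudio_to_camera_path(c2w):
    """Cascade: nerfstudio -> opencv -> opengl, for poses fed to the web GUI."""
    return opencv_to_opengl(nerfstudio_to_opencv(c2w))
```

A quick sanity check is the round trip: converting an OpenCV pose to nerfstudio and back should reproduce the original matrix.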
How about using `ns-render dataset`?
Then should we also apply the transformation matrix in `dataparser_transforms.json`? Should we apply it before the nerfstudio-to-opencv conversion, or somewhere else? Thanks!
Well, `ns-export camera` gets the `transforms.json` file. And if I manually export a camera path through the `Export` button in the web viewer to save camera paths, the saved file looks different. So the question is: how do we explicitly transform between these two versions, so that camera paths from the `transforms.json` file can be loaded through the `Load` button in the camera panel? Is there any script inside nerfstudio to do so? I didn't find one. Thanks!