dongyh20 closed this issue 11 months ago
How can I get the extrinsic matrix for each view?
Hi @dongyh20,
Sorry for the confusion! The `transform_matrix` is what I use in the instant-ngp camera coordinates; it gets re-assigned so that it aligns with the camera coordinate convention of the instant-ngp coordinate system.
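For anyone hitting a similar mismatch: a common source of this kind of convention difference is the orientation of the camera's y and z axes. As an illustrative sketch (this is not the repo's code, and the function name is made up), converting a camera-to-world matrix between an OpenGL/instant-ngp-style convention (camera looks down -z, y up) and an OpenCV-style convention (camera looks down +z, y down) amounts to flipping those two camera axes:

```python
import numpy as np

def opengl_to_opencv_c2w(c2w_gl):
    """Illustrative sketch (hypothetical helper, not the repo's code):
    flip the y and z camera axes of a 4x4 camera-to-world matrix to
    move between OpenGL-style and OpenCV-style camera conventions."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    # Right-multiplying changes the camera's local axes, leaving the
    # camera position (last column's translation) untouched.
    return np.asarray(c2w_gl, dtype=np.float64) @ flip
```

Applying the flip twice recovers the original matrix, which is a quick sanity check that it is an involution.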
Hi @SteveJunGao @dongyh20 @zgojcic, I am new to 3D and wonder whether the `transform_matrix` can be used as `camera_mv_bx4x4` in the following code?
```python
class NeuralRender(Renderer):
    def __init__(self, device='cuda', camera_model=None):
        super(NeuralRender, self).__init__()
        self.device = device
        self.ctx = None
        self.projection_mtx = None
        self.camera = camera_model

    def render_mesh(
            self,
            mesh_v_pos_bxnx3,
            mesh_t_pos_idx_fx3,
            camera_mv_bx4x4,
            mesh_v_feat_bxnxd,
            resolution=256,
            spp=1,
            device='cuda',
            hierarchical_mask=False
    ):
```
Hi @yjcaimeow,

No, `transform_matrix` cannot be used as `camera_mv_bx4x4` in the code you pasted, because they are not in the same camera coordinate system (we assume an OpenGL camera when rendering the shapes). You can check this line that computes the `camera_mv_bx4x4` matrix from the camera angles.
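As a rough illustration of what "computing the matrix from the camera angles" typically looks like (a minimal sketch under OpenGL look-at conventions; the function name and parameters are assumptions, not the repo's actual code):

```python
import numpy as np

def camera_mv_from_angles(azimuth, elevation, radius=2.0):
    """Hypothetical sketch: build a 4x4 world-to-camera (model-view)
    matrix from spherical camera angles, OpenGL-style (camera looks
    down -z, y is up). Not the repo's exact implementation."""
    # Camera position on a sphere of the given radius around the origin.
    eye = np.array([
        radius * np.cos(elevation) * np.sin(azimuth),
        radius * np.sin(elevation),
        radius * np.cos(elevation) * np.cos(azimuth),
    ])
    # Look-at basis: forward points from the eye toward the origin.
    forward = -eye / np.linalg.norm(eye)
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    # View matrix: rows are the camera axes, translation is -R @ eye.
    mv = np.eye(4)
    mv[0, :3], mv[1, :3], mv[2, :3] = right, up, -forward
    mv[:3, 3] = -mv[:3, :3] @ eye
    return mv
```

A quick sanity check is that the matrix maps the camera position to the origin of camera space.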
Closing this issue as we haven't heard back for two months; please feel free to reopen it if you still encounter the problem!
I'm wondering if the `transform_matrix` in `transforms.json` in ShapeNet is the extrinsic matrix? It seems that `rt` is reassigned in lines 259-260 of `render_shapenet.py`. I've checked that `matrix_world` in Blender is used to transform from object space to world space, so I'm confused.
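On the `matrix_world` confusion: for a Blender *camera* object, `matrix_world` maps camera space to world space (camera-to-world), whereas the extrinsic matrix is the inverse mapping (world-to-camera). A minimal sketch of that relationship (the function name is made up; this is not the repo's code, and it ignores any additional axis-convention flips the exporter may apply):

```python
import numpy as np

def extrinsic_from_matrix_world(cam_matrix_world):
    """Hypothetical sketch: invert a camera's 4x4 camera-to-world
    matrix to obtain the world-to-camera extrinsic matrix."""
    c2w = np.asarray(cam_matrix_world, dtype=np.float64)
    R = c2w[:3, :3]   # camera orientation in world coordinates
    t = c2w[:3, 3]    # camera position in world coordinates
    w2c = np.eye(4)
    w2c[:3, :3] = R.T          # inverse of a rotation is its transpose
    w2c[:3, 3] = -R.T @ t
    return w2c
```

Multiplying the result against the original `matrix_world` should give the identity, which is an easy way to verify the inversion.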