Closed · Kulbear closed this 3 years ago
Hi @Kulbear , thanks for your interest in our project!
We use the transform_to_world function to project pixels with a depth value into world space. In the same file you can also find the inverse transform, transform_to_camera_space, which maps 3D world points into camera space. To obtain the final pixel coordinates, divide the first two coordinates by the third, which is the depth in camera space. As an example, we use the function here to get the depth value in camera space.
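A minimal sketch of this world-to-pixel projection, assuming a 4x4 world-to-camera extrinsic matrix and a 3x3 intrinsic matrix (the function and argument names here are illustrative, not the repository's actual API):

```python
import numpy as np

def project_to_pixels(points_world, world_mat, camera_mat):
    """Map 3D world points to pixel coordinates (illustrative sketch).

    points_world: (N, 3) array of world-space points
    world_mat:    (4, 4) extrinsic matrix (world -> camera space)
    camera_mat:   (3, 3) intrinsic matrix
    Returns (N, 2) pixel coordinates and (N,) camera-space depths.
    """
    n = points_world.shape[0]
    # Homogeneous coordinates: (N, 4)
    points_h = np.concatenate([points_world, np.ones((n, 1))], axis=1)
    # World -> camera space; keep x, y, z (z is the depth)
    points_cam = (world_mat @ points_h.T).T[:, :3]
    # Apply intrinsics, then divide the first two coordinates by the
    # third, which is the depth in camera space
    pix = (camera_mat @ points_cam.T).T
    depth = pix[:, 2:3]
    return pix[:, :2] / depth, depth.squeeze(-1)
```

With identity extrinsics and intrinsics, a point at (2, 4, 2) projects to pixel (1, 2) with depth 2, since both coordinates are divided by the camera-space depth.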
Good luck with your research!
@m-niemeyer Thank you for your great work! Can you tell me which configuration generates the predicted camera_mat/world_mat/Scale_mat along with the meshes?
Hello,
Thanks for your great work on this problem! Is it also possible to get the projection matrix that maps the output mesh back to the 2D image?
Thanks!