Hi,
I first tried normalizing my data to fit inside the bbox by modifying the extrinsic and intrinsic matrices of my cameras directly before creating the transforms.json files, and this worked well. I am now trying to preprocess my data so as to use the scale and offset parameters of transforms.json, so that the data is normalized inside your code and the output mesh is automatically converted back to ground-truth coordinates. This also works: the output meshes are identical to before.

However, rendering of the prediction mesh no longer seems to work; I just get a white image (output/images/XXXX/frame_X_mesh.png). Would you have an idea why?

It would be very useful to have this working so I can check my results faster (especially when training the dynamic version, i.e. being able to check the mesh.gif).
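For reference, this is roughly how I write the two parameters during preprocessing (a minimal sketch; the key names "scale" and "offset" and the normalization convention x_norm = scale * (x + offset) are assumptions on my side, based on what I understood from your loader):

```python
import json
import numpy as np

# Load the transforms file produced by my conversion script.
with open("transforms.json") as f:
    meta = json.load(f)

# Camera centers are the translation part of the camera-to-world matrices.
centers = np.stack(
    [np.asarray(frame["transform_matrix"])[:3, 3] for frame in meta["frames"]]
)

# Fit all cameras inside the unit bbox, assuming x_norm = scale * (x + offset).
lo, hi = centers.min(axis=0), centers.max(axis=0)
scale = 1.0 / float((hi - lo).max())
offset = (-lo).tolist()

meta["scale"] = scale
meta["offset"] = offset

with open("transforms.json", "w") as f:
    json.dump(meta, f, indent=2)
```

With these values the exported mesh comes back in ground-truth coordinates as expected; it is only the frame_X_mesh.png rendering that turns white.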
Thanks for your help!