Hello Zielon,

I am capturing a dataset of multi-camera videos and want to obtain the FLAME parameters for each camera separately, which is why I run this code on each video individually. The model produces good results when it uses PyTorch3D's PerspectiveCameras to generate the initial R and T and then optimizes them over the training process. I tried changing the code to accept R and T, as well as the principal point and focal length, from the already-calibrated cameras, and to remove the optimization of R and T since they are known and fixed. However, the output is not as expected: there are a lot of differences, and in some frames the FLAME mesh is not even completely formed.
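For reference, this is roughly how I set up the fixed camera (a minimal sketch, not the tracker's exact code; `R_calib`, `tvec_calib`, `K`, and `image_size` stand in for my calibration outputs, and the FLAME parameter sizes are just placeholders):

```python
import torch
from pytorch3d.utils import cameras_from_opencv_projection

# Pre-calibrated, OpenCV-convention extrinsics and intrinsics
# (placeholder values; in my code these come from the calibration files).
R_calib = torch.eye(3)[None]                   # (1, 3, 3) world-to-camera rotation
tvec_calib = torch.tensor([[0.0, 0.0, 1.0]])   # (1, 3) translation
K = torch.tensor([[[1200.0,    0.0, 256.0],
                   [   0.0, 1200.0, 256.0],
                   [   0.0,    0.0,   1.0]]])  # (1, 3, 3) camera matrix
image_size = torch.tensor([[512, 512]])        # (1, 2) as (height, width)

# cameras_from_opencv_projection handles the OpenCV -> PyTorch3D
# convention change (flipped x/y axes, screen -> NDC coordinates).
cameras = cameras_from_opencv_projection(R_calib, tvec_calib, K, image_size)

# The camera stays fixed: R and T are not registered with the optimizer;
# only the FLAME parameters are (sizes depend on the FLAME config).
flame_params = {
    "shape": torch.zeros(1, 300, requires_grad=True),
    "expression": torch.zeros(1, 100, requires_grad=True),
}
optimizer = torch.optim.Adam(flame_params.values(), lr=1e-3)
```

I convert through `cameras_from_opencv_projection` here because my R, T, and K come from an OpenCV-style calibration; if the tracker expects the parameters in a different convention, that may be where my mistake is.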
Could you tell me how I can correctly include the pre-calibrated camera parameters in the training phase?
Alternatively, is there a way to use PyTorch3D's PerspectiveCameras to generate the FLAME parameters as usual and then rotate the mesh based on the pre-calibrated R and T? (I tried this approach, but the FLAME mesh does not align with the face in the original video.)
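For clarity, this is the kind of transform I apply in that second approach (a minimal sketch; `apply_extrinsics` is just an illustrative helper, and I assume OpenCV-convention extrinsics):

```python
import torch

def apply_extrinsics(verts: torch.Tensor, R: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Move FLAME vertices (V, 3) into a calibrated camera frame.

    OpenCV uses the column-vector convention x_cam = R @ x_world + t,
    which for row-stacked vertices is verts @ R.T + t. PyTorch3D's own
    cameras instead use the row-vector convention verts @ R + T with
    flipped x/y axes, so mixing the two conventions seems like a
    plausible cause of the misalignment I am seeing.
    """
    return verts @ R.T + t
```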
Thanks in advance.
With regards,
Vippin