Prior to training, several coordinate-system transformations are applied, which means a mesh extracted from the NeRF is no longer in the world coordinates of the original camera system. I would like to undo this series of transformations so that I can put the mesh back into the original world coordinates. I think I know how to do this, but it doesn't work...
Specifically, I am using the `load_llff_data` code with `factor=1`, `spherify=True`, and `recenter=False`. As far as I can tell, this means there is a global scale of the camera translations by `sc = 1./(bds.min() * bd_factor)`, and then `spherify_poses` applies a rigid recentring transform (the inverse of the `c2w` it builds around the point of minimum distance to the camera rays), followed by another scale that normalises the cameras onto a unit sphere.
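For concreteness, here is a minimal sketch of that forward chain, modeled on the reference `load_llff_data` / `spherify_poses` code. The function name, the simplified minimum-line-distance solve, and the pose layout (`poses[:, :3, 2]` = viewing axis, `poses[:, :3, 3]` = camera centre) are my own assumptions; check the details against your local copy:

```python
import numpy as np

def spherify_like_transform(poses, bds, bd_factor=0.75):
    """Mimic the pose preprocessing of load_llff_data with recenter=False,
    spherify=True.  Returns (s1, S, s2) such that a world point p maps to
    NeRF coordinates as  p_nerf = s2 * (S[:3,:3] @ (s1 * p) + S[:3,3]).
    A sketch of the reference NeRF code, not a drop-in copy."""
    # 1) global scale so the near bound is roughly 1/bd_factor
    s1 = 1.0 / (bds.min() * bd_factor)
    t = poses[:, :3, 3] * s1                       # scaled camera centres
    d = poses[:, :3, 2]
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)

    # 2) point of minimum distance to all camera rays (as in min_line_dist);
    #    with unit directions the per-ray projector A_i is idempotent, so
    #    solving sum(A_i) x = sum(A_i t_i) gives the same least-squares point
    A = np.eye(3)[None] - d[:, :, None] * d[:, None, :]
    b = (A @ t[:, :, None]).squeeze(-1)
    center = np.linalg.solve(A.sum(0), b.sum(0))

    # 3) build the c2w frame used by spherify_poses and invert it
    up = (t - center).mean(0)
    vec0 = up / np.linalg.norm(up)
    vec1 = np.cross([0.1, 0.2, 0.3], vec0)
    vec1 /= np.linalg.norm(vec1)
    vec2 = np.cross(vec0, vec1)
    c2w = np.eye(4)
    c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = vec1, vec2, vec0, center
    S = np.linalg.inv(c2w)                         # rigid recentring transform

    # 4) final scale so the cameras sit on a unit sphere (rad -> 1)
    t_reset = (S[:3, :3] @ t.T).T + S[:3, 3]
    s2 = 1.0 / np.sqrt((t_reset ** 2).sum(-1).mean())
    return s1, S, s2
```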
On top of that, there is the scale and translation introduced by the conversion to voxel coordinates in marching cubes. Undoing all of this almost works: the scale and orientation look right, but there is always a significant difference in translation.
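For comparison, this is the inverse chain I would expect, applying each step in reverse order. The grid bounds `vmin`/`vmax`, resolution `n`, and the `s1`/`S`/`s2` quantities from the loader are assumptions about your setup. Two common translation-only culprits with marching cubes: dividing the grid extent by `n` instead of `n - 1`, or forgetting to add `vmin` back at all.

```python
import numpy as np

def mesh_to_world(verts, vmin, vmax, n, s1, S, s2):
    """Map marching-cubes vertices (voxel index units) back to the original
    camera world frame by inverting each transform in reverse order.
    A sketch under the assumptions stated above."""
    vmin = np.asarray(vmin, dtype=float)
    vmax = np.asarray(vmax, dtype=float)
    # 1) voxel indices -> NeRF coordinates of the sampled grid;
    #    note the (n - 1) divisor: vertex index i sits at vmin + i * step
    p = vmin + verts * (vmax - vmin) / (n - 1)
    # 2) undo the unit-sphere radius normalisation
    p = p / s2
    # 3) undo the rigid spherify transform (S maps world -> NeRF, so invert)
    Sinv = np.linalg.inv(S)
    p = (Sinv[:3, :3] @ p.T).T + Sinv[:3, 3]
    # 4) undo the bd_factor scale
    return p / s1
```

A quick round-trip check (forward transform a few points, then map them back) is a good way to isolate which step eats the translation.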
Has anyone managed to undo this series of transformations? Or is there an easier way of directly calculating the transformation to undo them?