Our method can be applied to the reconstruction of 360-degree images, because its essence is view interpolation. The poor results may be due to poor depth predicted by MVS (caused by the small overlap between input images), or to a mismatch in camera conventions. Our extrinsics are OpenCV-style camera-to-world matrices, i.e., the COLMAP format: +Z is the camera look vector, +X is the camera right vector, and -Y is the camera up vector. Providing more visualizations (input and output) would help us analyze this problem better.
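To illustrate the convention, here is a minimal NumPy sketch (mine, not code from this repo): it labels the camera axes of an OpenCV-style camera-to-world matrix, and shows the usual axis-flip fix assuming your source poses happen to be OpenGL/Blender-style.

```python
import numpy as np

# Sketch of an OpenCV-style camera-to-world matrix: the columns of the
# rotation block are the camera axes expressed in world coordinates.
c2w = np.eye(4)      # placeholder; substitute a real 4x4 extrinsic
right = c2w[:3, 0]   # +X: camera right vector
down  = c2w[:3, 1]   # +Y points down, i.e. -Y is the camera up vector
look  = c2w[:3, 2]   # +Z: camera look (forward) vector

# Common convention mismatch: OpenGL/Blender poses use -Z look and +Y up.
# Flipping the Y and Z camera axes converts them to the OpenCV convention.
c2w_gl = np.eye(4)   # placeholder OpenGL-style pose
c2w_cv = c2w_gl @ np.diag([1.0, -1.0, -1.0, 1.0])
```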
Does your code support a mixture of lenses and image dimensions (w × h)?
I saw that 5 cameras were parsed, so I think it does; I'm just trying to check what else might be wrong apart from what you suggested above.
The camera intrinsics for the data we use are in the following format: [fx, 0, w//2; 0, fy, h//2; 0, 0, 1]. Lens distortion is not taken into account.
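As a sanity check when preparing a custom dataset, a small sketch of that matrix in NumPy (the helper name is mine, not from the codebase):

```python
import numpy as np

def make_intrinsics(fx: float, fy: float, w: int, h: int) -> np.ndarray:
    """Pinhole intrinsics with the principal point at the image center,
    matching the [fx, 0, w//2; 0, fy, h//2; 0, 0, 1] layout above.
    Distortion is assumed to be zero, so images should be undistorted
    beforehand."""
    return np.array([
        [fx,  0.0, w // 2],
        [0.0, fy,  h // 2],
        [0.0, 0.0, 1.0],
    ])

# Example: one K per camera, so mixed resolutions each get their own matrix.
K = make_intrinsics(fx=1111.0, fy=1111.0, w=1920, h=1080)
```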
Thanks for the updated readme and demo for custom datasets!
The results are poor. My dataset is an inward-facing 360° ring of cameras around an object, not simply looking at the object from one direction while moving the camera around on a plane.
Is this code capable of producing results from images taken fully around an object?