MikeJPelton opened this issue 10 months ago
Hi, this is a good question. We didn't assume forward-facing cameras, but we also didn't test on inward-facing datasets. I think our method can still work with an inward-facing camera array; for example, the Neural 3D dataset is not a purely forward-facing dataset.
It seems like there's an assumption that the cameras are all forward of the subject in Z, with Z being the central axis of the scene. You're right that the cameras don't all have to look along -Z, but the concept of "depths" seems to be pretty central to the code. I've tried various synthetic datasets, projecting meshes from virtual camera arrays in assorted formats, and as soon as any cameras start to wrap around the subject even slightly, the results get very foggy! It sounds as though this is coming as a surprise to you, and I could of course be doing something wrong anyway. Either way, the approach is very promising indeed, so I'll carry on digging.
For a synthetic dataset, the extrinsics R and T are important. I know there is a recent update to 3D GS.
Have you tried that?
Turns out everything needs to be in positive Z!!
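For anyone else hitting this: a quick way to check the positive-Z requirement is to transform the subject's world-space points into each camera's frame and verify the depth sign. This is just a sketch under the common OpenCV/COLMAP convention `X_cam = R @ X_world + T` (the repo's actual convention may differ, so treat the helper name and convention as assumptions):

```python
import numpy as np

def camera_depths(points_world, R, T):
    """Camera-space Z (depth) of world points under extrinsics [R|T].

    Assumes the OpenCV/COLMAP convention X_cam = R @ X_world + T,
    with +Z pointing along the camera's viewing direction.
    points_world: (N, 3) array of world-space points.
    """
    return (R @ points_world.T + T[:, None])[2]

# Toy check: a camera at the origin looking down +Z sees a point
# 5 units in front of it at positive depth.
R = np.eye(3)
T = np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0]])

# Every camera in the array should report depth > 0 for the subject;
# any camera that has wrapped past the subject will report depth < 0,
# which matches the "foggy" failure described above.
assert np.all(camera_depths(pts, R, T) > 0)
```

Running this check over all cameras in a synthetic array before training should flag exactly the wrap-around cameras that break the depth assumption.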
Many thanks for this excellent piece of work. I notice you seem to be assuming forward-facing cameras and am trying to work out what would be needed to also allow for "looking inward" datasets where the cameras surround the subject. If you have any suggestions or guidance I would be very grateful!