Closed shi-yan closed 3 years ago
The initialization of the RGB camera poses relative to the estimated depth poses (and hence relative to the initial object reconstruction) should be as good as possible. The alignment does not need to be perfect (as the color poses are co-optimized), but in my opinion some sort of synchronization between the two cameras is therefore needed. If you have a synchronization/association between depth and RGB frames and you also know the calibration of the (fixed) color camera intrinsics, then it should work in theory.
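As a minimal sketch of such a depth/RGB association (assuming you have per-frame timestamps from both cameras; the function name and threshold are my own, not part of this repo): each depth frame is matched to the nearest RGB frame in time, and pairs farther apart than a maximum time difference are rejected.

```python
def associate(depth_stamps, rgb_stamps, max_dt=0.02):
    """Pair each depth timestamp with the nearest RGB timestamp.

    Returns (depth_ts, rgb_ts) pairs whose timestamps differ by at
    most max_dt seconds; unmatched depth frames are dropped.
    """
    rgb_sorted = sorted(rgb_stamps)
    pairs = []
    for d in sorted(depth_stamps):
        # nearest RGB timestamp to this depth timestamp
        best = min(rgb_sorted, key=lambda r: abs(r - d))
        if abs(best - d) <= max_dt:
            pairs.append((d, best))
    return pairs

print(associate([0.00, 0.05, 0.10], [0.01, 0.06, 0.30]))
# → [(0.0, 0.01), (0.05, 0.06)]  (the 0.10 depth frame has no RGB match)
```

If the two cameras are not hardware-synchronized, a small constant time offset between their clocks may also need to be estimated and subtracted before matching.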
I notice the color images in the demo datasets have the resolution 1296×968. In my data, I just have color images at 640×480, which is the same as the depth images. I want to know whether I could use my data. @robmaier
You can technically of course use your data, as long as you also use the correct corresponding camera intrinsics. It is quite likely, though, that the low-resolution color images will not exhibit enough level of detail for a high-quality refinement ... :/
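For "correct corresponding camera intrinsics": if you only have a calibration at a different resolution, the pinhole parameters (fx, fy, cx, cy) scale linearly with image size for a plain resize (no crop). A hedged sketch, with made-up example numbers that are not from the demo datasets:

```python
def scale_intrinsics(fx, fy, cx, cy, old_wh, new_wh):
    """Rescale pinhole intrinsics from resolution old_wh to new_wh.

    Assumes a simple resize of the image (no cropping); focal lengths
    and principal point scale with the per-axis resolution ratio.
    """
    sx = new_wh[0] / old_wh[0]
    sy = new_wh[1] / old_wh[1]
    return fx * sx, fy * sy, cx * sx, cy * sy

# e.g. a (hypothetical) calibration at 1296x968 adapted to 640x480 images
print(scale_intrinsics(1170.0, 1170.0, 648.0, 484.0, (1296, 968), (640, 480)))
```

Note that 1296:968 and 640:480 have slightly different aspect ratios, so if the sensor crops rather than resizes between modes, the principal point has to be adjusted by the crop offset instead.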
Hello there,
I hope to do 3D reconstruction using your work with a depth camera. My device can capture depth images but not RGB images. I could compensate for that by attaching a normal camera to do the RGB capturing.
Will that work? I'm wondering whether the two images, RGB and D, need to be aligned perfectly. Do the two cameras have to be in sync?
Thanks!