victorprad / InfiniTAM

A Framework for the Volumetric Integration of Depth Images
http://www.infinitam.org

Are AVDepthData images accurate enough as input for InfiniTAM? #111

Open jorrit-g opened 5 years ago

jorrit-g commented 5 years ago

Dear Readers,

With the new dual-camera setup on the iPhone 8+ and iPhone X, developers have easy, real-time access to depth images.

However, these depth images are relative measures only (i.e. we can determine that one pixel is further away than another, but we cannot determine the absolute distance to either). Also, the disparity measurements change drastically from photo to photo, even when the camera and scene remain static.

My question is then whether these depth images can actually be used as input, and what level of accuracy I can expect. The objects to be scanned are humans.

Thank you

neycyanshi commented 5 years ago

I have tried using the iPhone X front TrueDepth camera as input. The resolution is 640x360, and you can choose to output absolute depth in metres as kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32 (see the available Pixel Format Identifiers). The depth quality is good and can be used as input, but the range is short: maybe half a human body.
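For feeding such data into InfiniTAM, one practical step is converting the float-metre depth map (as delivered by kCVPixelFormatType_DepthFloat32) into 16-bit millimetre values, the format InfiniTAM's image readers typically expect. A minimal sketch, assuming row-major buffers and a chosen max range; invalid pixels (NaN, non-positive, out of range) are mapped to 0, which InfiniTAM treats as missing data:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Convert float depth in metres to 16-bit depth in millimetres.
// maxRangeM clips far/unreliable returns; invalid pixels become 0.
std::vector<uint16_t> metresToMillimetres(const std::vector<float>& depthM,
                                          float maxRangeM = 5.0f) {
    std::vector<uint16_t> out(depthM.size(), 0);
    for (size_t i = 0; i < depthM.size(); ++i) {
        float d = depthM[i];
        if (std::isfinite(d) && d > 0.0f && d <= maxRangeM) {
            out[i] = static_cast<uint16_t>(d * 1000.0f + 0.5f);  // round to mm
        }
    }
    return out;
}
```

The resulting buffer can then be written out (e.g. as 16-bit PNG) or passed to an image source, alongside intrinsics matching the 640x360 depth resolution.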

kaccie14 commented 4 years ago

@neycyanshi the TrueDepth camera uses both disparity and structured lighting, so it actually does recover accurate real-world depth. But with the iPhone 8+ or iPhone X's back camera, only disparity is available, and its accuracy is highly dependent on knowledge of the optical centers of the two cameras. Apple does not claim to have calibrated out this error, which may vary quite a bit across devices.

Please refer to this page