Open sixftninja opened 4 years ago
With a depth sensor, the techniques here are not necessary (our method is specifically designed for cases when a depth sensor is not available). With a depth sensor, TSDF Fusion can be used directly to integrate depth maps into a TSDF. Our PyTorch implementation of TSDF Fusion might be useful for that (https://github.com/magicleap/Atlas/blob/master/atlas/tsdf.py : TSDFFusion()). Note that this still requires the camera pose at each frame (which it may be possible to acquire from ARKit on iPad instead of COLMAP).
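For anyone unfamiliar with what TSDF Fusion does here, the core idea can be sketched in a few lines of NumPy: project each voxel center into the depth image, compute a truncated signed distance to the observed surface, and running-average it into the volume. This is a minimal illustrative sketch, not the actual `TSDFFusion()` API from `atlas/tsdf.py` (function name and parameters are hypothetical):

```python
import numpy as np

def integrate_frame(tsdf, weights, depth, K, T_wc, origin, voxel_size, trunc):
    """Integrate one depth map into a TSDF volume (illustrative sketch).

    tsdf, weights : (nx, ny, nz) voxel grids (tsdf init to 1, weights to 0)
    depth         : (h, w) depth map in meters (0 = invalid)
    K             : (3, 3) camera intrinsics
    T_wc          : (4, 4) camera-to-world pose for this frame
    origin        : world coordinates of voxel (0, 0, 0)
    """
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing='ij')
    # World coordinates of all voxel centers, flattened to (N, 3).
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)

    # Transform voxel centers into the camera frame.
    T_cw = np.linalg.inv(T_wc)
    pts_c = (T_cw[:3, :3] @ pts_w.T).T + T_cw[:3, 3]
    z = pts_c[:, 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)  # avoid divide-by-zero behind camera

    # Project into the depth image with the pinhole model.
    u = (K[0, 0] * pts_c[:, 0] / z_safe + K[0, 2]).round().astype(int)
    v = (K[1, 1] * pts_c[:, 1] / z_safe + K[1, 2]).round().astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]

    # Signed distance along the ray; only update within the truncation band
    # (and in front of the surface).
    sdf = d - z
    upd = valid & (d > 0) & (sdf > -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)

    # Weighted running average into the volume (views into the same buffers).
    flat_t = tsdf.reshape(-1)
    flat_w = weights.reshape(-1)
    flat_t[upd] = (flat_w[upd] * flat_t[upd] + tsdf_new[upd]) / (flat_w[upd] + 1)
    flat_w[upd] += 1
    return tsdf, weights
```

Calling this once per frame with the ARKit depth map and pose accumulates the volume; a mesh can then be extracted from the zero level set (e.g. with marching cubes).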
However, depth sensors have limitations and a joint approach that incorporates depth sensors into the Atlas framework is interesting future research.
Thanks for your reply, I'll try this out.
Hi, I have an iPad Pro 2020. Can you both tell me how to extract camera poses using ARKit?
@Shisthruna28 you can use the ARFrame class to estimate camera poses. You can find detailed information in the ARKit API docs.
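One gotcha when feeding ARKit poses into a TSDF pipeline: ARKit's camera convention (+x right, +y up, +z toward the viewer, i.e. the camera looks down -z) differs from the computer-vision convention most fusion code assumes (+y down, +z into the scene). A small hedged sketch of the conversion, assuming the ARFrame's `camera.transform` has been exported as a 4x4 row-major matrix:

```python
import numpy as np

def arkit_to_cv_pose(T_wc_arkit):
    """Convert an ARKit camera-to-world pose to the CV/TSDF convention.

    ARKit camera axes:  +x right, +y up,   +z toward viewer (looks down -z).
    CV camera axes:     +x right, +y down, +z into the scene.
    Right-multiplying by a 180-degree rotation about x flips y and z.
    """
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return T_wc_arkit @ flip
```

With this applied, a point 1 m in front of the camera in CV coordinates lands 1 m along ARKit's -z (its forward direction) in world space, as expected.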
The iPad Pro 2020 provides the depth map and camera matrix for each frame. Will it be possible to use these instead of running COLMAP, etc.?