This is a relatively complex enhancement, since depth data is used throughout the app, and we will have to make sure nothing breaks in the process.
We could set up a dedicated class responsible for fetching depth data, which would check how depth can be obtained: LiDAR, monocular depth estimation (not currently supported), or simply zero depth as a fallback. This would ensure that we always have at least some source of depth data, even on devices without LiDAR. See the sketch after the linked snippet below.
https://github.com/TaskarCenterAtUW/iOSPointMapper/blob/b93a57e211ea4e2f49269f466ca6e6a96948a867/IOSAccessAssessment/Camera/CameraController.swift#L71-L77
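A minimal sketch of what such a provider could look like, assuming an AVFoundation-based capture pipeline and a `DepthFloat32` depth map. The names `DepthSource` and `DepthDataProvider` are hypothetical, and the actual integration would need to hook into the existing `CameraController` linked above rather than stand alone:

```swift
import AVFoundation
import CoreVideo
import CoreGraphics

/// Hypothetical enumeration of the ways depth can be obtained, in order of preference.
enum DepthSource {
    case lidar                 // true depth from the LiDAR sensor
    case monocularEstimation   // ML-based estimation (not currently supported)
    case zeroDepth             // fallback: report zero depth everywhere
}

/// Sketch of a dedicated provider that decides once how depth is obtained
/// and hides that decision from the rest of the app.
final class DepthDataProvider {
    let source: DepthSource

    init() {
        // Prefer LiDAR when the device has one; otherwise fall back to zero depth,
        // since monocular estimation is not implemented yet.
        if AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) != nil {
            source = .lidar
        } else {
            source = .zeroDepth
        }
    }

    /// Returns the depth in meters at a pixel of a depth map, or 0 when no depth is available.
    /// `depthMap` is the CVPixelBuffer delivered by the capture pipeline
    /// (e.g. AVDepthData.depthDataMap, assumed to be in DepthFloat32 format).
    func depth(at point: CGPoint, in depthMap: CVPixelBuffer?) -> Float {
        switch source {
        case .lidar:
            guard let depthMap = depthMap else { return 0 }
            return Self.readValue(from: depthMap, at: point)
        case .monocularEstimation:
            // Placeholder: a Core ML depth-estimation model could be queried here.
            return 0
        case .zeroDepth:
            return 0
        }
    }

    /// Reads a single Float32 value from a depth pixel buffer, clamping the point to its bounds.
    private static func readValue(from buffer: CVPixelBuffer, at point: CGPoint) -> Float {
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(buffer) else { return 0 }
        let width = CVPixelBufferGetWidth(buffer)
        let height = CVPixelBufferGetHeight(buffer)
        let x = min(max(Int(point.x), 0), width - 1)
        let y = min(max(Int(point.y), 0), height - 1)
        let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        return row[x]
    }
}
```

The point of the design is that callers only ever ask the provider for a depth value; whether it came from LiDAR, a future monocular model, or the zero-depth fallback stays encapsulated in one place.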