Camera tracking should (probably) work better against the aggregate data (in the canonical frame) than against the previous live frame. The rationale is that during fast camera motion, re-finding previously built regions that were not visible in the previous frame could go a long way toward improving tracking robustness.
In all fusion algorithms to date, tracking is done against the aggregate data rather than the previous live frame, probably for this very reason.
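To make the frame-to-model vs. frame-to-frame distinction concrete, here is a minimal sketch of what tracking against the aggregated model boils down to, assuming a point-to-plane ICP formulation of the kind used in KinectFusion-style systems. All names below are illustrative placeholders, not part of this codebase.

```python
# A minimal sketch (not this project's code) of frame-to-model tracking:
# the aggregated model is raycast into a vertex/normal map from the last
# pose estimate, and the live frame is registered against that map with
# point-to-plane ICP. Every identifier here is an illustrative placeholder.
import numpy as np

def point_to_plane_icp_step(live_pts, model_pts, model_nrms):
    """One Gauss-Newton step of point-to-plane ICP.

    live_pts   : (N, 3) live-frame points, transformed by the current pose
                 estimate and already associated (e.g. projectively) with
    model_pts  : (N, 3) points raycast from the aggregated canonical model
    model_nrms : (N, 3) unit normals of those model points
    Returns the incremental twist (rx, ry, rz, tx, ty, tz).
    """
    # Signed point-to-plane residual per correspondence.
    r = np.einsum('ij,ij->i', model_pts - live_pts, model_nrms)
    # Jacobian rows [p x n, n] for the small-angle motion parameterization.
    J = np.hstack([np.cross(live_pts, model_nrms), model_nrms])
    # Damped normal equations: (J^T J + eps I) x = J^T r.
    return np.linalg.solve(J.T @ J + 1e-9 * np.eye(6), J.T @ r)

# Toy check: a planar model and a live frame shifted 5 cm along the normal.
rng = np.random.default_rng(0)
model = rng.uniform(-1.0, 1.0, (500, 3)); model[:, 2] = 0.0
normals = np.tile([0.0, 0.0, 1.0], (500, 1))
live = model + np.array([0.0, 0.0, 0.05])
print(point_to_plane_icp_step(live, model, normals))  # tz comes out near -0.05
```

The only thing that changes between the two tracking strategies is where `model_pts` and `model_nrms` come from: raycast from the aggregated model (frame-to-model) versus taken from the previous depth frame (frame-to-frame). The former keeps regions that dropped out of the last view available for re-association, which is the robustness argument above.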