With the digital asset data the Capture app is compiling, plus the image/video itself, in theory you could crowd-source PoL (Proof of Location) through PoW (Proof of Work) as a means of geotagging real-world assets for a Metaverse application or other purposes... The way I see this working: stitch (or mesh) users' media together, using DL for each image's vector alignment in the stitch; then, using the geolocation of each capture from each user's device, derive a probabilistic localization algorithm that certifies and/or refines the GPS coordinates reported by the device and determines whether each capture is real or not. You would then effectively have PoW on PoL.
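As a rough sketch of the probabilistic localization step only, here is one way the "certify and/or refine" idea could look. It assumes each capture arrives with a device GPS fix plus an alignment-confidence score from the DL stitching stage; the function name, the confidence scores, and the 50 m plausibility radius are all hypothetical illustrations, not anything the Capture app actually provides:

```python
import numpy as np

def certify_location(coords, confidences, radius_m=50.0):
    """Certify and refine crowd-sourced GPS fixes for one stitched asset.

    coords       -- (N, 2) (lat, lon) pairs reported by N capture devices
    confidences  -- (N,) alignment scores from the stitching step, in (0, 1]
    radius_m     -- captures farther than this from the consensus are flagged
    """
    coords = np.asarray(coords, dtype=float)
    conf = np.asarray(confidences, dtype=float)

    # Robust consensus: a component-wise median resists spoofed outliers
    # far better than a plain mean would.
    consensus = np.median(coords, axis=0)

    # Equirectangular approximation: degrees -> metres near the consensus.
    lat0 = np.radians(consensus[0])
    dlat = (coords[:, 0] - consensus[0]) * 111_320.0
    dlon = (coords[:, 1] - consensus[1]) * 111_320.0 * np.cos(lat0)
    dist_m = np.hypot(dlat, dlon)

    # "Real vs. not" per capture: plausible if it sits near the consensus.
    plausible = dist_m <= radius_m

    # Refined coordinate: confidence-weighted mean over plausible captures.
    w = conf[plausible]
    refined = (w @ coords[plausible]) / w.sum() if plausible.any() else consensus

    return refined, plausible


# Hypothetical example: two honest captures near the asset, one spoofed fix.
coords = [(25.0330, 121.5654), (25.0332, 121.5650), (25.4001, 121.9002)]
scores = [0.92, 0.88, 0.31]
refined, ok = certify_location(coords, scores)
# ok -> [True, True, False]; the third capture is flagged, and the refined
# coordinate is computed only from the captures that agree with each other.
```

A real deployment would presumably replace the fixed radius with a per-device uncertainty model (GPS accuracy varies widely) and feed the alignment scores back from the stitching network, but the shape of the check stays the same.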
It's from the issue report: https://github.com/numbersprotocol/community-support/issues/28
Created by: Kenny Hung