Closed KDWJD closed 3 months ago
Hi. Thank you for taking the time to send this request. Unfortunately, this feature falls outside the scope of our Capture products and we will not be pursuing it at this time. We have taken note, and if we decide to pursue the metaverse, then we will definitely revisit this topic. Thanks!
The Metaverse will not consist only of 2D JPEG-style images minted as NFTs. It will also require 3D objects created in various file formats. The AR APIs on both iOS and Android leverage visual-inertial odometry well enough that point clouds can be generated with significant accuracy, allowing a mobile device to render three-dimensional designs. The problem with designs captured directly on a mobile device is that the background behind the subject the user is trying to capture ends up in the final file as well.
What I would like to see developed in the Capture app is (as an example) integration with iOS ARKit, where the user can capture a point cloud using their mobile device's camera and/or LiDAR if available, and then bring the real-world "subject" into an editing studio where the background points that aren't desirable in the creator's final rendered file can be removed.
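To make the "remove the background" editing step concrete, here is a minimal sketch, assuming the captured point cloud is just an (N, 3) array of XYZ coordinates and the user draws an axis-aligned box around the subject. The `crop_to_box` helper and the sample coordinates are hypothetical illustrations, not part of any existing Capture API.

```python
import numpy as np

# Hypothetical captured cloud: subject points near the origin plus
# unwanted background points (e.g. a wall and the floor behind it).
cloud = np.array([
    [0.1, 0.2, 0.5],   # subject
    [0.0, 0.1, 0.4],   # subject
    [3.0, 0.0, 5.0],   # background wall
    [-2.5, 1.0, 4.0],  # background floor
])

def crop_to_box(points, mins, maxs):
    """Keep only the points inside a user-drawn axis-aligned bounding box."""
    mins = np.asarray(mins, dtype=float)
    maxs = np.asarray(maxs, dtype=float)
    mask = np.all((points >= mins) & (points <= maxs), axis=1)
    return points[mask]

# Box drawn tightly around the subject; background points fall outside.
subject = crop_to_box(cloud, mins=[-1, -1, 0], maxs=[1, 1, 1])
```

A shipped editor would of course offer freeform selection rather than a single box, but the core operation is the same: filter the point array by a user-defined region before export.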
I've seen similar products that do the render without an editing feature, such as the ScandyPro app, but from a user-experience standpoint it is not ideal to have to leave the app and finish the render in another application. That app allows the user to export in the following file formats: .PLY, .OBJ, .STL, .USDZ, .GLB...
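Of the formats listed above, ASCII .PLY is simple enough to show end to end. The sketch below, with a hypothetical `write_ascii_ply` helper, writes a cleaned point list as a minimal vertex-only PLY file; real exporters would also handle normals, colors, and binary encoding.

```python
def write_ascii_ply(points, path):
    """Write an iterable of (x, y, z) points as a minimal ASCII .PLY file."""
    points = list(points)
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Export the edited subject points (hypothetical example data).
write_ascii_ply([(0.1, 0.2, 0.5), (0.0, 0.1, 0.4)], "subject.ply")
```

The other formats (.OBJ, .USDZ, .GLB) carry mesh and material data, so in-app export for those would typically go through a reconstruction step rather than writing raw points.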
Label: Capture App