Closed by lucasjinreal 1 year ago
My guess is you require intrinsics, so you'll probably need to run an SfM pipeline like COLMAP or openMVS/MVG to estimate the camera intrinsics/extrinsics.
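As a rough illustration of what "getting intrinsics" means when you don't want to run a full SfM pipeline: for many phone photos you can approximate a pinhole intrinsic matrix K from EXIF-style metadata (focal length in mm and sensor width in mm). This is a hedged sketch, not part of SimpleRecon; the specific focal/sensor numbers below are made-up example values you would read from your own photos' EXIF tags.

```python
# Sketch: approximate a pinhole intrinsic matrix K from EXIF-style metadata,
# assuming square pixels and a principal point at the image centre.
# focal_mm and sensor_width_mm are assumptions read from the photo's EXIF.

def approximate_intrinsics(width_px, height_px, focal_mm, sensor_width_mm):
    """Build a simple 3x3 pinhole K from focal length and sensor width."""
    fx = focal_mm / sensor_width_mm * width_px  # focal length in pixels
    fy = fx                                     # square pixels assumed
    cx, cy = width_px / 2.0, height_px / 2.0    # centred principal point
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

# Example values: 1920x1440 frame, 4.2 mm lens on a 6.17 mm-wide sensor
K = approximate_intrinsics(1920, 1440, 4.2, 6.17)
```

This ignores lens distortion and is far less accurate than a COLMAP calibration, but it can be enough to sanity-check a posed-image pipeline.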
@pablovela5620 will there be an out-of-the-box demo? I want to use my phone to take some pictures and then reconstruct the 3D scene, but I don't know how to make use of SimpleRecon.
I found this in their code https://github.com/nianticlabs/simplerecon/blob/99b283099b2a8a99820962a950c89daa88412a97/datasets/arkit_dataset.py#L16
Looks like they use NeuralRecon's ARKit app to get the required metadata when using an iPhone.
@pablovela5620 thanks for the info. What metadata does NeuralRecon's app mainly provide? I cannot find this app on the App Store.
Hello, thanks for your interest in the code! And thanks @pablovela5620 for helping point in the right direction.
Here are a few points that should make it easier to try out SimpleRecon with different levels of commitment:
We'll be releasing a sample scene for you to try running the code on; this will be one of our own. Expect this soon.
We'll be showing you how to run one of NeuralRecon's recorded scenes. The code already has a class and helper functions to use those scans. The demo readme that NeuralRecon provides is also the one to follow if you want to run something you've shot yourself. Either way, there is a way to run those scenes in the repo.
We also have a dataset class ready to ingest Scanniverse data. This app is on the app store. Although SimpleRecon just needs posed images, the app requires a phone with LiDAR as it's geared towards reconstructions from a depth sensor. I have not tested if it can still record images and poses on an iPhone without LiDAR.
More soon. :D
@mohammed-amr looking forward to your out-of-the-box demo.
I also had a few questions. I see you have an implementation for using COLMAP, and I'd love to see some documentation on how to use it. Along with this, are there any plans to include ARCore-capable devices as a potential input source? Appreciate the work you and the team have done!
> We also have a dataset class ready to ingest Scanniverse data. This app is on the app store. Although SimpleRecon just needs posed images, the app requires a phone with LiDAR as it's geared towards reconstructions from a depth sensor.
I'm curious about the app. Is it record3D?
We've uploaded two scans in VDR format to try out of the box. More to follow.
Hi @mohammed-amr
After version 2.0, the Scanniverse app can be used on an iPhone without LiDAR (tested on my iPhone 12 mini), and it can save raw data (including RGB-D pairs?, see link). However, I do not know how to extract this raw data via Python or other tools. Any ideas?
Looking forward to your reply:)
At present there is no publicly available way of exporting scans from Scanniverse. You'll have to use ios-logger; NeuralRecon have a good tutorial on this, and a dataloader that accepts the processed format is at datasets/arkit_dataset.py.
There is now a quick readme at data_scripts/IOS_LOGGER_ARKIT_README.md for how to process and run inference on an ios-logger scan using the script at data_scripts/ios_logger_preprocessing.py.
I'm exploring other options, but for now if you want to capture data yourself you'll have to get ios-logger up and running.
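For anyone curious what the ios-logger metadata roughly looks like: ARKit-style capture logs store per-frame poses as a timestamp, a translation, and an orientation quaternion. The sketch below converts one such line into a 4x4 camera-to-world matrix. The column order (`ts tx ty tz qw qx qy qz`) and the comma-separated layout are assumptions for illustration; check datasets/arkit_dataset.py in the repo for the authoritative parsing.

```python
# Hedged sketch: turn an ARKit-style pose line into a 4x4 pose matrix.
# The assumed column order 'ts tx ty tz qw qx qy qz' may differ from the
# actual ios-logger output -- see datasets/arkit_dataset.py for the real format.

def quat_to_rot(qw, qx, qy, qz):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def pose_line_to_matrix(line):
    """Parse an assumed 'ts, tx, ty, tz, qw, qx, qy, qz' line into 4x4 pose."""
    vals = [float(v) for v in line.replace(",", " ").split()]
    _, tx, ty, tz, qw, qx, qy, qz = vals
    R = quat_to_rot(qw, qx, qy, qz)
    return [R[0] + [tx],
            R[1] + [ty],
            R[2] + [tz],
            [0.0, 0.0, 0.0, 1.0]]

# Identity rotation with translation (1, 2, 3)
T = pose_line_to_matrix("0.0, 1, 2, 3, 1, 0, 0, 0")
```

A matrix like this, plus the per-frame intrinsics the logger records, is essentially all the "metadata" SimpleRecon needs from a capture.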
How do I test on internet images without intrinsics?