nianticlabs / simplerecon

[ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions

how to test on internet images without intrinsic? #1

Closed lucasjinreal closed 1 year ago

lucasjinreal commented 1 year ago

how to test on internet images without intrinsic?

pablovela5620 commented 1 year ago

My guess is you require intrinsics, so you'll probably need to run an SfM pipeline like COLMAP or openMVS/openMVG to estimate the camera intrinsics/extrinsics.
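A rough sketch of that route: run COLMAP's standard pipeline (`feature_extractor`, `exhaustive_matcher`, `mapper`, then `model_converter --output_type TXT`) and read the intrinsics back out of the exported `cameras.txt`. The parser below is a minimal sketch assuming COLMAP's documented text-export layout (`CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`); it is not part of the SimpleRecon repo.

```python
import numpy as np

def parse_colmap_cameras(path):
    """Parse a COLMAP cameras.txt (text model export) into 3x3 intrinsics.

    Handles the PINHOLE (fx fy cx cy) and SIMPLE_PINHOLE (f cx cy) models.
    Returns a dict mapping camera id -> 3x3 K matrix.
    """
    intrinsics = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip comments and blank lines
            parts = line.split()
            cam_id, model = int(parts[0]), parts[1]
            params = [float(p) for p in parts[4:]]
            if model == "PINHOLE":
                fx, fy, cx, cy = params
            elif model == "SIMPLE_PINHOLE":
                fx = fy = params[0]
                cx, cy = params[1], params[2]
            else:
                raise ValueError(f"Unhandled camera model: {model}")
            intrinsics[cam_id] = np.array([[fx, 0.0, cx],
                                           [0.0, fy, cy],
                                           [0.0, 0.0, 1.0]])
    return intrinsics
```

COLMAP's `mapper` also writes per-image extrinsics to `images.txt` in the same export, which you'd need alongside the intrinsics.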

lucasjinreal commented 1 year ago

@pablovela5620 will there be an out-of-the-box demo? I want to use my phone to take some pictures and then reconstruct the 3D scene, but I don't know how to make use of SimpleRecon.

pablovela5620 commented 1 year ago

I found this in their code https://github.com/nianticlabs/simplerecon/blob/99b283099b2a8a99820962a950c89daa88412a97/datasets/arkit_dataset.py#L16

It looks like they use NeuralRecon's ARKit app to get the required metadata when using an iPhone.

lucasjinreal commented 1 year ago

@pablovela5620 thanks for the info. What metadata does NeuralRecon's app mainly provide? I cannot find this app on the App Store.

pablovela5620 commented 1 year ago

https://github.com/zju3dv/NeuralRecon/blob/master/DEMO.md

mohammed-amr commented 1 year ago

Hello, thanks for your interest in the code! And thanks @pablovela5620 for helping point in the right direction.

Here are a few points that should make it easier to try out SimpleRecon with different levels of commitment:

  1. We'll be releasing a sample scene for you to try running the code on; this will be one of our own. Expect this soon.

  2. We'll be showing you how to run one of NeuralRecon's recorded scenes. The code already has a class and helper functions to use those scans. Indeed, the demo readme that NeuralRecon provides is the one you should follow if you want to record something yourself. Either way, there is a way to run those scenes in the repo.

  3. We also have a dataset class ready to ingest Scanniverse data. This app is on the App Store. Although SimpleRecon just needs posed images, the app requires a phone with LiDAR, as it's geared towards reconstruction from a depth sensor. I have not tested whether it can still record images and poses on an iPhone without LiDAR.

More soon. :D
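To make "posed images" concrete: whichever capture route you pick, each frame SimpleRecon consumes boils down to an RGB image plus 3x3 pinhole intrinsics and a 4x4 camera-to-world pose. The container below is purely illustrative (the class and field names are hypothetical, not the repo's actual API); it just shows the shape of the required metadata.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PosedFrame:
    """Hypothetical per-frame record sketching the metadata a posed-image
    reconstruction method needs. Not SimpleRecon's actual dataset API."""
    image_path: str
    K: np.ndarray            # 3x3 pinhole intrinsics
    world_T_cam: np.ndarray  # 4x4 camera-to-world rigid transform

    def validate(self):
        assert self.K.shape == (3, 3)
        assert self.world_T_cam.shape == (4, 4)
        # Bottom row of a rigid transform should be [0, 0, 0, 1].
        assert np.allclose(self.world_T_cam[3], [0.0, 0.0, 0.0, 1.0])
        return True
```

An SfM pipeline, an ARKit logger, or a Scanniverse export would each just be a different way of filling in `K` and `world_T_cam` per frame.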

lucasjinreal commented 1 year ago

@mohammed-amr looking forward to your out-of-the-box demo

pablovela5620 commented 1 year ago

I also had a few questions: I see you have an implementation for using COLMAP, and I would love to see some documentation on how to use it. Also, are there any plans to include ARCore-capable devices as a potential input source? Appreciate the work you and the team have done!

stevewongv commented 1 year ago

> We also have a dataset class ready to ingest Scanniverse data. This app is on the App Store. Although SimpleRecon just needs posed images, the app requires a phone with LiDAR, as it's geared towards reconstruction from a depth sensor.

I'm curious about the app. Is it Record3D?

mohammed-amr commented 1 year ago

We've uploaded two scans in VDR format to try out of the box. More to follow.

Lilyo commented 1 year ago

Hi @mohammed-amr

After version 2.0, the Scanniverse app can be used on an iPhone without LiDAR (tested on my iPhone 12 mini), and it can save raw data (including RGB-D pairs?, see link). However, I don't know how to extract this raw data via Python or other tools. Any ideas?

Looking forward to your reply:)

mohammed-amr commented 1 year ago

At present there is no publicly available way of exporting scans from Scanniverse. You'll have to use ios-logger; NeuralRecon has a good tutorial on this, and a dataloader that accepts the processed format is at datasets/arkit_dataset.py.

There is now a quick readme at data_scripts/IOS_LOGGER_ARKIT_README.md for how to process and run inference on an ios-logger scan using the script at data_scripts/ios_logger_preprocessing.py.

I'm exploring other options, but for now, if you want to capture data yourself, you'll have to get ios-logger up and running.
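For anyone rolling their own preprocessing of an ios-logger capture: the poses come as a translation plus a unit quaternion per line, which you convert to the 4x4 matrices a dataloader expects. The sketch below assumes a per-line layout of `timestamp, tx, ty, tz, qw, qx, qy, qz` (comma separated); check your own capture's ARposes.txt before relying on the field order.

```python
import numpy as np

def pose_from_arposes_line(line):
    """Convert one ARposes.txt-style line to a 4x4 camera-to-world matrix.

    Assumed layout: timestamp, tx, ty, tz, qw, qx, qy, qz (comma separated).
    Verify the field order against your own capture.
    """
    vals = [float(v) for v in line.replace(",", " ").split()]
    _, tx, ty, tz, qw, qx, qy, qz = vals
    # Standard unit-quaternion -> rotation-matrix conversion.
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [tx, ty, tz]
    return T
```

The repo's own data_scripts/ios_logger_preprocessing.py is the authoritative version of this conversion; the above is just the core math.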