nianticlabs / simplerecon

[ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions

How to run reconstruction on your own data? #19

Closed ShivaniKamtikar closed 1 year ago

ShivaniKamtikar commented 1 year ago

Hello,

I have videos of a scene and I want to run reconstruction on my own data. How do I go about doing that? I saw that some kind of scans are needed. Is there a way to directly use videos or frames (images) to generate a reconstruction?

I am a little clueless about how to use this on my data for my research and any help is appreciated.

Thank you!

Shivani

jgibson2 commented 1 year ago

Hi @ShivaniKamtikar, do you have posed data? If not, you can try running COLMAP or other software to obtain poses. Given the poses and images, you should be able to run SimpleRecon on your data, perhaps with a bit of massaging.
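
For reference, a minimal sketch of the kind of massaging that might be involved, assuming a standard COLMAP `images.txt` export. SimpleRecon's own data loaders may expect a different layout, so treat this only as a starting point:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def load_colmap_poses(images_txt_path):
    """Parse COLMAP's images.txt into 4x4 camera-to-world matrices keyed by image name."""
    poses = {}
    with open(images_txt_path) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    # images.txt stores two lines per image; the first carries the pose.
    # (Assumes every image has a non-empty 2D-points line, as COLMAP normally writes.)
    for pose_line in lines[::2]:
        elems = pose_line.split()
        qw, qx, qy, qz = map(float, elems[1:5])
        tx, ty, tz = map(float, elems[5:8])
        name = elems[9]
        # COLMAP stores world-to-camera; invert to get camera-to-world.
        R_wc = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()
        cam_to_world = np.eye(4)
        cam_to_world[:3, :3] = R_wc.T
        cam_to_world[:3, 3] = -R_wc.T @ np.array([tx, ty, tz])
        poses[name] = cam_to_world
    return poses
```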

pablovela5620 commented 1 year ago

https://github.com/nianticlabs/simplerecon/blob/main/data_scripts/IOS_LOGGER_ARKIT_README.md this will probably be your best bet. I couldn't quite get COLMAP working, but this worked great.

You can also take a look at this issue https://github.com/nianticlabs/simplerecon/issues/4

ShivaniKamtikar commented 1 year ago

I do not have posed data. I will try the ARKit and COLMAP methods. For the ARKit method, is it necessary to use images from an iOS device? The readme points to that.

I have images from a camera mounted at the tip of my robotic arm, which gives RGB images.

mohammed-amr commented 1 year ago

Hello Shivani,

The easiest route is indeed the ios-logger app.

For COLMAP, I'd be wary of a few points:

  1. The model we provide has been trained with images from ScanNetv2, which roughly matches the FOV of the "normal, medium, 1.0x" focal length camera on phones. You might need to crop your images and modify the intrinsics to match that (see the crop sketch after this list).
  2. When you do get your COLMAP poses, you'll need to scale them using a known measurement; the distance between two cameras at capture time is probably easiest (see the scaling sketch further below). This is necessary because the cost volume uses depth planes at metric locations.
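
For point 1, here is a rough sketch of a centre crop plus intrinsics update. This is not the repo's own preprocessing, and the target FOV value is an assumption you would need to check against the ScanNetv2 intrinsics you want to match:

```python
import numpy as np

def center_crop_to_fov(image, fx, fy, cx, cy, target_hfov_deg):
    """Center-crop an HxWx3 image so its horizontal FOV matches target_hfov_deg,
    and return the cropped image plus updated intrinsics.

    target_hfov_deg is an assumption; compute it from the intrinsics of the
    training data you intend to match rather than guessing."""
    h, w = image.shape[:2]
    # Width (in pixels) that gives the desired horizontal FOV for this fx.
    new_w = int(round(2.0 * fx * np.tan(np.radians(target_hfov_deg) / 2.0)))
    new_w = min(new_w, w)
    # Keep the crop's aspect ratio equal to the original image's.
    new_h = min(int(round(new_w * h / w)), h)
    x0 = (w - new_w) // 2
    y0 = (h - new_h) // 2
    cropped = image[y0:y0 + new_h, x0:x0 + new_w]
    # fx, fy are unchanged by cropping; only the principal point shifts.
    new_cx = cx - x0
    new_cy = cy - y0
    return cropped, fx, fy, new_cx, new_cy
```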

The fact that you have a robotic arm should make life easier from a reference measurement point of view, since you can read off the real distance between two capture positions.
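
For point 2, a minimal sketch of rescaling camera-to-world poses from a single known camera-to-camera distance (the function and argument names here are illustrative, not part of the repo):

```python
import numpy as np

def scale_poses_to_metric(cam_to_world_poses, name_a, name_b, known_metric_distance):
    """Rescale 4x4 camera-to-world poses (in arbitrary COLMAP units) to metres,
    given the real-world distance between two of the capture positions."""
    centre_a = cam_to_world_poses[name_a][:3, 3]
    centre_b = cam_to_world_poses[name_b][:3, 3]
    colmap_distance = np.linalg.norm(centre_a - centre_b)
    scale = known_metric_distance / colmap_distance
    scaled = {}
    for name, pose in cam_to_world_poses.items():
        p = pose.copy()
        p[:3, 3] *= scale  # only translations scale; rotations are unchanged
        scaled[name] = p
    return scaled
```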

ShivaniKamtikar commented 1 year ago

Thank you for the inputs! I'll give the ios-logger a try.