Closed pablovela5620 closed 1 year ago
Sure, this is on my TODO. I'll ping when I have a short tutorial.
Please ping me too~ Thanks!
Hi, wanted to follow up on this.
Given that there's no publicly available way to extract the poses from Scaniverse, is there any timeline for when this would be possible? I know that the 3D Scanner App currently has a way to export all data (depth maps/images/poses). Would it be possible to use its output?
It's still not clear how to use the data from the NeuralRecon dataset you've linked. There are preprocessing steps that seem necessary but are not included when downloading the example data they provide. This is the data format that's provided, but it seems like the dataloader requires having an images folder and a poses folder. Is there a data wrangling script y'all have that did this processing?
Looks like this has to be run beforehand to extract images/poses: https://github.com/zju3dv/NeuralRecon/blob/cd047e2356f68f60adfb923f33572d712f64b57a/tools/process_arkit_data.py. Posting it here for anyone else who may be having issues getting this running with the example data.
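For anyone curious what that kind of preprocessing boils down to: a minimal sketch of turning ARKit-style logged poses (timestamp + translation + quaternion per frame, as ios-logger records them) into the 4x4 camera-to-world matrices that reconstruction dataloaders typically expect. The function names and the exact row layout are my assumptions, not the actual script's API:

```python
# Hypothetical sketch: convert a logged pose row (translation + unit
# quaternion) into a 4x4 camera-to-world matrix, one per frame.
# The (tx, ty, tz, qw, qx, qy, qz) ordering is an assumption.
import numpy as np

def quat_to_matrix(qw, qx, qy, qz):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    n = np.sqrt(qw*qw + qx*qx + qy*qy + qz*qz)
    qw, qx, qy, qz = qw/n, qx/n, qy/n, qz/n  # normalize defensively
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def pose_to_4x4(tx, ty, tz, qw, qx, qy, qz):
    """Assemble a 4x4 camera-to-world pose from translation + quaternion."""
    T = np.eye(4)
    T[:3, :3] = quat_to_matrix(qw, qx, qy, qz)
    T[:3, 3] = [tx, ty, tz]
    return T
```

Each resulting matrix can then be saved as one text file per frame, which is the format the poses folder discussion above implies.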
Thanks for doing some research yourself :D
The 3D Scanner App could be a good candidate, but I didn't want to limit people to a workflow that isn't LiDAR-free.
There are modified functions that parse the dataset and spit it out in a format the ARKitDataset class uses. They're in arkit_dataset.py.
I'll see if I can quickly give you a script that uses those functions to process a scan.
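In the meantime, the target on-disk layout implied above (an images folder plus a matching poses folder) can be sketched generically. The helper name and the per-frame file naming scheme below are my assumptions, not the repo's actual code:

```python
# Hypothetical sketch of the scan layout the dataloader seems to expect:
# out_dir/images/<frame>.jpg and out_dir/poses/<frame>.txt, where each
# pose file holds a 4x4 camera-to-world matrix. Naming is an assumption.
from pathlib import Path
import numpy as np

def write_scan(out_dir, frames, poses):
    """frames: list of (name, jpeg_bytes); poses: list of 4x4 numpy arrays."""
    out = Path(out_dir)
    (out / "images").mkdir(parents=True, exist_ok=True)
    (out / "poses").mkdir(parents=True, exist_ok=True)
    for (name, jpeg_bytes), pose in zip(frames, poses):
        (out / "images" / f"{name}.jpg").write_bytes(jpeg_bytes)
        np.savetxt(out / "poses" / f"{name}.txt", pose)
```

Wiring the repo's own parsing functions into a writer like this is essentially what a preprocessing script has to do.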
There is now a quick README at data_scripts/IOS_LOGGER_ARKIT_README.md for how to process and run inference on an ios-logger scan using the script at data_scripts/ios_logger_preprocessing.py.
I've been messing around with all the different dataset options, but I haven't totally understood how to use the Scaniverse app to generate the required input data. I've noticed that it was recently updated to support non-LiDAR phones.
I know a raw data mode gives the camera poses + depth information (from manydepth?), but I'm not sure how I can export it, as it only allows for scan reprocessing. Any info on this would be super appreciated!

Along with this, when I try to use the sample scene from the NeuralRecon dataset, it seems like there are some extra pre-processing steps missing that I don't fully understand. More clarity there would be helpful.
I would love to be able to use this depth generation method as an input to other methods, such as the NeRF/SDF-based ones from other works like MonoSDF/Neuris/GoSurf. So being able to use a custom video input would be really helpful!
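For feeding predicted depth into downstream SDF/NeRF pipelines, the common first step is back-projecting each depth map into world space using the camera intrinsics and pose. A generic pinhole-model sketch (function and variable names are mine, not from any of the repos mentioned):

```python
# Sketch: lift an HxW depth map to world-space 3D points with a pinhole
# intrinsics matrix K and a 4x4 camera-to-world pose. Assumes depth is
# metric along the camera z-axis.
import numpy as np

def backproject(depth, K, cam_to_world):
    """Return an (H*W, 3) array of world-space points, one per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Rays in camera coordinates, scaled by per-pixel depth.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)
    # Homogeneous transform into world coordinates.
    pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T
    return pts_world[:, :3]
```

The resulting point cloud (or the depth maps plus the same intrinsics/pose files) is the kind of supervision signal depth-guided SDF methods typically consume.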
Thanks again.