Closed sctrueew closed 2 years ago
The reconstruction system assumes a relatively high-quality depth map from sensors. While MiDaS produces nice visualizations and reasonable depth from monocular images, in my experience it is very likely to fail in the reconstruction system.
If you want to give it a try anyway, save the depths from the RGB images in a separate depth folder and follow the instructions in the tutorial. Be aware of several caveats:
- depth_scale: the factor that converts depth from the uint16 pixel value to the metric value.

@theNded Hi,
Thanks for the reply. I've put my depth frames in a folder. Which example should I use? Does https://github.com/isl-org/Open3D/blob/master/examples/python/gui/online-processing.py work for my case?
You should be looking at this tutorial.
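To make the depth_scale caveat concrete: the pipeline divides the stored uint16 pixel values by depth_scale to recover meters. A minimal sketch of that conversion (the function name and the 1000.0 default are illustrative — 1000.0 corresponds to the common millimeter-per-unit convention, not necessarily what your files use):

```python
import numpy as np

def depth_to_metric(depth_png, depth_scale=1000.0):
    """Convert a uint16 depth image to metric depth in meters.

    depth_scale is the divisor applied to the raw pixel values;
    1000.0 means the PNG stores millimeters (a common convention).
    """
    return depth_png.astype(np.float32) / depth_scale

# A pixel value of 1500 with depth_scale=1000 corresponds to 1.5 m.
d = depth_to_metric(np.array([[1500]], dtype=np.uint16))
```

If your depth images are not true metric depth (as with MiDaS output), no choice of depth_scale will make them so — the conversion only rescales the stored values.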
Hi everyone,
I don't have an RGBD camera like the Intel RealSense, so I'm using a GoPro. After recording, I extract depth with MiDaS (a deep-learning model) and save the RGB and depth frames. Now I'd like to generate a 3D point-cloud reconstruction like the picture below.
My results: [frame, depth-normalization, and depth images]
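The saving step I use looks roughly like this (a sketch — midas_to_uint16 is my own helper, and note that MiDaS outputs relative inverse depth rather than metric depth, so the stored values have no physical scale):

```python
import numpy as np

def midas_to_uint16(prediction, max_val=65535):
    """Rescale a MiDaS relative-depth prediction (float array) into the
    full uint16 range so it can be saved as a 16-bit PNG.

    MiDaS predicts *relative inverse* depth, not metric depth, so the
    resulting pixel values carry no absolute scale.
    """
    lo, hi = prediction.min(), prediction.max()
    norm = (prediction - lo) / max(hi - lo, 1e-8)
    return (norm * max_val).astype(np.uint16)

depth16 = midas_to_uint16(np.random.rand(4, 4).astype(np.float32))
# depth16 can then be written as a 16-bit PNG, e.g. with
# cv2.imwrite("depth/000000.png", depth16)
```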
How can I do that?
Thanks in advance