Closed: nawara72 closed this issue 2 years ago
It is located here: https://github.com/intel-isl/Open3D/blob/master/examples/cpp/VoxelHashingGUI.cpp, using the Stanford lounge dataset: https://github.com/intel-isl/Open3D/blob/master/examples/python/reconstruction_system/scripts/download_stanford.sh#L15
We will make it more accessible soon.
will this be ported to python as well @theNded ?
Technically all the components are available in Python, e.g. https://github.com/intel-isl/Open3D/blob/master/examples/python/reconstruction_system/tintegrate_scene.ipynb for TSDF voxel grids, but I ran out of time to pythonize everything, especially the GUI.
Hopefully in the next release I will be able to refactor and pythonize everything, including this real-time system and the GPU-accelerated offline reconstruction system.
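For context, the TSDF integration that the notebook above demonstrates reduces, per voxel, to truncating the signed distance to the observed surface and keeping a weighted running average across frames. A minimal sketch in plain Python (the helper names are illustrative, not Open3D API; the truncation and max-weight values are typical defaults, not taken from this thread):

```python
def truncate(sdf, trunc=0.04):
    """Clamp a signed distance to the truncation band [-trunc, trunc]."""
    return max(-trunc, min(trunc, sdf))

def integrate_voxel(tsdf, weight, observed_sdf, trunc=0.04, max_weight=64):
    """Fuse one new depth observation into a voxel's running average.

    tsdf, weight: current voxel state
    observed_sdf: signed distance from the voxel to the observed surface
    """
    d = truncate(observed_sdf, trunc)
    new_weight = min(weight + 1, max_weight)
    new_tsdf = (tsdf * weight + d) / (weight + 1)
    return new_tsdf, new_weight

# Fuse three noisy observations of a surface ~0.01 m in front of the voxel:
state = (0.0, 0)
for obs in (0.012, 0.009, 0.010):
    state = integrate_voxel(*state, obs)
```

The weighted average is what makes the fused surface smoother than any single noisy depth frame, and the weight cap keeps the model responsive to change.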
Hi. I have successfully tested the offline 3D reconstruction system example, but I have trouble making it real-time. Hopefully a real-time 3D reconstruction system example will be available soon.
Will it be available in Python?
Unfortunately not for this release. We will target the next release for a full upgrade of both the online and the offline systems in Python.
Thank you for the information.
I've built the 0.13.0 release from source. Running VoxelHashingGUI on the CPU (i7-8700) gets around 7 FPS with the lounge dataset. Does that match expectations?
For the CPU, yes, since ray casting is time-consuming. A GPU should reach at least 30 Hz.
Hello, I want to know whether this real-time 3D reconstruction includes loop closure detection. How is the accumulated drift handled?
Is the input of this VoxelHashingGUI only rgb and depth? If I have cloud data and the corresponding 2d image, can I use this project?
We rely on projective data association both in odometry and TSDF integration, so unfortunately a depth image is required.
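To see why a depth image is required: projective data association looks up, for each 3D point, the depth measurement at the pixel the point projects to under the pinhole model, which only works with a dense depth map. A toy sketch in plain Python (the helper names and intrinsics are illustrative, not Open3D API):

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

def associate(point, depth_image, fx, fy, cx, cy):
    """Projective data association: return the depth measurement at the
    pixel a 3D point lands on, or None if it falls outside the image."""
    u, v = project(point, fx, fy, cx, cy)
    col, row = int(round(u)), int(round(v))
    if 0 <= row < len(depth_image) and 0 <= col < len(depth_image[0]):
        return depth_image[row][col]
    return None

# 2x2 toy depth image (metres); intrinsics chosen so the point hits pixel (1, 1)
depth = [[1.0, 1.1],
         [1.2, 1.3]]
d = associate((0.1, 0.1, 1.0), depth, fx=5.0, fy=5.0, cx=0.5, cy=0.5)
```

An unordered point cloud offers no such constant-time lookup, which is why the pipeline cannot consume point clouds directly.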
Okay, thank you. Why not add loop closure detection and loop closure optimization?
I cannot find the file at the link https://github.com/intel-isl/Open3D/blob/master/examples/cpp/VoxelHashingGUI.cpp. How can I get the usage for the real-time reconstruction? Thanks.
Check this example, I think it does the same thing: https://github.com/isl-org/Open3D/blob/master/examples/cpp/DenseSLAMGUI.cpp
Please refer to the tutorial for latest changes -- the same instructions apply to the C++ example. Closed for now. Feel free to open new issues.
I get the "Unimplemented device 'CUDA:0'" error when I run DenseSLAMGUI on my computer with an Intel graphics card. For details see attached photo. To run DenseSLAMGUI (realtime 3d reconstruction) do I need to use a computer equipped with Nvidia and CUDA graphics cards?
Try adding a device flag and switching to CPU:0 (or just CPU). You will need an Nvidia GPU to run in CUDA mode, but normal CPU mode should work without one.
@stanleyshly Thanks for your support. By setting "--device CPU:0" I was able to run the 3D reconstruction program. It seems the real-time 3D reconstruction works only with pre-recorded data, i.e. it is not "live" frame by frame.
But if it can run live, can you show me how to do it?
Yes, it's technically not live, but it does process the data in real time. You could probably modify the program to run live off a sensor, but that is out of scope of this issue.
@stanleyshly Oke. Thank you.
I used a captured dataset from the legacy reconstruction system as input to DenseSLAMGUI, but got a "gels failed in SolveCPU: singular condition detected" error:
DenseSLAMGUI realsense --device CPU:0
The "realsense" argument in the above command is the dataset captured with the legacy reconstruction system.
Can someone help me with this problem?
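As background on that error message: the odometry solves 6x6 Gauss-Newton normal equations of the form JtJ @ dx = Jtr for the camera pose update, and the solve fails when the scene (or a wrong depth scale, as discussed below) leaves the 6x6 matrix rank-deficient. A NumPy sketch of how such a system can be detected before solving (illustrative only, not Open3D's actual code; the condition-number threshold is an assumption):

```python
import numpy as np

def solve_6x6(JtJ, Jtr, cond_thresh=1e6):
    """Solve the Gauss-Newton normal equations JtJ @ dx = Jtr,
    refusing when the system is (near-)singular."""
    if np.linalg.cond(JtJ) > cond_thresh:
        raise RuntimeError("Singular 6x6 linear system detected, tracking failed.")
    return np.linalg.solve(JtJ, Jtr)

# A well-conditioned system solves fine:
JtJ = np.eye(6) * 2.0
dx = solve_6x6(JtJ, np.ones(6))

# A rank-deficient one (e.g. one pose direction unconstrained by the
# depth correspondences) triggers the failure path instead:
JtJ_bad = np.eye(6)
JtJ_bad[5, 5] = 0.0
```

Planar, textureless, or too-distant scenes are classic causes of such rank deficiency.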
@nawara72
The Python version of the live 3D reconstruction demo shown on YouTube is at: Open3D\examples\python\t_reconstruction_system\dense_slam_gui.py
The data used in the demo is at: Open3D\examples\test_data\RGBD\
@bqthanh I encountered the same "DecodeAndSolve6x6" error with my RealSense L515 and unfortunately could not fix it. I suspect the depth_scale parameter is the culprit: the reconstruction system always uses the value 1000, but the RealSense reports 4000, which produces the error "TransformationConverter.cpp:145: Singular 6x6 linear system detected, tracking failed."
If you are using the legacy pipeline, depth_scale can be added directly to the config file, since the key is supported in the initializer: https://github.com/isl-org/Open3D/blob/master/examples/python/reconstruction_system/initialize_config.py#L53
If you are using the dense SLAM CLI (Python or C++), it can be specified with --depth_scale.
If you are using the dense SLAM GUI (Python or C++), it can be adjusted interactively upon launch in the GUI.
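To make the effect of that parameter concrete: depth_scale maps raw 16-bit sensor readings to metres as metres = raw / depth_scale. With the pipeline default of 1000 (millimetre units) applied to L515 data whose native scale is 4000, every depth reads 4x too far, the projective association breaks down, and tracking fails. A trivial sketch:

```python
def raw_depth_to_metres(raw, depth_scale):
    """Convert a raw 16-bit depth reading to metres: metres = raw / depth_scale."""
    return raw / depth_scale

raw = 4000  # one raw reading from the sensor

# Interpreted with the L515's native scale of 4000, this point is 1 m away...
d_l515 = raw_depth_to_metres(raw, depth_scale=4000.0)

# ...but with the pipeline default of 1000 it appears to be 4 m away:
d_wrong = raw_depth_to_metres(raw, depth_scale=1000.0)
```

This is why the scale must match the sensor, not just be left at the default.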
Hi @theNded, I'm aware of this parameter and I adjusted it properly. But it crashes every time with a dataset from the L515 and these streams:
config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
Could you try starting with objects closer and moving slowly? The current dense SLAM is not yet optimized for tracking and can be fragile under rapid changes.
If it still doesn't work please share a short sequence of the data till failure and let me try to debug.
Hi, thank you for the prompt response. I created issue #4678 with sample data.
Yes, it's technically not live, but it does process the data in real time. You could probably modify the program to run live off a sensor, but that is out of scope of this issue.
How would I go about modifying the program to run live off a sensor, so that it reads data directly from a RealSense camera and reconstructs/maps it at runtime?
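At a high level, such a modification swaps the dataset reader for a live frame grabber and feeds each RGB-D frame through tracking and integration. A rough sketch of that control flow, with the sensor and SLAM calls stood in by hypothetical callables (in practice the frames would come from pyrealsense2, matching the streams configured earlier in this thread, or from Open3D's RealSense support):

```python
def run_live(grab_frame, track, integrate, max_frames=None):
    """Generic live loop: pull frames from a sensor until the stream ends,
    track the camera against the model, then fuse the frame into it.

    grab_frame() -> (color, depth) or None when the stream ends
    track(color, depth) -> pose, or None on tracking failure
    integrate(color, depth, pose) -> None
    """
    n = 0
    while max_frames is None or n < max_frames:
        frame = grab_frame()
        if frame is None:
            break
        color, depth = frame
        pose = track(color, depth)
        if pose is not None:  # skip frames where tracking failed
            integrate(color, depth, pose)
        n += 1
    return n

# Exercised with stub callables standing in for a real sensor and SLAM model:
frames = [("c0", "d0"), ("c1", "d1"), ("c2", "d2")]
fused = []
processed = run_live(
    grab_frame=lambda: frames.pop(0) if frames else None,
    track=lambda c, d: "pose",
    integrate=lambda c, d, p: fused.append((c, d, p)),
)
```

The real work is in dropping the actual odometry and TSDF integration calls into `track` and `integrate` and keeping the loop fast enough to not fall behind the camera.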
Hi
Is there any way to run the example shown in the Open3D 0.13.0 highlights video (attached screenshot)? It would be great if I could run the same example. Thanks.