Amir-Ramezani opened this issue 7 years ago
In short: you can't, at least not with the standard release. The dense reconstruction comes from an additional paper by the authors, but the code is not publicly available.
Okay, so you upload a video on YouTube with some nice part added to it in order to attract people, and then when people ask how you did it, your answer is, "In short: you can't".
Well, in short: you did great! I am sure your package is very good, but putting something in the video that does not actually exist in your package, and that according to you is impossible to reproduce, you know, that is not good.
@AmirCognitive I understand your frustration; however, generating such a dense point cloud is not that hard. For example, use FastFusion, as it can be run directly on the camera trajectory (there is a rough sketch after this comment).
P.S. I am not a developer, but I would like to point out that this is an open-source project, and I can say from my own experience that there comes a time when you are not allowed, or are restricted, to publish some essential piece of code in public. I hope @laxnpander will agree.
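(Not from the FastFusion docs, just an illustration of the fusion idea suggested above: integrating registered RGB-D frames along a known trajectory into one dense model. The sketch below uses Open3D's TSDF volume as a stand-in for FastFusion; the file paths, intrinsics, and the `frames` list are assumptions for a TUM-style dataset.)

```python
# Minimal sketch: dense fusion of RGB-D frames along a known camera
# trajectory, using Open3D's TSDF volume as a stand-in for FastFusion.
import numpy as np
import open3d as o3d

# Default TUM RGB-D intrinsics (freiburg sequences); adjust for your camera.
intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,   # 1 cm voxels
    sdf_trunc=0.04,      # truncation distance of the signed distance field
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# Assumed input: one (rgb_path, depth_path, pose) tuple per frame, where
# pose is the 4x4 camera-to-world matrix from the estimated trajectory.
frames = []  # fill from your own run

for rgb_path, depth_path, pose in frames:
    color = o3d.io.read_image(rgb_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth,
        depth_scale=5000.0,   # TUM depth PNGs store 5000 units per metre
        depth_trunc=3.0,      # ignore depth beyond 3 m
        convert_rgb_to_intensity=False)
    # integrate() expects the world-to-camera (extrinsic) matrix.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("dense_model.ply", mesh)
```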
@rajputasif @AmirCognitive: I am not part of the ORB-SLAM project, just a random guy from the internet, but your frustration is a little out of place. Raulmur and his team published what they got; if I remember right, there is also a paper about the dense reconstruction, but the code isn't available. You can't be angry about that. It might be too experimental for others to use, or there may be a million other legitimate reasons. In addition, there are already multiple issues about dense reconstruction you could have used to inform yourself.
I have integrated PCL with ORB-SLAM2 to produce a point cloud. It won't create a truly dense point cloud, but it looks much better than the original map points, which is enough for my project. One thing to keep in mind is that the current system doesn't erase keyframes and map points from the Map class, only the pointers to them, so clouds built from erroneous keyframes will also end up in the output.
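(The comment above describes a C++/PCL integration inside ORB-SLAM2, but the core back-projection step is simple enough to show as a minimal NumPy sketch. The intrinsics and the 5000 depth scale assume a TUM RGB-D sequence; `keyframes` is a hypothetical list of depth/pose pairs.)

```python
# Sketch of the back-projection described above: turn each keyframe's
# depth image into 3D points, move them into world coordinates with the
# keyframe pose, then concatenate all per-keyframe clouds.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # TUM default intrinsics
DEPTH_SCALE = 5000.0                          # TUM depth PNG scaling

def backproject(depth, pose):
    """depth: HxW array of raw depth values; pose: 4x4 camera-to-world."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth / DEPTH_SCALE
    valid = z > 0                             # drop pixels with no depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    # Apply the keyframe rotation and translation to reach world coords.
    return pts @ pose[:3, :3].T + pose[:3, 3]

# cloud = np.vstack([backproject(d, T) for d, T in keyframes])
```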
I know it is very late, but I found a script they made called generate_registered_pointcloud.py, which is not documented and does exactly this. You can download it here: https://vision.in.tum.de/data/datasets/rgbd-dataset/tools.
:gift: Edit: I updated the dense RGB-D reconstruction to compute the normals directly from the depth file; you can find my modified version here.
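(For anyone attempting the same: normals can be estimated directly from a depth image by back-projecting the pixels and crossing the tangent vectors along rows and columns. A rough NumPy sketch follows, with the usual TUM intrinsics assumed; this is my own illustration, not the modified script itself.)

```python
# Rough sketch: per-pixel normals from a depth image via the cross
# product of the surface tangents along image rows and columns.
import numpy as np

def normals_from_depth(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    pts = np.dstack([(u - cx) * z / fx, (v - cy) * z / fy, z])
    # Finite differences approximate the tangent vectors.
    du = np.gradient(pts, axis=1)
    dv = np.gradient(pts, axis=0)
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.clip(norm, 1e-8, None)      # unit normals, HxWx3
```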
The documentation of the tool itself is not accurate, as it was copied and pasted from another file. Also, the script has to be called from the dataset's own folder (otherwise rgb.txt and depth.txt don't resolve to the right files). So, for example, I called it like this:
```
~/dev/tests/ORB_SLAM2/Examples/RGB-D/rgbd_dataset_freiburg3_teddy$ python ../generate_registered_pointcloud.py --downsample 5 --nth 10 rgb.txt depth.txt ../../../CameraTrajectory.txt denseMap.ply
```
The dense reconstruction on the teddy dataset then produces something like this (three overlaid files: the dense cloud in color, the camera positions in green, and the keypoints in red).
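(The CameraTrajectory.txt used above is in the TUM format that ORB-SLAM2 saves: one `timestamp tx ty tz qx qy qz qw` line per frame. In case it helps anyone writing their own tooling, here is a minimal sketch for loading those poses into 4x4 camera-to-world matrices; the helper names are mine.)

```python
# Sketch: parse a TUM-format trajectory file into 4x4 camera-to-world
# matrices, keyed by timestamp.
import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Rotation matrix from a unit quaternion in (x, y, z, w) order."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)]])

def load_trajectory(path):
    poses = {}                                # timestamp -> 4x4 pose
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue                      # skip comments and blanks
            t, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
            T = np.eye(4)
            T[:3, :3] = quat_to_rot(qx, qy, qz, qw)
            T[:3, 3] = [tx, ty, tz]
            poses[t] = T
    return poses
```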
I created a point cloud, but it doesn't look good. Can anyone help me? I don't know why.
Thanks for your great package.
I can run it under ROS with a camera, in both mono and RGB-D modes.
However, I am not able to reproduce the last part of the following clip, which shows "dense point-cloud reconstructions from estimated RGB-D key-frame poses":
https://www.youtube.com/watch?v=ufvPS5wJAx0&feature=youtu.be&t=2m2s
Could you give me some pointers on how to reconstruct a 3D model of the environment?