mp3guy / ElasticFusion

Real-time dense visual SLAM system

How to create a surfel #206

Closed: zhaozhongch closed this 2 years ago

zhaozhongch commented 4 years ago

From the paper I know a surfel consists of a position, normal, radius, weight, timestamp... However, I am curious how a surfel is created. Initially we have an input image with 2D pixels and its corresponding depth, and with these two we can get 3D points by projecting the 2D pixels into the 3D world. Then I think a surfel should be constructed from those 3D points, right? My guess is that the position of a surfel is based on a bunch of 3D points that are near each other, and the paper may use their average position as the surfel's position. But how to choose those points is a question: those points are supposed to be on the same surface so that we can calculate its normal. Correct me if my guess is wrong; if it is correct, then how to choose points that may be on the same plane? Also, how to initialize the radius and weight? Thanks!

a333klm commented 4 years ago

Hello,

Then I think a surfel should be constructed from those 3D points, right?

Yes, the surfels are constructed from those 3D points.

In their paper they mention that their point fusion approach is based on Keller et al.'s approach, "Real-Time 3D Reconstruction in Dynamic Scenes Using Point-Based Fusion". Look at subsection 4.1, Data Association. They take an input pixel and check the points that are already in that area.

If they are too far away from the line between the camera and the pixel (the viewing ray), they get discarded. If the normals of these points do not fit the normal of the input pixel, they also get discarded. If more than one point is left, they pick the point that has been observed most often; if several have been observed equally often, they pick the one closest to the viewing ray. A rough sketch of that selection logic is below.
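To make the selection rule concrete, here is a minimal CPU-side sketch of that Keller-style association step. This is not ElasticFusion's actual implementation (which runs on the GPU); the Surfel struct, the names, and the thresholds are all invented for illustration:

```cpp
// Hypothetical sketch of Keller-style projective data association.
// Not ElasticFusion's real kernel; all names and thresholds are made up.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Surfel {
    float px, py, pz;   // position
    float nx, ny, nz;   // unit normal
    float confidence;   // grows each time the surfel is observed and fused
};

// Pick the surfel that should be merged with a new measurement, or return -1
// to signal that a new surfel has to be created instead.
int associate(const std::vector<Surfel>& candidates, // surfels projecting near the input pixel
              const float ray[3],                    // unit viewing ray through the pixel
              const float n_in[3],                   // measured normal at the pixel
              float maxRayDist, float minNormalDot)
{
    int best = -1;
    float bestConf = -1.0f;
    float bestRayDist = std::numeric_limits<float>::max();

    for (std::size_t i = 0; i < candidates.size(); ++i) {
        const Surfel& s = candidates[i];

        // 1. Distance from the surfel to the viewing ray: discard outliers.
        float t = s.px * ray[0] + s.py * ray[1] + s.pz * ray[2]; // projection onto the ray
        float dx = s.px - t * ray[0];
        float dy = s.py - t * ray[1];
        float dz = s.pz - t * ray[2];
        float rayDist = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (rayDist > maxRayDist) continue;

        // 2. Normal compatibility: discard surfels facing a different way.
        float dot = s.nx * n_in[0] + s.ny * n_in[1] + s.nz * n_in[2];
        if (dot < minNormalDot) continue;

        // 3. Prefer the most-often-observed surfel; break ties by ray distance.
        if (s.confidence > bestConf ||
            (s.confidence == bestConf && rayDist < bestRayDist)) {
            best = static_cast<int>(i);
            bestConf = s.confidence;
            bestRayDist = rayDist;
        }
    }
    return best;
}
```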

how to choose points that may be on the same plane?

I am not sure if the word "plane" makes sense here. They merge an existing point with a new observation.

Also, how to initialize the radius and weight?

A surfel should be big enough to cover holes between surfels in the visualization; look at "Dense Planar SLAM". The calculation of the weight is described in equation 10 there. This part is not obvious to me and I am not sure, but I think the weight is not initialized like, e.g., the radius. It is more like a placeholder. Take one surfel x. This surfel has a corresponding node in the deformation graph, as does every surfel. When a loop closure happens, your surfel x (among others) gets deformed: you assign its 1, 2, 3 or 4 neighboring nodes a weight according to their distance to x, and then you calculate the new position. A sketch of how the weight accumulates during fusion is below.
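On the weight: both Keller et al. and the ElasticFusion paper fuse a new measurement into an existing surfel as a confidence-weighted running average, with the weight accumulating at every fusion. A minimal sketch of that update, with invented names (this is not the project's actual code, and it assumes the incoming sample weight w is positive):

```cpp
// Minimal sketch of the weighted running-average update used in point-based
// fusion (Keller et al.). Names are my own, not ElasticFusion's.
#include <cmath>

struct Surfel {
    float px, py, pz;   // position
    float nx, ny, nz;   // unit normal
    float radius;
    float weight;       // accumulated confidence
};

// Merge a new measurement (position p, normal n, radius r, sample weight w)
// into an existing surfel as a confidence-weighted average.
void fuse(Surfel& s, const float p[3], const float n[3], float r, float w)
{
    const float sum = s.weight + w;

    s.px = (s.weight * s.px + w * p[0]) / sum;
    s.py = (s.weight * s.py + w * p[1]) / sum;
    s.pz = (s.weight * s.pz + w * p[2]) / sum;

    s.nx = (s.weight * s.nx + w * n[0]) / sum;
    s.ny = (s.weight * s.ny + w * n[1]) / sum;
    s.nz = (s.weight * s.nz + w * n[2]) / sum;
    const float len = std::sqrt(s.nx * s.nx + s.ny * s.ny + s.nz * s.nz);
    s.nx /= len; s.ny /= len; s.nz /= len;   // renormalize the averaged normal

    s.radius = (s.weight * s.radius + w * r) / sum;
    s.weight = sum;   // confidence grows with every fused observation
}
```

Under this scheme a brand-new surfel simply starts with the weight of its first observation.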

This is what I read in the paper and what I understood. Correct me if I am wrong. We can talk about it. It helps me too.

Viky397 commented 3 years ago

Hello! Thank you for the useful comments about the paper. Question: I have .klg files from my own dataset. However, when I run ElasticFusion -l , the GUI opens up and I can see my frames at the bottom, but no points appear and nothing seems to be happening. Any help is appreciated, thanks!

a333klm commented 3 years ago

Maybe there is a play button hidden somewhere. I would compare your klg file with other klg files that work, if that's possible.

Viky397 commented 3 years ago

Thank you for the feedback. I've compared it to the living room dataset from ICL-NUIM and they are identically formatted, with the annotations.txt, rgb and depth folders. Also, when I play the ICL-NUIM klg (once I generate it), the depth is automatically visible in the GUI, whereas mine is not. Would you kindly point me to where in the ElasticFusion code the depth images are read in (if at all possible)? Perhaps I can do some digging. Thank you!

a333klm commented 3 years ago

https://github.com/mp3guy/ElasticFusion/blob/8a60ca8e9d084be46e7130cbb7d7aa5d45b787af/GUI/src/Tools/RawLogReader.cpp I think that's where the files are read. ElasticFusion Core receives only the images; ElasticFusion GUI reads the files or the camera input.

Maybe also look at processFrame in ElasticFusion.cpp.
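In case it helps the debugging: as far as I understand the Logger2-style .klg layout (an int32 frame count, then per frame an int64 timestamp, an int32 depth size, an int32 image size, followed by the compressed depth and RGB payloads), a tiny header dump like the one below can show whether a file is structurally sane. This is an assumption-laden sketch; please verify the layout against RawLogReader.cpp before trusting it:

```cpp
// Rough .klg sanity checker. Assumed layout (verify against RawLogReader.cpp):
// int32 frame count; per frame: int64 timestamp, int32 depthSize,
// int32 imageSize, then depthSize + imageSize payload bytes.
#include <cstdint>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2) { std::printf("usage: %s file.klg\n", argv[0]); return 1; }

    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::printf("cannot open %s\n", argv[1]); return 1; }

    int32_t numFrames = 0;
    std::fread(&numFrames, sizeof(numFrames), 1, f);
    std::printf("frames claimed in header: %d\n", numFrames);

    for (int32_t i = 0; i < numFrames; ++i) {
        int64_t timestamp = 0;
        int32_t depthSize = 0, imageSize = 0;
        if (std::fread(&timestamp, sizeof(timestamp), 1, f) != 1) break;
        std::fread(&depthSize, sizeof(depthSize), 1, f);
        std::fread(&imageSize, sizeof(imageSize), 1, f);
        std::printf("frame %d: t=%lld depth=%d bytes rgb=%d bytes\n",
                    i, static_cast<long long>(timestamp), depthSize, imageSize);
        std::fseek(f, static_cast<long>(depthSize) + imageSize, SEEK_CUR); // skip payloads
    }
    std::fclose(f);
    return 0;
}
```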

Viky397 commented 3 years ago

Thank you! I believe it was because I didn't set the depth cutoff larger: my images are from a warehouse setting, so the default 3 m is too small for them. I set -d to 100 and can visualize depth now. Thanks!
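For reference, a complete invocation combining the flags mentioned in this thread would look something like the line below; the log filename is just a placeholder:

```
./ElasticFusion -l warehouse.klg -d 100
```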

Viky397 commented 3 years ago

Hello. My data is from a robot moving along the ground. However, when I run it through ElasticFusion, the output trajectory veers off into the z-axis. This is impossible, as the robot stays flat against the ground the entire time. I was wondering if there is any way to force ElasticFusion to only work in the XY plane and not optimize over Z, i.e. to keep z = 0 the whole time. Thank you!

Viky397 commented 3 years ago

Hello. I have been trying to pass in ground truth poses, but when ElasticFusion runs, the camera frame stays static in the GUI, instead of following the trajectory I'm providing. This is what the first line of my pose.txt looks like: 1, -0.08755022111590, 0.08170161531600, 0.00000000000000, 0.00000000000000, 0.00000000000000, 0.01744086034340, 0.99984789662800 Thank you

a333klm commented 3 years ago

I have not used that function yet, but I think I would check two things:

1. whether there is a button in the GUI that has to be activated so the view follows the camera, and
2. whether the format of your pose.txt matches what ElasticFusion expects.

That's what I would do. Maybe that does not make sense.

Viky397 commented 3 years ago

Hello, thank you for your reply!

There is no button in the GUI that activates following the camera. Also, I made sure that the format of my pose.txt aligns with what ElasticFusion needs: time/framenumber, x, y, z, qx, qy, qz, qw.
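For anyone checking their own file, here is a small hypothetical parser for that line format; it accepts both commas and whitespace as separators, since both show up in the wild. It is only a sanity-check sketch, not ElasticFusion's actual reader; I believe the real parsing happens in the GUI sources (GroundTruthOdometry.cpp, if I remember correctly), which is worth comparing against:

```cpp
// Hypothetical pose.txt line parser for the format discussed above
// (time/framenumber, x, y, z, qx, qy, qz, qw). A sanity-check sketch only.
#include <array>
#include <cstdio>
#include <sstream>
#include <string>

struct Pose {
    double time = 0.0;
    std::array<double, 3> t{};   // translation x, y, z
    std::array<double, 4> q{};   // quaternion qx, qy, qz, qw
};

// Accepts either commas or whitespace between fields; returns false if
// fewer than 8 numbers are found on the line.
bool parsePoseLine(std::string line, Pose& out)
{
    for (char& c : line)
        if (c == ',') c = ' ';   // normalize separators

    std::istringstream ss(line);
    return static_cast<bool>(ss >> out.time
                                >> out.t[0] >> out.t[1] >> out.t[2]
                                >> out.q[0] >> out.q[1] >> out.q[2] >> out.q[3]);
}

int main()
{
    Pose p;
    const std::string line =
        "1, -0.08755022111590, 0.08170161531600, 0.0, 0.0, 0.0, "
        "0.01744086034340, 0.99984789662800";
    if (parsePoseLine(line, p))
        std::printf("t = (%f, %f, %f), qw = %f\n", p.t[0], p.t[1], p.t[2], p.q[3]);
    return 0;
}
```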

I even tried running it with the ICL-NUIM dataset and their poses, but the GUI stays static as well, so it must not be my data at fault but something in the code...