Zielon / 3DScanning

Kinect Fusion for Markerless Augmented Reality on CUDA

Normal maps #99

Closed · juanraul8 closed 5 years ago

juanraul8 commented 5 years ago

Features:

Normal maps computed directly from depth maps (a sketch of this approach follows below):

normal_map

Normals computed from the point cloud:

normal_map_point_cloud

The problem above was solved; see the comments below.
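
For reference, a minimal sketch of the depth-map approach, computing normals from central differences of the raw depth values. The function name, the Eigen types, and the MINF convention are assumptions for illustration, not the project's actual API, and the pixel-to-metric scaling by the focal length is omitted:

```cpp
#include <cmath>
#include <limits>
#include <vector>
#include <Eigen/Dense>

using Eigen::Vector3f;

const float MINF = -std::numeric_limits<float>::infinity();

// Sketch: per-pixel normals from central differences of the raw depth map.
// Border pixels and pixels with non-finite depth gradients are marked MINF.
void computeNormalsFromDepth(const std::vector<float>& depthMap,
                             int width, int height,
                             std::vector<Vector3f>& normals) {
    normals.assign(depthMap.size(), Vector3f(MINF, MINF, MINF));
    for (int v = 1; v < height - 1; ++v) {
        for (int u = 1; u < width - 1; ++u) {
            const int idx = v * width + u;
            const float du = 0.5f * (depthMap[idx + 1] - depthMap[idx - 1]);
            const float dv = 0.5f * (depthMap[idx + width] - depthMap[idx - width]);
            if (!std::isfinite(du) || !std::isfinite(dv)) continue;
            // Depth-gradient normal; z = -1 points towards the camera.
            normals[idx] = Vector3f(du, dv, -1.0f).normalized();
        }
    }
}
```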

Future tasks:

Zielon commented 5 years ago

Why didn't you change the normal computation in PointCloud? Can we fix it?

juanraul8 commented 5 years ago

> Why didn't you change the normal computation in PointCloud? Can we fix it?

I do not think the problem is the normal computation itself; both versions compute it in a similar way.

@BarisYazici's normal visualization using PCL looks good.

The first image shows normals computed directly from the depth maps (I followed a tutorial). Our normals were computed after back-projection; maybe that is the problem.
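
For context, back-projection lifts each depth pixel into camera space using the depth camera intrinsics. A minimal sketch, where the function name and intrinsics layout are illustrative rather than the project's actual code:

```cpp
#include <Eigen/Dense>

// Sketch: back-project one depth pixel (u, v) with depth D(u, v) into
// camera space: V(u, v) = D(u, v) * K^-1 * (u, v, 1)^T.
Eigen::Vector3f backproject(float depth, int u, int v, const Eigen::Matrix3f& K) {
    const float fx = K(0, 0), fy = K(1, 1);  // focal lengths
    const float cx = K(0, 2), cy = K(1, 2);  // principal point
    return Eigen::Vector3f((u - cx) * depth / fx,
                           (v - cy) * depth / fy,
                           depth);
}
```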

juanraul8 commented 5 years ago

OK, now I see what the problem was.

The problem was basically that m_normals does not preserve the 2D grid structure of the depth image, so neighboring pixels are no longer adjacent in the array. I also realized that our normal computation does not seem to match the formula from the paper.

Exercise 3 formula:

normal_map_exercise3

Kinect Fusion formula:

normal_map_paper
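
For reference, my transcription of the paper's formula (Newcombe et al., KinectFusion, ISMAR 2011): the normal is the normalized cross product of neighboring vertex-map differences,

```latex
% Normal map from the vertex map V_k (KinectFusion, Newcombe et al. 2011):
% cross product of neighboring vertex differences, then normalization.
N_k(u, v) = \nu\left[ \left( V_k(u+1, v) - V_k(u, v) \right) \times
                      \left( V_k(u, v+1) - V_k(u, v) \right) \right],
\quad \text{where } \nu[\mathbf{x}] = \mathbf{x} / \lVert \mathbf{x} \rVert_2 .
```

This formula operates on the back-projected vertex map, so it needs exactly the grid structure that m_normals was missing.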

juanraul8 commented 5 years ago

Another comment: in Exercise 3, the following pruning is applied:

```cpp
// Central differences of the depth map; prune pixels whose depth gradient
// is non-finite or exceeds maxDistanceHalved (a depth discontinuity).
const float du = 0.5f * (depthMap[idx + 1] - depthMap[idx - 1]);
const float dv = 0.5f * (depthMap[idx + width] - depthMap[idx - width]);
if (!std::isfinite(du) || !std::isfinite(dv) ||
    std::abs(du) > maxDistanceHalved || std::abs(dv) > maxDistanceHalved) {
    normalsTmp[idx] = Vector3f(MINF, MINF, MINF);
    continue;
}
```

Maybe we are missing something like that.

The paper mentions a validity mask, which can be used to check whether a given vertex-map or normal-map pixel is valid.
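
A minimal sketch of such a mask, assuming vertex and normal maps stored as Eigen vectors with invalid entries marked MINF; the names here are illustrative, not the project's actual members:

```cpp
#include <vector>
#include <Eigen/Dense>

// Sketch: mask[idx] is true only where both the vertex map and the normal
// map hold finite values (invalid pixels are assumed to be set to MINF).
std::vector<bool> buildValidityMask(const std::vector<Eigen::Vector3f>& vertexMap,
                                    const std::vector<Eigen::Vector3f>& normalMap) {
    std::vector<bool> mask(vertexMap.size());
    for (size_t i = 0; i < vertexMap.size(); ++i) {
        mask[i] = vertexMap[i].allFinite() && normalMap[i].allFinite();
    }
    return mask;
}
```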