RobertLeahy closed this issue 8 years ago.
@jordan-heemskerk: I looked into the test that takes a frame, copies it, transforms one copy into global space, runs dynfu::kinect_fusion_opencl_pose_estimation_pipeline_block on these frames, and expects to get back exactly the initial T_gk.
To make debugging easier I also updated dynfu::kinect_fusion_eigen_pose_estimation_pipeline_block.
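To make the setup concrete, here's the invariant that test encodes, sketched standalone. Eigen's closed-form umeyama alignment stands in for the pipeline's ICP here, and the point cloud and pose are made up; this is not the pipeline's actual code:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

int main () {
    // A random "frame" of 3D points (3 x N)
    Eigen::Matrix3Xf frame = Eigen::Matrix3Xf::Random(3, 500);
    // A known pose playing the role of T_gk
    Eigen::Affine3f t_gk(Eigen::AngleAxisf(0.1f, Eigen::Vector3f::UnitY()));
    t_gk.pretranslate(Eigen::Vector3f(0.05f, 0.0f, 0.02f));
    // The "global space" copy of the frame
    Eigen::Matrix3Xf global = t_gk * frame;
    // Recover the transform: the test expects to get T_gk back exactly
    Eigen::Matrix4f recovered = Eigen::umeyama(frame, global, false);
    std::cout << "error = " << (recovered - t_gk.matrix()).norm() << '\n';
}
```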
I then investigated which UV coordinates were being selected for each pixel on the first iteration. Given that we start with T_gk as our "guess" for T_z, each point in the previous frame (which is in global space) gets transformed into camera space by left-multiplying it by T_gk.
When we then transform that point into pixel space, it would stand to reason that we obtain exactly the same pixel, would it not?
However, this isn't the case: a large number of points project to a different pixel than the one they came from.
For example: the point at index 321 (x = 321, y = 0) mapped to x = 320, y = 45 in pixel space.
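To make the expected round trip concrete, here's a standalone sketch with hypothetical intrinsics, not the calibration the pipeline actually uses. With T_gk = I, global space and camera space coincide, so any pixel that doesn't project back onto itself reproduces the mismatch:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main () {
    // Hypothetical pinhole intrinsics (fx, fy, cx, cy)
    Eigen::Matrix3f k;
    k << 585.0f,   0.0f, 320.0f,
           0.0f, 585.0f, 240.0f,
           0.0f,   0.0f,   1.0f;
    Eigen::Matrix4f t_gk(Eigen::Matrix4f::Identity());
    // Back project the pixel from the example above at an arbitrary depth
    float x = 321.0f, y = 0.0f, depth = 1.2f;
    Eigen::Vector3f cam = depth * (k.inverse() * Eigen::Vector3f(x, y, 1.0f));
    // Into global space, as the measurement pipeline block would store it
    Eigen::Vector4f global = t_gk * cam.homogeneous();
    // Back into camera space (left multiplying by T_gk, per the above;
    // with T_gk = I the direction of the transform is moot), then into
    // pixel space
    Eigen::Vector3f uv = k * (t_gk * global).head<3>();
    uv /= uv(2);
    // Expect u == 321 and v == 0: anything else is the mismatch seen here
    std::cout << "u = " << uv(0) << ", v = " << uv(1) << '\n';
}
```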
Wouldn't this suggest that something is awry in the measurement pipeline block, i.e. that the positions it's generating don't map back to pixel coordinates the way we expect?
Note that I verified that the updated dynfu::kinect_fusion_eigen_pose_estimation_pipeline_block
is producing the same erroneous matrix in response to this test.
See a004445146bacfefb126a23efa3383df12213b30.
Turns out the bilateral filter parameters were still set to the original made-up values... I'm going to play with them now and see if there is any effect.
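For anyone following along, this is the shape of the thing being tuned: a standalone CPU sketch of a depth-map bilateral filter, not the OpenCL kernel itself. The default sigma values here are illustrative guesses, not the values the pipeline should necessarily use:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Depth-map bilateral filter. sigma_spatial is in pixels; sigma_range is
// in metres (assuming depths in metres). Zero is treated as "no data".
std::vector<float> bilateral_filter (const std::vector<float> & depth,
                                     std::size_t width, std::size_t height,
                                     float sigma_spatial = 4.5f,
                                     float sigma_range = 0.03f) {
    std::vector<float> out(depth.size(), 0.0f);
    int radius = int(std::ceil(2.0f * sigma_spatial));
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            float centre = depth[y * width + x];
            if (centre == 0.0f) continue;   // invalid measurement stays invalid
            float sum = 0.0f;
            float norm = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = int(x) + dx;
                    int ny = int(y) + dy;
                    if (nx < 0 || ny < 0 || nx >= int(width) || ny >= int(height)) continue;
                    float d = depth[std::size_t(ny) * width + std::size_t(nx)];
                    if (d == 0.0f) continue;    // skip invalid neighbours
                    // Spatial weight falls off with pixel distance, range
                    // weight with depth difference
                    float w = std::exp(
                        -float(dx * dx + dy * dy) / (2.0f * sigma_spatial * sigma_spatial)
                        - (d - centre) * (d - centre) / (2.0f * sigma_range * sigma_range));
                    sum += w * d;
                    norm += w;
                }
            }
            out[y * width + x] = sum / norm;
        }
    }
    return out;
}
```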
> However, this isn't the case: a large number of points project to a different pixel than the one they came from.
> For example: the point at index 321 (x = 321, y = 0) mapped to x = 320, y = 45 in pixel space.
How many are like this?
It looks like the measurement pipeline block can emit points at [0 0 0], which makes no sense.
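Assuming the measurement block back-projects pixels as p = depth * K^-1 * (u, v, 1) (I haven't confirmed the exact kernel), a zero "no data" depth reading collapses the point onto the camera origin. A sketch of that, and one way to flag such points; the NaN convention is a suggestion, not dynfu's current behaviour:

```cpp
#include <Eigen/Dense>
#include <limits>

// Back project a pixel at the given depth using hypothetical intrinsics
// k. A depth of zero (Kinect "no data") would otherwise yield exactly
// [0 0 0]; flagging it with NaNs keeps it out of pose estimation.
Eigen::Vector3f back_project (const Eigen::Matrix3f & k,
                              float u, float v, float depth) {
    if (depth <= 0.0f) {
        float nan = std::numeric_limits<float>::quiet_NaN();
        return Eigen::Vector3f(nan, nan, nan);
    }
    return depth * (k.inverse() * Eigen::Vector3f(u, v, 1.0f));
}
```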
Address these tests: Make them pass.