microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect

about green_screen example #1420

Open ponyzhou404 opened 3 years ago

ponyzhou404 commented 3 years ago

In /examples/green_screen/main.cpp, line 226:

Transformation tr_secondary_depth_to_main_color = tr_secondary_depth_to_secondary_color.compose_with(tr_secondary_color_to_main_color);

I think it should be:

Transformation tr_secondary_depth_to_main_color = tr_secondary_color_to_main_color.compose_with(tr_secondary_depth_to_secondary_color);

Because the previous rotation also affects the next translation, the order of matrix multiplication can't be changed here. If it is changed, it causes problems like the one demonstrated in https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/803.
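(For reference, here is the standard algebra, written with generic H_1, H_2 rather than the sample's variables; it is only meant to make the non-commutativity explicit.)

```latex
% Composing two rigid transforms H_i = [R_i, t_i; 0, 1] acting on column vectors.
% Generic symbols, not the sample's variable names.
\[
H_1 H_2 =
\begin{bmatrix} R_1 & t_1 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} R_2 & t_2 \\ 0 & 1 \end{bmatrix}
=
\begin{bmatrix} R_1 R_2 & R_1 t_2 + t_1 \\ 0 & 1 \end{bmatrix},
\qquad
H_2 H_1 =
\begin{bmatrix} R_2 R_1 & R_2 t_1 + t_2 \\ 0 & 1 \end{bmatrix}.
\]
% One factor's rotation ends up mixed into the other factor's translation,
% so in general H_1 H_2 != H_2 H_1.
```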

In this green_screen example, the point is transformed into the secondary color camera's coordinate system first, so tr_secondary_depth_to_secondary_color should be the first factor applied.
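Here is a minimal, self-contained sketch of what I mean. It uses plain 4x4 matrices with made-up extrinsics, not the sample's Transformation class, and it assumes a.compose_with(b) corresponds to the matrix product A * B in that written order, which is the reading my suggestion relies on.

```cpp
// Minimal sketch: 4x4 homogeneous transforms, column-vector convention.
// The variable names mirror this discussion, but the numbers are made up;
// this is NOT the green_screen sample's Transformation class.
#include <cstdio>

struct Mat4 { double m[4][4]; };
struct Vec4 { double v[4]; };

Mat4 mul(const Mat4 &a, const Mat4 &b)
{
    Mat4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Vec4 apply(const Mat4 &m, const Vec4 &p)
{
    Vec4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int k = 0; k < 4; ++k)
            r.v[i] += m.m[i][k] * p.v[k];
    return r;
}

int main()
{
    // Hypothetical "secondary depth -> secondary color": identity rotation, 3 cm baseline.
    Mat4 depth_to_color_sec = { { { 1, 0, 0, 0.03 },
                                  { 0, 1, 0, 0.00 },
                                  { 0, 0, 1, 0.00 },
                                  { 0, 0, 0, 1    } } };

    // Hypothetical "secondary color -> main color": 90-degree rotation about Y plus a translation.
    Mat4 color_sec_to_color_main = { { {  0, 0, 1, 0.50 },
                                       {  0, 1, 0, 0.00 },
                                       { -1, 0, 0, 0.20 },
                                       {  0, 0, 0, 1    } } };

    Vec4 point_in_secondary_depth = { { 1.0, 2.0, 3.0, 1.0 } };

    // Chain I am suggesting: apply depth->color_secondary first, then color_secondary->main,
    // i.e. the factor applied first stands immediately to the left of the point.
    Vec4 suggested = apply(mul(color_sec_to_color_main, depth_to_color_sec), point_in_secondary_depth);

    // Swapped order: the depth->color baseline is no longer rotated by the
    // device-to-device extrinsics, so the result is subtly different.
    Vec4 swapped = apply(mul(depth_to_color_sec, color_sec_to_color_main), point_in_secondary_depth);

    std::printf("suggested: %.3f %.3f %.3f\n", suggested.v[0], suggested.v[1], suggested.v[2]);
    std::printf("swapped:   %.3f %.3f %.3f\n", swapped.v[0], swapped.v[1], swapped.v[2]);
    return 0;
}
```

With these made-up numbers the two chains give (3.500, 2.000, -0.830) versus (3.530, 2.000, -0.800); the difference is exactly whether the secondary depth-to-color baseline gets rotated by the device-to-device extrinsics or not.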

wes-b commented 3 years ago

My transformation (matrix) skills are rusty, but I think you are effectively saying that, in our compose_with structure, H1 H2 != H2 H1. Is that correct? That would effectively be the change you are suggesting, correct?

In this green_screen example, the point is transformed into the secondary color camera's coordinate system first, so tr_secondary_depth_to_secondary_color should be the first factor applied.

Can you share more detail on this? A link would be fine. Are you able to test this and confirm it improves the results? Due to COVID, testing from home is tricky.

ponyzhou404 commented 3 years ago

My transformation (matrix) skills are rusty, but I think you are effectively saying that, in our compose_with structure, H1 H2 != H2 H1. Is that correct? That would effectively be the change you are suggesting, correct?

Yes, that is all I mean.

Can you share more detail on this? A link would be fine. Are you able to test this and confirm it improves the results? Due to COVID, testing from home is tricky.

In the comment on line 226, it says:

// We now have the secondary depth to secondary color transform. We also have the transformation from the
// secondary color perspective to the main color perspective from the calibration earlier. Now let's compose the
// depth secondary -> color secondary, color secondary -> color main into depth secondary -> color main

So the point is transformed into the secondary color camera's coordinate system first. Then let X be the point (x, y, z): [image DSC_0032: the matrix multiplication written out]. As you can see, the order can't be exchanged, so we should multiply by tr_secondary_depth_to_secondary_color first.
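(My own notation for the chain named in that comment, not a reproduction of the attached image; it just restates why the factor order is fixed.)

```latex
% X is a point in secondary-depth coordinates, written homogeneously.
% The transform applied to X first must stand immediately to its left.
\[
X_{\text{main color}}
  = T_{\text{color sec} \to \text{color main}}
    \bigl( T_{\text{depth sec} \to \text{color sec}} \, X \bigr)
  = \bigl( T_{\text{color sec} \to \text{color main}} \,
           T_{\text{depth sec} \to \text{color sec}} \bigr) X .
\]
% Swapping the two factors would compose "depth sec -> color sec" after
% "color sec -> color main", which is not a meaningful chain.
```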

I also rewrote green_screen to show a point cloud. [screenshot-1605854884] With the order in your sample, it looks like this: there is a small angle error between the two point clouds. [screenshot-1605854607] When I change the order of multiplication, the angle error disappears.

Sorry, English is not my native language and this paragraph may be difficult to understand.