Here is a comparison between the measurements taken by projecting the steps on the human planes, and a simple Euclidean distance between the two ankles:
We can see that the Euclidean distance mixes the lateral separation of the ankles with the actual step, so if the ankles are sufficiently far apart their distance is significant even when no step is taken. However, this distance is invariant with respect to the person's orientation, and the steps are measured fairly accurately (see the peaks from 35 s onward).
With the projection on the planes, instead, we get accurate results only when the person faces the robot, since OpenPose and the skeletonRetriever struggle with occluded keypoints.
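For reference, here is a minimal sketch of the two quantities being compared, assuming z-up coordinates and a walking direction estimated from the shoulder line (the keypoint layout and conventions are illustrative, not the ones used by the skeletonRetriever):

```python
# Minimal sketch (not the assistive-rehab implementation): plain Euclidean
# ankle distance vs. the step component along the walking direction.
import numpy as np

def euclidean_step(ankle_l, ankle_r):
    """Plain 3D distance between the two ankles (orientation-invariant)."""
    return np.linalg.norm(np.asarray(ankle_l, dtype=float) - np.asarray(ankle_r, dtype=float))

def projected_step(ankle_l, ankle_r, shoulder_l, shoulder_r):
    """Ankle-to-ankle vector projected on the walking (anteroposterior) direction.

    The mediolateral axis is approximated by the shoulder line; the walking
    direction is taken orthogonal to it in the ground plane (z up), so noisy
    shoulder placement directly corrupts the estimate."""
    medio = np.asarray(shoulder_r, dtype=float) - np.asarray(shoulder_l, dtype=float)
    medio[2] = 0.0                                   # flatten onto the ground plane
    medio /= np.linalg.norm(medio)
    walk = np.array([-medio[1], medio[0], 0.0])      # 90-degree rotation in the plane
    step = np.asarray(ankle_l, dtype=float) - np.asarray(ankle_r, dtype=float)
    return abs(np.dot(step, walk))

# toy frame: ankles 0.15 m apart laterally and 0.40 m apart along the walk
ankle_l, ankle_r = [0.075, 0.20, 0.0], [-0.075, -0.20, 0.0]
shoulder_l, shoulder_r = [0.18, 0.0, 1.45], [-0.18, 0.0, 1.45]
print(euclidean_step(ankle_l, ankle_r))                           # ~0.43 m, inflated by the lateral offset
print(projected_step(ankle_l, ankle_r, shoulder_l, shoulder_r))   # ~0.40 m, the step component only
```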
I think we could use the first solution to run the demos, and later verify with the other parties involved whether the metrics can be considered useful. In the future, a Kalman-based estimator could be used to improve the keypoint tracking.
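As a rough sketch of the Kalman idea, a constant-velocity filter could be run per keypoint coordinate; the rate and noise parameters below are placeholders to be tuned, not values from the repo:

```python
# Constant-velocity Kalman filter for one keypoint coordinate, used to smooth
# the depth noise and to coast through short occlusions (predict-only updates).
import numpy as np

class KeypointKalman1D:
    def __init__(self, dt=1/30., q=1e-2, r=5e-3):
        self.x = np.zeros(2)                          # state: [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1., dt], [0., 1.]])       # constant-velocity model
        self.Q = q * np.array([[dt**4/4, dt**3/2], [dt**3/2, dt**2]])
        self.H = np.array([[1., 0.]])                 # we only observe the position
        self.R = np.array([[r]])                      # measurement (depth) noise

    def step(self, z=None):
        # predict; called with z=None when the keypoint is occluded
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if z is not None:
            # update with the observed coordinate
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                              # filtered position estimate

# usage: one filter per coordinate of each tracked keypoint
```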
Nice comparison 👍🏻
> I think we could use the first solution to run the demos, and later verify with the other parties involved whether the metrics can be considered useful. In the future, a Kalman-based estimator could be used to improve the keypoint tracking.
I agree 👍🏻
Related PR to apply the first solution:
Now that the transformation of the skeleton keypoints and the projection on the human planes (sagittal, coronal, transverse) are implemented, it might be useful to test whether the latter is actually an improvement. The projection of, e.g., the step length on the sagittal plane requires accurate placement of the shoulders, and the RealSense depth noise might corrupt this information. This issue will collect tests on the subject, in order to draw conclusions and evaluate possible strategies.
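For context, here is an illustrative construction of the three plane normals from the shoulder and hip keypoints (the conventions are assumptions for the sketch, not necessarily the implemented ones); it makes explicit how noisy shoulder placement propagates into every projection:

```python
# Illustrative sketch, not the actual implementation: the sagittal-plane normal
# is taken along the shoulder line (mediolateral axis), the transverse-plane
# normal along the hip-center-to-shoulder-center (trunk) axis, and the
# coronal-plane normal orthogonal to both.
import numpy as np

def body_plane_normals(shoulder_l, shoulder_r, hip_l, hip_r):
    """Unit normals of the sagittal, coronal and transverse planes."""
    shoulder_c = (np.asarray(shoulder_l, dtype=float) + np.asarray(shoulder_r, dtype=float)) / 2.0
    hip_c = (np.asarray(hip_l, dtype=float) + np.asarray(hip_r, dtype=float)) / 2.0
    sagittal_n = np.asarray(shoulder_r, dtype=float) - np.asarray(shoulder_l, dtype=float)
    sagittal_n /= np.linalg.norm(sagittal_n)          # mediolateral axis
    transverse_n = shoulder_c - hip_c
    transverse_n /= np.linalg.norm(transverse_n)      # longitudinal (trunk) axis
    coronal_n = np.cross(sagittal_n, transverse_n)
    coronal_n /= np.linalg.norm(coronal_n)            # anteroposterior axis
    return sagittal_n, coronal_n, transverse_n

def project_on_plane(v, plane_normal):
    """Remove the component of v along the plane normal."""
    v = np.asarray(v, dtype=float)
    return v - np.dot(v, plane_normal) * plane_normal

# e.g. the step vector with its lateral component removed:
# step_on_sagittal = project_on_plane(ankle_l - ankle_r, sagittal_n)
```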