Closed: youngstu closed this issue 3 years ago
Use case: 3D-joint-based human-computer interaction control, for example on Nreal AR glasses.
https://www.youtube.com/watch?v=9LxOlsHu3r8&ab_channel=UploadVR
Without absolute depth, it is impossible to judge whether a virtual button has actually been touched.
To get 3D coordinates in camera space, you need the absolute depth from the hand to the camera, which is not feasible for monocular methods due to the inherent depth ambiguity.
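To make the ambiguity concrete, here is a minimal sketch with a pinhole camera model (the intrinsics `fx, fy, cx, cy` are hypothetical values, not from any particular dataset): a hand joint at one depth and the same joint scaled 2x in both size and distance project to exactly the same pixel, so a single image cannot distinguish them.

```python
# Sketch of monocular depth/scale ambiguity under a pinhole camera model.
# Intrinsics below are made-up illustrative values.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0

def project(x, y, z):
    """Project a camera-space 3D point (meters) to pixel coordinates."""
    return (fx * x / z + cx, fy * y / z + cy)

# A hand joint 0.4 m from the camera...
near = project(0.05, 0.02, 0.4)
# ...and the same joint with size and distance both doubled.
far = project(0.10, 0.04, 0.8)

print(near, far)  # identical pixels: the two hands are indistinguishable in 2D
```

This is why monocular networks typically predict root-relative coordinates only; the absolute root depth is unobservable without extra cues.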
@youngstu An update: you can refer to the paper "Camera-Space Hand Mesh Recovery via Semantic Aggregation and Adaptive 2D-1D Registration" for obtaining absolute distance in camera space. One additional requirement is the availability of the camera intrinsics.
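For reference, once a method like the one above estimates the absolute root depth, lifting a 2D keypoint into camera space with the intrinsics is a standard back-projection. A minimal sketch (the intrinsic matrix `K` and the pixel/depth values are hypothetical):

```python
import numpy as np

# Hypothetical pinhole intrinsics; replace with your camera's calibration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) at a known metric depth to camera space."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: a keypoint at pixel (382.5, 265.0) with estimated depth 0.4 m.
p = backproject(382.5, 265.0, 0.4, K)
print(p)  # [0.05 0.02 0.4 ]
```

This is only the geometric lifting step; estimating the depth itself is what the cited paper addresses.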
How can we get 3D coordinates in the camera coordinate system?
Given that the absolute coordinates of the hand are not recovered, how can 3D hand tracking be applied to augmented-reality scenarios such as AR glasses?