microsoft / HoloLens2ForCV

Sample code and documentation for using the Microsoft HoloLens 2 for Computer Vision research.
MIT License

Please add access to feature points in research mode #114

Open trandzik opened 3 years ago

trandzik commented 3 years ago

It would be awesome if one could have access to the tracked feature points in Research Mode on the HL2. Such access would greatly help with debugging and with understanding which parts of the environment can and cannot be tracked by the visible light cameras. By exposing a point cloud of the currently tracked feature points (containing x, y, z for each point, ideally also the luminance of the tracked pixel), you would let a researcher or developer check whether the HL2 finds enough feature points in a specific area, and thus decide in advance whether tracking will work reliably there or struggle due to a low number of feature points.
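A minimal sketch of what such a per-point record and a dump routine could look like; the struct name, fields, and binary layout here are hypothetical illustrations of the request, not an existing Research Mode API:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical record for one tracked feature point (not an existing
// Research Mode type): world-space position plus the luminance of the
// tracked pixel, as requested in this issue.
struct FeaturePoint {
    float   x, y, z;    // position in the world coordinate frame, meters
    uint8_t luminance;  // 8-bit intensity of the tracked VLC pixel
};

// Dump a snapshot of tracked points to a simple binary file
// (count header followed by packed records) for offline analysis.
bool DumpFeaturePoints(const std::vector<FeaturePoint>& points, const char* path)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    const uint32_t count = static_cast<uint32_t>(points.size());
    std::fwrite(&count, sizeof(count), 1, f);
    for (const FeaturePoint& p : points) {
        // Write fields individually to avoid struct-padding differences.
        std::fwrite(&p.x, sizeof(p.x), 1, f);
        std::fwrite(&p.y, sizeof(p.y), 1, f);
        std::fwrite(&p.z, sizeof(p.z), 1, f);
        std::fwrite(&p.luminance, sizeof(p.luminance), 1, f);
    }
    std::fclose(f);
    return true;
}
```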

For example, one could capture these point clouds in a museum where people will be wearing the HL2 and then analyze them on a computer. It would be immediately clear where the "tracking weak spots" are that need to be addressed by adding markers or more light. Also, when adding such helper markers, one could stream the point cloud to a computer in real time and see whether feature extraction picks them up as new feature points. Without access to the feature points, it is very hard to judge overall tracking quality in challenging large-scale environments.
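Until (or unless) the internal tracker's points are exposed, a rough proxy for "does this area have enough texture to track" is to run a standard detector over the grayscale visible-light frames that Research Mode already provides. Below is a sketch using OpenCV's ORB on a saved VLC frame; this only approximates feature richness and is not the detector the HL2 tracker actually uses (the input filename is an assumption):

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <algorithm>
#include <cstdio>
#include <vector>

// Estimate feature richness of one 8-bit grayscale VLC frame by counting
// ORB keypoints per image cell. Low counts flag likely "tracking weak spots".
int main()
{
    // Assumed input: a VLC frame saved as an 8-bit grayscale image.
    cv::Mat frame = cv::imread("vlc_frame.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> keypoints;
    orb->detect(frame, keypoints);

    // Tally keypoints in a 4x4 grid to localize texture-poor regions.
    const int rows = 4, cols = 4;
    int counts[rows][cols] = {};
    for (const cv::KeyPoint& kp : keypoints) {
        int r = std::min(rows - 1, static_cast<int>(kp.pt.y * rows / frame.rows));
        int c = std::min(cols - 1, static_cast<int>(kp.pt.x * cols / frame.cols));
        ++counts[r][c];
    }
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c)
            std::printf("%4d ", counts[r][c]);
        std::printf("\n");
    }
    return 0;
}
```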

For reference, I am attaching a look at point clouds with feature points from an arbitrary SLAM algorithm.

P.S. I am not sure if this is the right place to post a feature request for Research Mode, but I haven't found a better one so far. The Microsoft docs also state: "For HoloLens 2, use the issue tracker in the HoloLens2ForCV repository to post feedback."

jtomori commented 3 years ago

:+1: It would help with optimizing the environment for good tracking and with identifying edge cases.

guillermoacdc commented 3 years ago

It would be great to know whether HoloLens SLAM is feature-based, direct, or a hybrid visual (direct/feature) inertial approach.
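For reference, the usual distinction (general background only, since the HoloLens tracker's internals are not public): feature-based SLAM minimizes the reprojection error of sparse keypoints, while direct SLAM minimizes a photometric error on raw pixel intensities; visual-inertial hybrids add IMU terms to either objective. Roughly:

$$E_{\text{feature}} = \sum_i \left\| \mathbf{u}_i - \pi(T\,\mathbf{p}_i) \right\|^2, \qquad E_{\text{direct}} = \sum_{\mathbf{u}} \left( I_{\text{ref}}(\mathbf{u}) - I\big(\pi(T\,\pi^{-1}(\mathbf{u}, d_{\mathbf{u}}))\big) \right)^2$$

where $T$ is the camera pose, $\pi$ the projection function, $\mathbf{p}_i$ the landmark positions, and $d_{\mathbf{u}}$ the depth at pixel $\mathbf{u}$.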