LocalJoost / BlogComments

This repo is solely used for comments on https://localjoost.github.io/

Comments on "Running a YoloV8 model on a Magic Leap 2 to recognize objects in 3D space" #459

Open LocalJoost opened 1 year ago

LocalJoost commented 1 year ago

Original article: https://localjoost.github.io/Running-a-YoloV8-model-on-a-Magic-Leap-2-to-recognize-objects-in-3D-space/

JCMazza commented 1 year ago

Hi, I was wondering, would this work on a Quest Pro or Quest 3 by any chance?

LocalJoost commented 1 year ago

@JCMazza Nope, no chance. Meta doesn't give camera access, for privacy reasons. Which is kind of hilarious when you take Meta's history into account 😉

JCMazza commented 1 year ago

Thanks for the quick reply. Yeah, I kept reading we couldn't access the camera on the Quest, but was maybe hoping you had been able to work some magic with it in the past lol.

So ideally, if a device lets us access the camera feed, then in theory we could make it work? I heard the Pico 4 Enterprise or Lynx R1 gives us some sort of access to the camera, so do you think what you did here for the ML2 and HoloLens might work? Also, your examples here, are they purely for an MRTK setup?

Finally, I wanted to ask: can we extend beyond just recognizing and labeling an object with the YOLO training model? Like, for example, tracking the object's position and rotation in physical space and overlaying digital twins over it, or is YOLO limited to detection and labeling?

Sorry for all the questions, and you don't have to answer if you don't wish to. I am simply trying to gather as much information as possible, as we are currently doing some research on object detection for a potential project and are trying to work out what the best hardware for this is (other than the HoloLens 2 and ML2), mainly because of accessibility concerns.

LocalJoost commented 1 year ago

I am not familiar with either Pico or Lynx. If they give you camera access and create a spatial map you can shoot raycasts at, then it might work (see the sketch below). No, my setup is not purely for MRTK; I just took MRTK as it is a convenient and well-known starting point for me.

Yolo is basically meant to recognize objects in 2D images. I don't think it's able to do 3D position tracking; it's definitely no Vuforia. I have demos of ML models actually capable of doing that, but I doubt very much they would be able to run locally on a HoloLens. I think you would then need some powerful computer in the cloud doing the heavy lifting for you. I could only run nano models on HoloLens (and Magic Leap 2) with some kind of performance, and even then I could only get about 4 recognitions per second.

And no worries about questions. I like helping fellow devs, that's what makes me an MVP. Just don't think of me as some fountain of wisdom, I ask Google stuff all the time too ;)
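For anyone reading along, this is roughly what "shooting raycasts at the spatial map" looks like in Unity. It's a minimal sketch, not the code from the article: the camera reference, the spatial-mesh layer mask, and the top-left pixel origin of the detection coordinates are all assumptions.

```csharp
// Sketch: convert a YOLO detection's center (in camera-image pixels) into a
// ray and raycast against the spatial mesh to get a 3D position for the label.
using UnityEngine;

public class DetectionToWorld : MonoBehaviour
{
    // Assumed: the spatial mesh (e.g. MRTK spatial awareness) lives on its own layer.
    [SerializeField] private LayerMask spatialMeshLayer;
    // Assumed: a camera posed like the device's RGB camera at capture time.
    [SerializeField] private Camera trackedCamera;

    // imageWidth/imageHeight are the dimensions of the frame the model saw.
    public bool TryGetWorldPosition(Vector2 detectionCenterPx, int imageWidth, int imageHeight,
                                    out Vector3 worldPosition)
    {
        // Pixel coordinates (origin top-left) to Unity viewport coordinates (origin bottom-left).
        var viewport = new Vector3(detectionCenterPx.x / imageWidth,
                                   1f - detectionCenterPx.y / imageHeight, 0f);

        Ray ray = trackedCamera.ViewportPointToRay(viewport);

        // Shoot the ray at the spatial map; the hit point is where the object sits in 3D space.
        if (Physics.Raycast(ray, out RaycastHit hit, 10f, spatialMeshLayer))
        {
            worldPosition = hit.point;
            return true;
        }

        worldPosition = Vector3.zero;
        return false;
    }
}
```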

laalm99 commented 9 months ago

Hello, I noticed that you used WebCamTexture for the HoloLens 2; in my experience that library can only capture from the PC's webcam. In your case, did the textures come from the HL device's camera?

LocalJoost commented 9 months ago

It uses the HoloLens webcam, the thing that is right above your nose, in the top front center of the device.
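For completeness, a minimal Unity sketch of how WebCamTexture ends up reading that camera: on the device, the HoloLens camera is simply the entry Unity enumerates in WebCamTexture.devices, so the same code that grabs a laptop webcam in the editor grabs the device camera on HoloLens. The requested resolution and frame rate below are assumptions, not the article's actual settings.

```csharp
// Sketch: start a WebCamTexture feed; on HoloLens 2 the enumerated device is
// the headset's own front-facing camera, not a PC webcam.
using UnityEngine;

public class CameraFeed : MonoBehaviour
{
    private WebCamTexture webCamTexture;

    private void Start()
    {
        if (WebCamTexture.devices.Length == 0) return; // no camera available

        var deviceName = WebCamTexture.devices[0].name;
        webCamTexture = new WebCamTexture(deviceName, 896, 504, 30); // assumed resolution/fps
        webCamTexture.Play();
    }

    private void Update()
    {
        if (webCamTexture != null && webCamTexture.didUpdateThisFrame)
        {
            // A fresh frame from the device camera is available here and could be
            // handed to the YOLO model, e.g. via webCamTexture.GetPixels32().
        }
    }

    private void OnDestroy()
    {
        if (webCamTexture != null) webCamTexture.Stop();
    }
}
```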

laalm99 commented 9 months ago

Thank you!