geaxgx / depthai_hand_tracker

Running Google Mediapipe Hand Tracking models on Luxonis DepthAI hardware (OAK-D-lite, OAK-D, OAK-1,...)
MIT License
398 stars · 76 forks

Using laptop integrated camera #23

Open · emoullet opened this issue 1 year ago

emoullet commented 1 year ago

Hello,

I'm building an application for which I'd like to be able to switch between an OAK-D S2 and a standard, laptop-integrated webcam. It is my understanding that this should be possible (at the cost of several unavailable features) by running:

python demo.py -i '0'

But I get the following RuntimeError:

Palm detection blob : /home/emoullet/GitHub/depthai_hand_tracker/models/palm_detection_sh4.blob
Landmark blob : /home/emoullet/GitHub/depthai_hand_tracker/models/hand_landmark_lite_sh4.blob
Traceback (most recent call last):
  File "/home/emoullet/GitHub/depthai_hand_tracker/demo.py", line 59, in <module>
    tracker = HandTracker(
  File "/home/emoullet/GitHub/depthai_hand_tracker/HandTracker.py", line 130, in __init__
    self.device = dai.Device()
RuntimeError: No available devices

Am I missing something in the arguments to pass for it to work with the integrated camera? Or is there some preparatory work to do on the camera (e.g. calibration)?

Many thanks in advance, and hats off for this amazing tool!

geaxgx commented 1 year ago

Hi! Even if you want to process the video frames coming from your webcam, you still need the Myriad processor of your OAK-D to run the detection and landmark regression models. So the OAK-D still needs to be plugged in. RuntimeError: No available devices is the typical message you get when no OAK device is visible to the system.
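A quick way to check whether depthai can see an OAK device before constructing a HandTracker (the dai.Device() constructor raises the RuntimeError above when it cannot) is to enumerate the attached devices first. This is a sketch, assuming the depthai v2 API (Device.getAllAvailableDevices and DeviceInfo.getMxId); the guarded import is only there so the snippet degrades gracefully on a machine without depthai installed:

```python
def list_oak_devices():
    """Return the MxIDs of attached OAK devices, or an empty list when
    depthai is not installed or no device is plugged in.
    Assumes the depthai v2 API (Device.getAllAvailableDevices)."""
    try:
        import depthai as dai
    except ImportError:
        return []
    return [info.getMxId() for info in dai.Device.getAllAvailableDevices()]

if __name__ == "__main__":
    devices = list_oak_devices()
    if devices:
        print("OAK devices found:", devices)
    else:
        # This is the situation in which demo.py raises
        # "RuntimeError: No available devices".
        print("No OAK device found")
```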

emoullet commented 1 year ago

Thank you for your answer! Is there any way to run the detection and landmark regression models directly on the computer? I already use mediapipe, but many of the additional features proposed in this repo are quite interesting, and if possible I would rather use them directly than re-implement them.

geaxgx commented 1 year ago

Using mediapipe directly is the best way to run the models on your computer. Another option is to convert the models to the OpenVINO format and run them on your Intel CPU. That is actually what I did in this older repo (https://github.com/geaxgx/openvino_hand_tracker) as a preliminary step before creating the repo dedicated to OAK devices. So initially, https://github.com/geaxgx/depthai_hand_tracker was just a copy of https://github.com/geaxgx/openvino_hand_tracker in which I replaced the OpenVINO-specific code with equivalent depthai-specific code. But because I was focused on the OAK-D, further modifications and new features went only into this repo, and the OpenVINO repo was more or less abandoned.

udayzee05 commented 1 year ago

Can you suggest a way to improve mediapipe's accuracy at detecting hands from a long distance on a laptop, using the method you used for the OAK-D in your implementation? Thank you.

geaxgx commented 1 year ago

@udayzee05 If you are working with mediapipe, you can directly use Holistic (https://google.github.io/mediapipe/solutions/holistic). It relies on blazepose to estimate the body pose, and the body pose gives the region in which to look for the hands. I am using movenet instead of blazepose, but the principle is similar.
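The pose-driven idea above can be sketched in a few lines: take two body-pose keypoints near the hand and derive a square region of interest in which to run the hand landmark model. This is a toy illustration of the principle, not this repo's actual code; the 0.5 center offset and 2.0 scale factor are assumptions chosen for readability, not the values used by mediapipe or movenet:

```python
import math

def hand_roi_from_pose(wrist, index_mcp, scale=2.0):
    """Toy sketch: derive a square hand ROI from two pose keypoints
    given as normalized (x, y) image coordinates.

    The ROI center is pushed halfway from the wrist toward the base of
    the index finger, and the half-size of the box is a multiple of the
    wrist-to-index distance.  Returns (cx, cy, half_size)."""
    dx = index_mcp[0] - wrist[0]
    dy = index_mcp[1] - wrist[1]
    cx = wrist[0] + 0.5 * dx          # center slightly past the wrist
    cy = wrist[1] + 0.5 * dy
    half = scale * math.hypot(dx, dy)  # box grows with apparent hand size
    return (cx, cy, half)
```

The benefit for long-distance detection is that the pose model sees the whole body (a much larger, easier target than a distant hand), so the hand crop it produces gives the hand landmark model a close-up view it could not get from the full frame.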