Open shahanhyperspace opened 2 months ago
Oh that would be very cool but unfortunately it's a lot more advanced than MediaPipe is capable of.
MediaPipe can detect poses in a single video frame, but it has no ability to track a specific person across multiple frames or cameras.
The way you could approach it would be to run three MediaPipe instances (one per camera), then use some OpenCV in Python to process the detected keypoints and combine them into one single set of 3D coordinates, but it's, uh... a bit technical.
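To give a rough idea of the "combine keypoints into 3D" step: once you have the same keypoint seen from two calibrated cameras, you can triangulate it with a linear DLT solve. This is just a minimal sketch, assuming you already have each camera's 3x4 projection matrix from calibration (e.g. via OpenCV's `calibrateCamera`/`stereoCalibrate`); the function and variable names here are illustrative, not anything from MediaPipe itself.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one keypoint seen by two cameras.

    P1, P2: 3x4 camera projection matrices (from calibration).
    pt1, pt2: (x, y) pixel coordinates of the same keypoint in each view.
    Returns the estimated 3D point (x, y, z) in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)
```

In practice you would run this per keypoint per frame, feeding in the 2D landmarks that each MediaPipe instance reports (scaled from normalized coordinates to pixels). OpenCV also ships `cv2.triangulatePoints`, which does essentially the same thing.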
Is there anything else you would suggest for tracking users and their positions at runtime? I really don't want to use the Azure Kinect or any depth sensor.
Is it possible to use multiple cameras with MediaPipe? I have 2-3 cameras all lined up one after the other, tracking objects. I don't want to run three different programs on three different PCs; I was trying to sync everything in one.
I tried to merge all my camera feeds into one, but is there a way I can give the merged camera feed to MediaPipe?
The idea is to track a user walking along a wall. The wall has cameras inside to track the user; if the cameras (webcams) are all merged together into one camera feed as shown in the picture, then syncing the person's position will be easier.
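For what it's worth, the frame-merging part itself is straightforward with NumPy: stack equally sized frames side by side and pass the result to a single MediaPipe instance. The sketch below assumes all webcams deliver the same resolution (otherwise resize first, e.g. with `cv2.resize`); the helper names are illustrative. One caveat to keep in mind: landmark x-coordinates come back relative to the merged image, so you have to map them back to the originating camera yourself.

```python
import numpy as np

def merge_frames(frames):
    """Stack equally sized BGR frames side by side into one wide frame."""
    h = frames[0].shape[0]
    assert all(f.shape[0] == h for f in frames), "frames must share a height"
    return np.hstack(frames)

def to_camera_coords(x_pixel, frame_width):
    """Map an x-coordinate in the merged frame back to (camera index, local x)."""
    return int(x_pixel) // frame_width, int(x_pixel) % frame_width
```

The merged frame can then be fed to one MediaPipe Pose instance as usual. Note the earlier caveat still applies, though: MediaPipe Pose only returns a single pose per image, so a merged feed only works if the person is visible in one camera at a time.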