CMU-Perceptual-Computing-Lab / openpose

OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation
https://cmu-perceptual-computing-lab.github.io/openpose

OPENPOSE-PYTHON: How to use a stereo camera to read 3D data in pyopenpose #1119

Closed wangwwwwwwv closed 5 years ago

wangwwwwwwv commented 5 years ago

Can pyopenpose run the 3D module now? I didn't find a 3D pyopenpose README; I'm still searching...

Issue Summary

My system: Ubuntu 18.04. I use the ZED camera and have already obtained the RGB and depth data (as NumPy arrays). But I wonder:

1. How can I make pyopenpose recognize the 3D data from the ZED stereo camera?
2. Could I feed 2D data to pyopenpose and then merge the depth data (from the stereo camera) into the output?
3. Because of OpenPose's neural network, some occluded keypoints are still drawn in the image. Does that affect the 3D model? I think it would.
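A minimal sketch of the second idea, assuming you already have OpenPose's 2D keypoints (an array of shape `(people, parts, 3)` holding `x, y, confidence`, as in `datum.poseKeypoints`) and a ZED depth map aligned with the image you fed to OpenPose; the function name and array layout here are illustrative, not part of the OpenPose API:

```python
import numpy as np

def attach_depth(keypoints_2d, depth_map):
    """Append the depth value at each detected 2D keypoint.

    keypoints_2d: float array of shape (num_people, num_parts, 3)
                  holding (x, y, confidence) in pixel coordinates.
    depth_map:    float array of shape (H, W), aligned with the RGB
                  image that was fed to OpenPose (e.g. the ZED left view).
    Returns an array of shape (num_people, num_parts, 4): (x, y, conf, z).
    """
    h, w = depth_map.shape
    out = np.zeros(keypoints_2d.shape[:2] + (4,), dtype=np.float32)
    out[..., :3] = keypoints_2d
    for p in range(keypoints_2d.shape[0]):
        for k in range(keypoints_2d.shape[1]):
            x, y, conf = keypoints_2d[p, k]
            if conf > 0:  # OpenPose reports (0, 0, 0) for undetected parts
                # Clamp to the image bounds before indexing the depth map
                u = min(max(int(round(x)), 0), w - 1)
                v = min(max(int(round(y)), 0), h - 1)
                out[p, k, 3] = depth_map[v, u]
    return out
```

Keypoints with zero confidence keep a depth of 0, so occluded parts (question 3) can be filtered out downstream instead of polluting the 3D result.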

Type of Issue

Your System Configuration

OpenPose version: Latest GitHub code

  1. Non-default settings:

    • 3-D Reconstruction module added? (by default: no):openGL only?
    • Any other custom CMake configuration with respect to the default version? (by default: no):
  2. If Python API:

    • Python version: 3.6.7
    • Numpy version 1.16.0

Code:

import argparse
import os
import sys
from sys import platform
from PIL import Image
import cv2
import numpy as np
import pyzed.sl as sl

try:
    # Windows import
    if platform == "win32":
        # Change these variables to point to the correct folder (Release/x64 etc.)
        dir_path = os.path.dirname(os.path.realpath(__file__))
        sys.path.append(dir_path + '/../../python/openpose/Release')
        os.environ['PATH'] = os.environ['PATH'] + ';' + dir_path + '/../../x64/Release;' + dir_path + '/../../bin;'
        import pyopenpose as op
    else:
        sys.path.append('/home/wsq/Documents/openpose/build/python')
        from openpose import pyopenpose as op
except ImportError as e:
    print('Error: OpenPose library could not be found. Did you enable `BUILD_PYTHON` in CMake and have this Python script in the right folder?')
    raise e

params = dict()
params["model_folder"] = "/home/wsq/Documents/openpose/models"
params["3d"] = True
params["number_people_max"] = 1
cap = cv2.VideoCapture(1)

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Run OpenPose on the frame
    datum = op.Datum()
    datum.cvInputData = frame
    opWrapper.emplaceAndPop([datum])

    cv2.imshow('frame', datum.cvOutputData3D)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

Error: Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.

Error: Only 1 camera detected. The 3-D reconstruction module can only be used with > 1 cameras simultaneously. E.g., using FLIR stereo cameras (--flir_camera).

Where can I specify the camera name? I am using a ZED stereo camera.

Does OpenPose recognize every stereo camera now? If not, what should I do? Thanks very much! Maybe I should read the README more carefully. LOL

gineshidalgo99 commented 5 years ago

We do not support Zed, no clue how it works. Sorry!

mrbjkk commented 5 years ago

@wangwwwwwwv May I have your email? I am working on the same task as you.

deepslee commented 5 years ago

Could you contact me? I am working on the same task as you. 1242457494@qq.com

ahmedadamji commented 3 years ago

We do not support Zed, no clue how it works. Sorry!

Hello @gineshidalgo99, I am using OpenPose to compute the pointing line from a person to an object. I am using an RGB-D camera in a Gazebo simulation, which publishes depth data to a message topic. For my application I need the 3D keypoints of the human in the frame, but OpenPose only gives me 2D keypoints since the camera is not FLIR. I am writing my implementation in Python. Do you know how I can configure OpenPose to process depth data from the simulation? If so, could you please let me know, as I have been struggling to find a relevant source?

gineshidalgo99 commented 3 years ago

See the FAQ and the closed GitHub issues for Kinect and 3D. Pretty much, you can just read the 3D data from your camera using its own API, checking the positions where OpenPose detected the keypoints.

ahmedadamji commented 3 years ago

See the FAQ and the closed GitHub issues for Kinect and 3D. Pretty much, you can just read the 3D data from your camera using its own API, checking the positions where OpenPose detected the keypoints.

Thank you! Checking the depth directly from the camera using the keypoints seems wise!
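The suggestion above (taking depth from the camera's own API at the pixels where OpenPose found keypoints) amounts to a standard pinhole back-projection into camera coordinates. A minimal sketch, assuming you have the keypoint pixel, its metric depth, and the camera intrinsics; the intrinsic values below are placeholders, and the real ones come from your camera's calibration (e.g. the ZED SDK or a Gazebo `camera_info` topic):

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Convert a pixel (u, v) with metric depth z into 3D camera
    coordinates using the pinhole model:
        X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Placeholder intrinsics for a hypothetical 1280x720 camera
fx = fy = 700.0        # focal lengths in pixels
cx, cy = 640.0, 360.0  # principal point

# A keypoint at the principal point lies on the optical axis:
p = backproject(640.0, 360.0, 1.5, fx, fy, cx, cy)
# p == [0.0, 0.0, 1.5]: 1.5 m straight ahead of the camera
```

Doing this per keypoint gives a 3D skeleton without the multi-camera `--3d` module, which is exactly the workaround suggested for non-FLIR cameras.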