felixchenfy / Realtime-Action-Recognition

Apply ML to the skeletons from OpenPose; 9 actions; multiple people. (WARNING: I'm sorry, this is only good for a course demo, not for real-world applications! Those are very difficult!)
MIT License
875 stars 256 forks

how to run the real-time demo from your web camera? #9

Closed lightwithshadow closed 4 years ago

lightwithshadow commented 4 years ago

Hey Chen!

First of all, thanks for your work on this project. I have the following questions for you:

1) In the testing phase, how do I call the camera?
2) For testing on a webcam, is there any code I can refer to?

Looking forward to your reply! Thanks again!

yours, light

felixchenfy commented 4 years ago

@lightwithshadow Hi,

  1. For how to call the camera, please see this: https://github.com/felixchenfy/Realtime-Action-Recognition#3-how-to-run-testing
    The command is:

    python src/run_detector.py --source webcam

  2. For how to read data from a web camera, you may refer to the class DataLoader_WebCam in this file: https://github.com/felixchenfy/Realtime-Action-Recognition/blob/master/src/mylib/io.py
    Or, you may search for cv2.VideoCapture(0) on Google to learn how to use it.

felixchenfy commented 4 years ago

@lightwithshadow Hi, I just refactored the code to make it more readable and the API easier to use. The new commands for running inference are:

Test on video file

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type video \
    --data_path data_test/exercise.avi \
    --output_folder output

Test on a folder of images

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type folder \
    --data_path data_test/apple/ \
    --output_folder output

Test on web camera

python src/s5_test.py \
    --model_path model/trained_classifier.pickle \
    --data_type webcam \
    --data_path 0 \
    --output_folder output
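Note that for `--data_type webcam` the `--data_path` is a device index ("0" for the default camera), while for `video` and `folder` it is a filesystem path. A hypothetical helper (the name resolve_source is mine, not part of the repository) sketching how such an argument can be mapped to the value cv2.VideoCapture expects:

```python
def resolve_source(data_type, data_path):
    """Map --data_type / --data_path style arguments to an OpenCV source.

    Hypothetical helper: webcam paths like "0" become integer device
    indices; video file paths are passed through unchanged.
    """
    if data_type == "webcam":
        return int(data_path)  # e.g. "0" -> 0, the default camera
    return data_path           # e.g. "data_test/exercise.avi"

# cv2.VideoCapture(resolve_source("webcam", "0")) would open device 0.
print(resolve_source("webcam", "0"))
print(resolve_source("video", "data_test/exercise.avi"))
```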