Open moahaimen opened 2 years ago
Hi,
Sorry for the delay. We were on holiday. For your problem, you could take a different, easier approach than this code.
You could get the skeleton coordinates using MediaPipe, for example (it is very easy to use; see https://mediapipe.dev/). In addition, you could detect the object with YOLO, but you would have to train it to recognize your object.
Finally, once you have the object coordinates and the skeleton wrist coordinates, you could analyze whether the person is holding the object by simply checking distance and relative position (it may be easier than using an LSTM, but that depends on the problem). If the action unfolds over time, then you would need a recurrent network.
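To illustrate the distance-and-position check described above, here is a minimal sketch. It assumes you already have the wrist keypoints (e.g. from MediaPipe Pose) in pixel coordinates and the object's bounding box in YOLOv5's `(x1, y1, x2, y2)` format; the function names and the `margin` threshold are hypothetical and would need tuning for your own videos.

```python
def point_near_box(point, box, margin=20.0):
    """True if (x, y) lies inside `box` expanded by `margin` pixels on each side.

    `box` is (x1, y1, x2, y2), as YOLOv5 detections report it.
    """
    x, y = point
    x1, y1, x2, y2 = box
    return (x1 - margin) <= x <= (x2 + margin) and (y1 - margin) <= y <= (y2 + margin)

def is_holding(wrists, box, margin=20.0):
    """Heuristic: the person is 'holding' the object if any wrist keypoint
    falls within `margin` pixels of the object's bounding box."""
    return any(point_near_box(w, box, margin) for w in wrists)

# Example: left wrist at (50, 50) overlaps a box at (40, 40, 100, 100),
# so the heuristic reports holding; a far-away wrist does not.
print(is_holding([(50, 50), (300, 300)], (40, 40, 100, 100)))   # True
print(is_holding([(300, 300), (310, 290)], (40, 40, 100, 100))) # False
```

For actions that unfold over time, you would instead feed a sequence of these per-frame features (wrist positions, object box, holding flag) into a recurrent network such as an LSTM, as noted above.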
I hope I have been of some help.
Kind regards,
Jaime
From "moahaimen" @. To "jaiduqdom/LRCN_OP_YOLO" @. Cc "Subscribed" @.*** Date Thu, 04 Aug 2022 08:16:56 -0700 Subject [jaiduqdom/LRCN_OP_YOLO] Yolov5 for Human Action Recognition (Issue #1)
Hi, I saw your code while searching for a solution to my problem. Before I try it, I just need to understand whether the code merges LSTM + YOLOv5 for object detection and human action recognition. My problem is that I need to detect an object in a video and recognize the action of the human holding that object. Is that possible? Thank you.