ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Merge Yolov5 with LSTM for Human Activity Recognition task #12769

Closed · alina15andreeva closed this issue 3 months ago

alina15andreeva commented 4 months ago

Search before asking

Question

Hello! I want to know if it is possible to somehow merge YOLOv5 with an LSTM for a Human Activity Recognition task. YOLOv5 should be trained to detect certain objects in the video, and the LSTM should recognize the action being performed. I already have a trained LSTM model, but I wish to increase its accuracy by introducing the presence of certain objects typical of certain kinds of actions. Can anyone help me with that? I am new to this, and I am not sure how, or whether, this can be implemented at all.

Additional

No response

github-actions[bot] commented 4 months ago

👋 Hello @alina15andreeva, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To get started:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

*   Notebooks with free GPU: Gradient, Colab, Kaggle
*   Google Cloud Deep Learning VM. See GCP Quickstart Guide
*   Amazon Deep Learning AMI. See AWS Quickstart Guide
*   Docker Image. See Docker Quickstart Guide

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023: YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

```bash
pip install ultralytics
```

glenn-jocher commented 4 months ago

@alina15andreeva hello! 🌟

Absolutely, integrating YOLOv5 with an LSTM for Human Activity Recognition is a feasible and exciting approach. You can use YOLOv5 to detect objects in each frame of the video, and then feed the detection results (like object classes and possibly their bounding box coordinates) as a sequence into the LSTM to recognize actions over time.

Here's a simplified workflow, with a minimal sketch after the list:

  1. Object Detection: Use YOLOv5 to detect objects in each frame.
  2. Data Preparation: Prepare the detection results in a format suitable for the LSTM. This might involve encoding the detected objects and their properties into a numerical format.
  3. Action Recognition: Feed the prepared data into your LSTM to classify the action.
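
For steps 1 and 2, here is a minimal sketch of one way to do this in Python, using the documented PyTorch Hub loading of YOLOv5. The per-class count histogram, the 0.5 confidence threshold, and the `encode_frame` helper are illustrative choices of mine, not part of YOLOv5:

```python
import torch

# Load a pretrained YOLOv5 model from PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

NUM_CLASSES = 80  # COCO class count for the pretrained yolov5s model

def encode_frame(frame):
    """Run YOLOv5 on one frame and return a fixed-length feature vector.

    Each frame is encoded as a per-class detection count histogram; richer
    encodings (e.g. appending normalized box coordinates) are worth trying.
    """
    results = model(frame)   # inference on one image (numpy array, PIL image, or path)
    det = results.xyxy[0]    # (n, 6) tensor: x1, y1, x2, y2, confidence, class
    feature = torch.zeros(NUM_CLASSES)
    for *_, conf, cls in det.tolist():
        if conf > 0.5:       # keep reasonably confident detections only
            feature[int(cls)] += 1.0
    return feature

# frames: an iterable of video frames, e.g. read with OpenCV
# sequence = torch.stack([encode_frame(f) for f in frames])  # shape (T, NUM_CLASSES)
```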

For integrating these components, you might need to write custom code that bridges the output of YOLOv5 with the input requirements of your LSTM model.
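
As an example of such a bridge, a hypothetical LSTM head consuming the per-frame feature sequence from the sketch above could look like this (`ActionClassifier` and its sizes are placeholders; your trained LSTM's input size and action set will differ):

```python
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    """Illustrative LSTM head that consumes per-frame YOLOv5 feature vectors."""

    def __init__(self, input_size=80, hidden_size=128, num_actions=10):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_actions)

    def forward(self, x):           # x: (batch, T, input_size)
        _, (h_n, _) = self.lstm(x)  # h_n: (num_layers, batch, hidden_size)
        return self.fc(h_n[-1])     # action logits: (batch, num_actions)

# Usage with the sequence built above (add a batch dimension):
# logits = ActionClassifier()(sequence.unsqueeze(0))
```

Since you already have a trained LSTM, an alternative is to concatenate the object-presence vector with the features your LSTM currently consumes and fine-tune, rather than training a new head from scratch.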

Remember, the key to success in such a project is experimentation. Try different ways of representing the YOLOv5 detections for the LSTM, and see what works best for your specific use case.

For more details on using YOLOv5, you can always refer to our documentation at https://docs.ultralytics.com/yolov5/.

Best of luck with your project, and feel free to reach out if you have more questions! 🚀

github-actions[bot] commented 3 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

*   Docs: https://docs.ultralytics.com
*   HUB: https://hub.ultralytics.com
*   Community: https://community.ultralytics.com

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐