ultralytics / yolo-flutter-app

A Flutter plugin for Ultralytics YOLO computer vision models
https://ultralytics.com
GNU Affero General Public License v3.0

Yolo pose estimation #28

Open yassine-kassis opened 3 weeks ago

yassine-kassis commented 3 weeks ago

Thank you for this package! I was just wondering if you intend to implement YOLO pose estimation too? If yes, what would be the timeline?

pderrenger commented 3 weeks ago

@yassine-kassis hello!

Thank you for your interest in YOLO and for your kind words! 😊 The YOLO community and the Ultralytics team are always excited to hear about the features our users are interested in.

Regarding your question about YOLO pose estimation, we understand the importance and potential impact of this feature. While we don't have a specific timeline for implementing pose estimation at the moment, it's definitely on our radar. We continuously prioritize features based on community feedback and demand.

In the meantime, if you have any specific use cases or requirements for pose estimation, feel free to share them. This helps us better understand the needs of our users and can influence our development roadmap.

Stay tuned for updates, and thank you for being a part of the YOLO community!

yassine-kassis commented 3 weeks ago

Thank you for the answer! I've seen that a lot of sports and health apps use pose estimation models (MoveNet, for example), so having this feature would be a huge step!

pderrenger commented 3 weeks ago

Hello @yassine-kassis,

Thank you for your insightful follow-up! 😊 We completely agree that pose estimation models have significant applications in sports, health, and many other fields. The potential for such a feature within the YOLO framework is indeed exciting.

As we continue to explore and develop new features, community feedback like yours is invaluable. While we don't have a specific timeline for implementing pose estimation yet, your input helps us prioritize and shape our development roadmap.

In the meantime, if you encounter any issues or have further suggestions, please don't hesitate to share them. Your contributions help us improve and innovate.

Thank you for being an active member of the YOLO community!

TAYAB009 commented 2 weeks ago

I'm looking to implement this feature for gym exercise body tracking; it would be a lot more helpful. My specific application is to calculate the joint angles between various keypoints and also the normalized height of the person!

pderrenger commented 2 weeks ago

Hello @TAYAB009,

Thank you for your interest in using Ultralytics YOLO for gym exercise body tracking! 😊 Your application sounds fascinating and aligns well with the capabilities of pose estimation.

To get started with calculating joint angles and normalized height, you can leverage the pose estimation features already available in YOLOv8. Here's a basic example to help you implement this:

import cv2
from ultralytics import YOLO, solutions

# Load the YOLO pose estimation model
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

# Initialize the AIGym solution for pose estimation
gym_object = solutions.AIGym(
    line_thickness=2,
    view_img=True,
    pose_type="pushup",  # Change this to the exercise type you are tracking
    kpts_to_check=[6, 8, 10],  # Keypoints for angle calculation
)

frame_count = 0
while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    frame_count += 1
    results = model.track(im0, verbose=False)  # Tracking recommended
    im0 = gym_object.start_counting(im0, results, frame_count)
    # Here you can add your custom code to calculate joint angles and normalized height

cap.release()
cv2.destroyAllWindows()

For calculating joint angles, you can use the keypoints provided by the model. Here's a simple function to calculate the angle between three points:

import numpy as np

def calculate_angle(a, b, c):
    a = np.array(a)  # First point
    b = np.array(b)  # Mid point
    c = np.array(c)  # End point

    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)

    if angle > 180.0:
        angle = 360 - angle

    return angle
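
As an illustration (not an API of this package), the helper above can be applied directly to the keypoints returned by the pose model; the indices 6, 8, and 10 below follow the COCO layout used by yolov8n-pose (right shoulder, right elbow, right wrist), matching kpts_to_check in the earlier example, and the image path is a placeholder:

from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
results = model("path/to/frame.jpg", verbose=False)  # single-image inference
kpts = results[0].keypoints.xy  # tensor of shape (num_people, 17, 2), pixel coordinates

if len(kpts) > 0:
    person = kpts[0]  # first detected person
    # COCO indices: 6 = right shoulder, 8 = right elbow, 10 = right wrist
    elbow_angle = calculate_angle(person[6].tolist(), person[8].tolist(), person[10].tolist())
    print(f"Right elbow angle: {elbow_angle:.1f} degrees")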

For normalized height, you can take the vertical distance between keypoints such as the nose and the ankles and divide it by the frame height, which gives a value that is independent of the image resolution.
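
A minimal sketch of that idea, assuming the same COCO keypoint layout (0 = nose, 15/16 = ankles) and reusing results and kpts from the snippet above:

def normalized_height(person_xy, frame_height):
    """Rough person height as a fraction of the frame height (illustrative only)."""
    nose_y = person_xy[0][1]
    ankle_y = max(person_xy[15][1], person_xy[16][1])  # lower of the two ankles
    return abs(ankle_y - nose_y) / frame_height

if len(kpts) > 0:
    frame_h = results[0].orig_shape[0]  # original image height in pixels
    print(f"Normalized height: {normalized_height(kpts[0].tolist(), frame_h):.2f}")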

If you encounter any issues or need further assistance, please ensure you are using the latest versions of torch and ultralytics. If the problem persists, provide a minimum reproducible code example so we can investigate further. You can find more details on creating a reproducible example here.

We appreciate your enthusiasm and look forward to seeing what you create with YOLO! πŸš€

TAYAB009 commented 2 weeks ago

How can I integrate this model for pose estimation in Flutter?

pderrenger commented 2 weeks ago

Hello @TAYAB009,

Thank you for your continued interest in using Ultralytics YOLO for gym exercise body tracking! 😊 Integrating YOLO pose estimation into a Flutter application is an exciting endeavor. While YOLO models are typically used in Python, you can leverage Flutter's ability to run native code to integrate the model.

Here’s a high-level approach to achieve this:

  1. Model Inference Backend:
    • Use a server or a native mobile backend (like a Python script running on a server or a native Android/iOS module) to handle the model inference.
    • The Flutter app will send video frames to this backend, which will process the frames and return the pose estimation results.
  2. Flutter Integration:
    • Use Flutter's http package to send video frames to the backend and receive the results.
    • Display the results (e.g., joint angles, normalized height) in the Flutter app.

Here’s a basic example to illustrate this approach:

Backend (Python with Flask)

First, create a simple Flask server to handle video frames and return pose estimation results:

from flask import Flask, request, jsonify
import cv2
import numpy as np
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("yolov8n-pose.pt")

@app.route('/pose_estimation', methods=['POST'])
def pose_estimation():
    file = request.files['frame'].read()
    npimg = np.frombuffer(file, np.uint8)  # np.fromstring is deprecated in NumPy
    img = cv2.imdecode(npimg, cv2.IMREAD_COLOR)

    results = model.track(img, verbose=False)  # returns a list of Results objects
    # Process results to extract keypoints, angles, etc.

    # Example response
    response = {
        "keypoints": results[0].keypoints.xy.tolist(),  # per-person (x, y) pixel coordinates
        "angles": calculate_angles(results[0].keypoints)
    }
    return jsonify(response)

def calculate_angles(keypoints):
    # Implement your angle calculation logic here
    return []

if __name__ == '__main__':
    app.run(debug=True)
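
Before wiring up the Flutter side, you can sanity-check the endpoint with a short Python script (a sketch; the image path is a placeholder and Flask's default local port is assumed):

import requests

# Hypothetical smoke test for the /pose_estimation route defined above
with open("path/to/frame.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/pose_estimation",
        files={"frame": f},  # matches request.files['frame'] in the Flask route
    )

print(resp.status_code)
print(resp.json())  # expected keys: "keypoints" and "angles"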

Flutter App

In your Flutter app, use the http package to send frames to the backend and display the results:

import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
import 'package:image_picker/image_picker.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: PoseEstimationScreen(),
    );
  }
}

class PoseEstimationScreen extends StatefulWidget {
  @override
  _PoseEstimationScreenState createState() => _PoseEstimationScreenState();
}

class _PoseEstimationScreenState extends State<PoseEstimationScreen> {
  final ImagePicker _picker = ImagePicker();
  String _result = '';

  Future<void> _sendFrameToBackend(XFile file) async {
    // Send the frame as multipart form data so the Flask backend can read it
    // from request.files['frame'].
    final request = http.MultipartRequest(
      'POST',
      Uri.parse('http://your-backend-url/pose_estimation'),
    );
    request.files.add(await http.MultipartFile.fromPath('frame', file.path));

    final response = await http.Response.fromStream(await request.send());

    if (response.statusCode == 200) {
      setState(() {
        _result = response.body;
      });
    } else {
      setState(() {
        _result = 'Error: ${response.statusCode}';
      });
    }
  }

  Future<void> _pickImage() async {
    final XFile? file = await _picker.pickImage(source: ImageSource.camera);
    if (file != null) {
      await _sendFrameToBackend(file);
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Pose Estimation')),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            ElevatedButton(
              onPressed: _pickImage,
              child: Text('Capture Frame'),
            ),
            SizedBox(height: 20),
            Text(_result),
          ],
        ),
      ),
    );
  }
}

This example demonstrates how to capture an image in Flutter, send it to a Python backend for pose estimation, and display the results. You can extend this to handle video streams and more complex interactions.

If you encounter any issues or need further assistance, please ensure you are using the latest versions of torch and ultralytics. If the problem persists, provide a minimum reproducible code example so we can investigate further. You can find more details on creating a reproducible example here.

We appreciate your enthusiasm and look forward to seeing what you create with YOLO! πŸš€

yassine-kassis commented 2 weeks ago

Thank you for your answers! I just wanted to know if it's possible to implement it like the object detection and classification that we have right now in this package?

TAYAB009 commented 2 weeks ago

I'm not sure which package you are talking about; however, if you mean this package (ultralytics_yolo: ^0.0.3), it does not support pose estimation.

yassine-kassis commented 2 weeks ago

Yes, I know it does not support pose estimation; I was wondering whether it will support this feature soon.

pderrenger commented 2 weeks ago

Hello @yassine-kassis,

Thank you for your question and for your interest in pose estimation with Ultralytics YOLO! 😊

Currently, the ultralytics_yolo package version ^0.0.3 does not support pose estimation. However, we understand the importance and potential of this feature, especially for applications like gym exercise body tracking.

While we don't have a specific timeline for adding pose estimation to the ultralytics_yolo package, it's definitely on our radar. We continuously prioritize features based on community feedback and demand, so your input is invaluable.

In the meantime, you can explore using the YOLOv8 models directly in Python for pose estimation, as described in the previous examples. This approach allows you to leverage the powerful capabilities of YOLO for your specific application.
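
For reference, a minimal pose-estimation run in Python looks like this (the image path is a placeholder):

from ultralytics import YOLO

# Load a pretrained pose model and run inference on a single image
model = YOLO("yolov8n-pose.pt")
results = model("path/to/image.jpg")

for result in results:
    print(result.keypoints.xy)  # keypoint coordinates per person, shape (num_people, 17, 2)
    result.show()  # display the annotated image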

If you have any further questions or need assistance with the current capabilities of YOLO, please feel free to ask. Your contributions and feedback help us improve and innovate.

Thank you for being a part of the YOLO community! πŸš€

huats commented 2 weeks ago

@pderrenger Since I am also interested in the pose integration, I'll allow myself to jump in. Yes, the integration of Python with Flutter as described here is interesting, but it does not cover all use cases. Is there a place where we can vote for expected features?

pderrenger commented 2 weeks ago

Hello @huats,

Thank you for jumping in and sharing your interest in pose estimation integration! 😊 We appreciate your enthusiasm and feedback.

Currently, we do not have a formal voting system for feature requests. However, we highly value community input, and your comments here are instrumental in helping us prioritize our development roadmap. We encourage you to continue sharing your thoughts and suggestions on GitHub issues and discussions.

If you have specific use cases or requirements that you believe are critical, please feel free to detail them here. This information can help us better understand the needs of our users and guide our future updates.

In the meantime, if you encounter any issues or have further questions, please ensure you are using the latest versions of torch and ultralytics. If you experience any bugs, providing a minimum reproducible code example will greatly assist us in investigating and resolving the issue. You can find more details on creating a reproducible example here.

Thank you for being an active member of the YOLO community! πŸš€