aamir0011 opened 2 weeks ago
👋 Hello @aamir0011, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.
Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.
pip install ultralytics
YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@aamir0011 hello! It's great to hear you've successfully deployed YOLOv8 for object detection on Android. 😊 For object tracking, you can consider exporting the YOLO model to ONNX and running it with an Android-compatible inference framework such as OpenCV DNN, which supports ONNX models. Here's a simplified guide:
Export YOLOv8 model to ONNX format:
yolo export model=yolov8n.pt format=onnx
Integrate the ONNX model with OpenCV in your Android app. You'll need the OpenCV Android SDK installed in your project. Here's a rough snippet showing how you might load the model and run inference:
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

// Load the ONNX model using OpenCV DNN
String modelPath = "path_to_your_model.onnx";
Net net = Dnn.readNetFromONNX(modelPath);

// Assuming 'frame' is your input image (a BGR Mat from the camera)
// Normalize pixel values to [0, 1], resize to 640x640, and swap R and B channels
Mat blob = Dnn.blobFromImage(frame, 1.0 / 255, new Size(640, 640), new Scalar(0, 0, 0), true, false);
net.setInput(blob);

// Forward pass; 'detections' holds the raw model output
Mat detections = net.forward();
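The raw output then needs decoding before you can draw boxes or track anything. A hedged sketch of that step, written in Python/NumPy for brevity and assuming the usual YOLOv8 detection export layout of `(1, 4 + num_classes, num_anchors)` (e.g. `(1, 84, 8400)` for a COCO model at 640x640); the function name here is just illustrative:

```python
import numpy as np

def decode_output(output, conf_thres=0.25):
    # output: (1, 4 + num_classes, num_anchors); each anchor column is
    # (cx, cy, w, h) followed by per-class scores
    preds = output[0].T                 # -> (num_anchors, 4 + num_classes)
    boxes = preds[:, :4]
    scores = preds[:, 4:]
    class_ids = scores.argmax(axis=1)
    confidences = scores.max(axis=1)
    keep = confidences > conf_thres     # drop low-confidence anchors
    # Convert (cx, cy, w, h) -> (x1, y1, x2, y2) for NMS and tracking
    cx, cy, w, h = boxes[keep].T
    xyxy = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return xyxy, confidences[keep], class_ids[keep]
```

You would still apply non-maximum suppression afterwards (in the Java app, OpenCV's Dnn.NMSBoxes can do this) before feeding the surviving boxes to a tracker.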
For tracking, post-process the detection outputs to maintain object identities across frames. You might need to implement or integrate a lightweight tracking algorithm suitable for mobile devices, such as SORT or a simple centroid-based tracker. Due to the complexity and additional computation, consider the performance and battery impact on the device.
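The centroid-based idea above can be sketched as follows. This is a minimal illustration in Python, not a production tracker; all class and parameter names are hypothetical, and a real mobile implementation (e.g. a SORT port in Java/Kotlin) would add a Kalman filter and keep lost tracks alive for a few frames:

```python
import math
from itertools import count

class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = count()      # monotonically increasing track IDs
        self.objects = {}           # track id -> (cx, cy) from the last frame
        self.max_distance = max_distance

    def update(self, boxes):
        # boxes: list of (x1, y1, x2, y2) detections for the current frame
        centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
        assigned = {}
        unmatched = set(range(len(centroids)))
        # Greedily match each existing track to its nearest new centroid
        for obj_id, (ox, oy) in self.objects.items():
            best, best_d = None, self.max_distance
            for i in unmatched:
                d = math.hypot(centroids[i][0] - ox, centroids[i][1] - oy)
                if d < best_d:
                    best, best_d = i, d
            if best is not None:
                assigned[obj_id] = centroids[best]
                unmatched.discard(best)
        # Unmatched detections start new tracks; tracks with no match this
        # frame are dropped immediately (a real tracker would be more lenient)
        for i in unmatched:
            assigned[next(self.next_id)] = centroids[i]
        self.objects = assigned
        return assigned             # id -> centroid for this frame
```

Calling update() once per frame with the post-NMS boxes keeps IDs stable as long as each object moves less than max_distance pixels between frames, which is the key tuning knob for frame rate and object speed.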
While this outline simplifies the concept, the actual implementation may require additional steps and performance optimizations, especially for real-time applications on Android devices.
Feel free to explore more and adjust based on your specific needs. Good luck with your project!
Search before asking
Question
I want to do object detection and tracking on Android. I am successful with object detection, but I don't know how to do tracking. Please guide me. Thanks.
Additional
No response