ultralytics / ultralytics

NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Use the track method in TensorRT #10147

Closed JustChenk closed 3 days ago

JustChenk commented 1 month ago

Search before asking

Description

How can I directly call the MOT algorithm on a TensorRT-format model and get the tracking results, as with the PyTorch model, e.g. model.track(image)?

I don't know if ultralytics supports this feature.


Use case

No response

Additional

No response

Are you willing to submit a PR?

glenn-jocher commented 1 month ago

Hello! Thanks for reaching out with your question about using the track method with a TensorRT model in YOLOv8. 🚀

Currently, direct support for multi-object tracking (MOT) with TensorRT models using the track method, similar to the ease of PyTorch (model.track(image)), is not inherently available within Ultralytics YOLOv8. While the tracking functionality is available for PyTorch models, additional steps are typically required for TensorRT due to the need for post-processing the output tensors specifically for tracking purposes.

To incorporate MOT with TensorRT, you would generally need to process the output of the TensorRT model separately to apply tracking. This involves:

  1. Running inference with the TensorRT model.
  2. Extracting detected objects' bounding boxes, classes, and scores from the output tensors.
  3. Feeding these detections to a tracking algorithm (e.g., SORT, DeepSORT, etc.) implemented in Python or another language compatible with your application.
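The three steps above might be sketched as follows. This is only an illustrative outline, not an Ultralytics API: the TensorRT inference call is stubbed out with a NumPy array, the output layout `[x1, y1, x2, y2, score, class]` is an assumption about your particular export, and the tracker is a minimal greedy IoU matcher standing in for a real SORT/DeepSORT implementation:

```python
import numpy as np

def decode_detections(output, conf_thresh=0.25):
    """Step 2 sketch: filter a raw (N, 6) array of
    [x1, y1, x2, y2, score, class] rows by confidence.
    The (N, 6) layout is an assumption about your export."""
    dets = output[output[:, 4] >= conf_thresh]
    return dets[:, :4], dets[:, 4], dets[:, 5].astype(int)

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class GreedyIoUTracker:
    """Step 3 sketch: a toy tracker that greedily matches each new box
    to the previous frame's box with highest IoU. Real SORT adds a
    Kalman motion model and Hungarian assignment."""
    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}   # track_id -> last seen box
        self.next_id = 0

    def update(self, boxes):
        assigned = {}
        unmatched = list(self.tracks.items())
        for box in boxes:
            best_id, best_iou = None, self.iou_thresh
            for tid, prev in unmatched:
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = self.next_id   # no match: start a new track
                self.next_id += 1
            else:
                unmatched = [(t, b) for t, b in unmatched if t != best_id]
            assigned[best_id] = box
        self.tracks = assigned
        return assigned   # track_id -> box for this frame

# Step 1 would be your TensorRT inference; a dummy output stands in here.
raw = np.array([[0.0, 0.0, 10.0, 10.0, 0.9, 2.0],
                [0.0, 0.0,  5.0,  5.0, 0.1, 1.0]])
boxes, scores, classes = decode_detections(raw)
tracker = GreedyIoUTracker()
ids = tracker.update(boxes.tolist())
```

The key design point is that the tracker only consumes plain boxes/scores/classes, so it stays independent of whether the detections came from PyTorch, ONNX Runtime, or a TensorRT engine.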

While direct feature support for this workflow within Ultralytics YOLOv8 might not be available, your interest and willingness to contribute via a PR are greatly appreciated! This could be a valuable addition to the project.

For specific implementation or starting points, I'd recommend exploring external MOT implementations that can process detections and adapt them to work with TensorRT outputs. If you develop a solution and would like to contribute, please feel free to open a PR or share your approach in the issues section for further discussion.

Thanks for contributing to the YOLOv8 community, and please don't hesitate to share your progress or ask for assistance as you work on integrating this feature! 🌟

github-actions[bot] commented 2 weeks ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐